
Technische Universität Braunschweig
CSE Computational Sciences in Engineering
An International, Interdisciplinary, and Bilingual Master of Science Programme
Introduction to
Continuum Mechanics

Vector and Tensor Calculus


Winter Semester 2002 / 2003
Franz-Joseph Barthold¹ and Jörg Stieghan²

22nd October 2003

¹ Tel. ++49-(0)531-391-2240, Fax ++49-(0)531-391-2242, email fj.barthold@tu-bs.de
² Tel. ++49-(0)531-391-2247, Fax ++49-(0)531-391-2242, email j.stieghan@tu-bs.de
Editor
Prof. Dr.-Ing. Franz-Joseph Barthold, M.Sc.

Organisation and Administration
Dipl.-Ing. Jörg Stieghan, SFI
CSE Computational Sciences in Engineering
Technische Universität Braunschweig
Bültenweg 17, 38106 Braunschweig
Tel. ++49-(0)531-391-2247
Fax ++49-(0)531-391-2242
email j.stieghan@tu-bs.de
© 2000 Prof. Dr.-Ing. Franz-Joseph Barthold, M.Sc.
and
Dipl.-Ing. Jörg Stieghan, SFI
CSE Computational Sciences in Engineering
Technische Universität Braunschweig
Bültenweg 17, 38106 Braunschweig

All rights reserved, in particular the right of translation into foreign languages. Without the
authors' permission it is not allowed to reproduce this booklet, in whole or in part, by
photomechanical means (photocopying, microcopying) or to store it in electronic media.
Abstract
Zusammenfassung
Preface
Braunschweig, 22nd October 2003                    Franz-Joseph Barthold and Jörg Stieghan
Contents
Contents VII
List of Figures IX
List of Tables XI
1 Introduction 1
2 Basics on Linear Algebra 3
2.1 Sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2.2 Mappings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.3 Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.4 Linear Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.5 Metric Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.6 Normed Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.7 Inner Product Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
2.8 Affine Vector Space and the Euclidean Vector Space . . . . . . . . . . . . . . . . 28
2.9 Linear Mappings and the Vector Space of Linear Mappings . . . . . . . . . . . . 32
2.10 Linear Forms and Dual Vector Spaces . . . . . . . . . . . . . . . . . . . . . . . 36
3 Matrix Calculus 37
3.1 Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
3.2 Some Basic Identities of Matrix Calculus . . . . . . . . . . . . . . . . . . . . . 42
3.3 Inverse of a Square Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
3.4 Linear Mappings of an Affine Vector Space . . . . . . . . . . . . . . . . . . . . . 54
3.5 Quadratic Forms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
3.6 Matrix Eigenvalue Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
4 Vector and Tensor Algebra 75
4.1 Index Notation and Basis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
4.2 Products of Vectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
4.3 Tensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
4.4 Transformations and Products of Tensors . . . . . . . . . . . . . . . . . . . . . . 101
4.5 Special Tensors and Operators . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
4.6 The Principal Axes of a Tensor . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
4.7 Higher Order Tensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
5 Vector and Tensor Analysis 131
5.1 Vector and Tensor Derivatives . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
5.2 Derivatives and Operators of Fields . . . . . . . . . . . . . . . . . . . . . . . . . 143
5.3 Integral Theorems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
6 Exercises 159
6.1 Application of Matrix Calculus on Bars and Plane Trusses . . . . . . . . . . . . 162
6.2 Calculating a Structure with the Eigenvalue Problem . . . . . . . . . . . . . . . 174
6.3 Fundamentals of Tensors in Index Notation . . . . . . . . . . . . . . . . . . . . 182
6.4 Various Products of Second Order Tensors . . . . . . . . . . . . . . . . . . . . . 190
6.5 Deformation Mappings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
6.6 The Moving Trihedron, Derivatives and Space Curves . . . . . . . . . . . . . . . 198
6.7 Tensors, Stresses and Cylindrical Coordinates . . . . . . . . . . . . . . . . . . . 210
A Formulary 227
A.1 Formulary Tensor Algebra . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
A.2 Formulary Tensor Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
B Nomenclature 237
References 239
Glossary English-German 241
Glossary German-English 257
Index 273
TU Braunschweig, CSE Vector and Tensor Calculus 22nd October 2003
List of Figures
2.1 Triangle inequality. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.2 Hölder sum inequality. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.3 Vector space R² . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
2.4 Affine vector space R²_affine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
2.5 The scalar product in a 2-dimensional Euclidean vector space. . . . . . . . . . . . 30
3.1 Matrix multiplication. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
3.2 Matrix multiplication for a composition of matrices. . . . . . . . . . . . . . . . . 55
3.3 Orthogonal transformation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
4.1 Example of co- and contravariant base vectors in E² . . . . . . . . . . . . . . . . 81
4.2 Special case of a Cartesian basis. . . . . . . . . . . . . . . . . . . . . . . . . . . 82
4.3 Projection of a vector v on the direction of the vector u. . . . . . . . . . . . . . . 86
4.4 Resulting stress vector. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
4.5 Resulting stress vector. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
4.6 The polar decomposition. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
4.7 An example of the physical components of a second order tensor. . . . . . . . . . 119
4.8 Principal axis problem with Cartesian coordinates. . . . . . . . . . . . . . . . . 120
5.1 The tangent vector in a point P on a space curve. . . . . . . . . . . . . . . . . . 136
5.2 The moving trihedron. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
5.3 The covariant base vectors of a curved surface. . . . . . . . . . . . . . . . . . . 138
5.4 Curvilinear coordinates in a Cartesian coordinate system. . . . . . . . . . . . . . 140
5.5 The natural basis of a curvilinear coordinate system. . . . . . . . . . . . . . . . . 141
5.6 The volume element dV with the surface dA. . . . . . . . . . . . . . . . . . . . 152
5.7 The volume, the surface and the subvolumes of a body. . . . . . . . . . . . . . . . 154
6.1 A simple statically determinate plane truss. . . . . . . . . . . . . . . . . . . . . 162
6.2 Free-body diagram for the node 2. . . . . . . . . . . . . . . . . . . . . . . . . . 162
6.3 Free-body diagrams for the nodes 1 and 3. . . . . . . . . . . . . . . . . . . . . . 163
6.4 A simple statically indeterminate plane truss. . . . . . . . . . . . . . . . . . . . 164
6.5 Free-body diagrams for the nodes 2 and 4. . . . . . . . . . . . . . . . . . . . . . 165
6.6 An arbitrary bar and its local coordinate system x, y. . . . . . . . . . . . . . . . 166
6.7 An arbitrary bar in a global coordinate system. . . . . . . . . . . . . . . . . . . . 167
6.8 The given structure of rigid bars. . . . . . . . . . . . . . . . . . . . . . . . . . . 174
6.9 The free-body diagrams of the subsystems left of node C, and right of node D
after the excursion. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
6.10 The free-body diagram of the complete structure after the excursion. . . . . . . . 176
6.11 Matrix multiplication. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
6.12 Example of co- and contravariant base vectors in E² . . . . . . . . . . . . . . . . 184
6.13 The given spiral staircase. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
6.14 The winding up of the given spiral staircase. . . . . . . . . . . . . . . . . . . . . 199
6.15 An arbitrary line element with the forces, and moments in its sectional areas. . . 204
6.16 The free-body diagram of the loaded spiral staircase. . . . . . . . . . . . . . . . 207
6.17 The given cylindrical shell. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
List of Tables
2.1 Compatibility of norms. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
Chapter 1
Introduction
Chapter 2
Basics on Linear Algebra
For example, about vector spaces see HALMOS [6], and ABRAHAM, MARSDEN, and RATIU [1];
in German, DE BOER [3], and STEIN ET AL. [13].
In German, about linear algebra, see JÄNICH [8], FISCHER [4], FISCHER [9], and BEUTELSPACHER [2].
Chapter Table of Contents
2.1 Sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2.1.1 Denotations and Symbols of Sets . . . . . . . . . . . . . . . . . . . . 6
2.1.2 Subset, Superset, Union and Intersection . . . . . . . . . . . . . . . . 7
2.1.3 Examples of Sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.2 Mappings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.2.1 Definition of a Mapping . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.2.2 Injective, Surjective and Bijective . . . . . . . . . . . . . . . . . . . . 8
2.2.3 Definition of an Operation . . . . . . . . . . . . . . . . . . . . . . . 9
2.2.4 Examples of Operations . . . . . . . . . . . . . . . . . . . . . . . . 9
2.2.5 Counter-Examples of Operations . . . . . . . . . . . . . . . . . . . . 9
2.3 Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.3.1 Definition of a Field . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.3.2 Examples of Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.3.3 Counter-Examples of Fields . . . . . . . . . . . . . . . . . . . . . . . 11
2.4 Linear Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.4.1 Definition of a Linear Space . . . . . . . . . . . . . . . . . . . . . . 12
2.4.2 Examples of Linear Spaces . . . . . . . . . . . . . . . . . . . . . . . . 14
2.4.3 Linear Subspace and Linear Manifold . . . . . . . . . . . . . . . . . . 15
2.4.4 Linear Combination and Span of a Subspace . . . . . . . . . . . . . . 15
2.4.5 Linear Independence . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.4.6 A Basis of a Vector Space . . . . . . . . . . . . . . . . . . . . . . . . 15
2.5 Metric Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.5.1 Definition of a Metric . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.5.2 Examples of Metrics . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.5.3 Definition of a Metric Space . . . . . . . . . . . . . . . . . . . . . . 17
2.5.4 Examples of a Metric Space . . . . . . . . . . . . . . . . . . . . . . 17
2.6 Normed Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.6.1 Definition of a Norm . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.6.2 Definition of a Normed Space . . . . . . . . . . . . . . . . . . . . . 18
2.6.3 Examples of Vector Norms and Normed Vector Spaces . . . . . . . . 18
2.6.4 Hölder Sum Inequality and Cauchy's Inequality . . . . . . . . . . . . 20
2.6.5 Matrix Norms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.6.6 Compatibility of Vector and Matrix Norms . . . . . . . . . . . . . . . 22
2.6.7 Vector and Matrix Norms in Eigenvalue Problems . . . . . . . . . . . . 22
2.6.8 Linear Dependence and Independence . . . . . . . . . . . . . . . . . . 23
2.7 Inner Product Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
2.7.1 Definition of a Scalar Product . . . . . . . . . . . . . . . . . . . . . 25
2.7.2 Examples of Scalar Products . . . . . . . . . . . . . . . . . . . . . . 25
2.7.3 Definition of an Inner Product Space . . . . . . . . . . . . . . . . . . 26
2.7.4 Examples of Inner Product Spaces . . . . . . . . . . . . . . . . . . . 26
2.7.5 Unitary Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
2.8 Affine Vector Space and the Euclidean Vector Space . . . . . . . . . . . . . 28
2.8.1 Definition of an Affine Vector Space . . . . . . . . . . . . . . . . . . 28
2.8.2 The Euclidean Vector Space . . . . . . . . . . . . . . . . . . . . . . . 29
2.8.3 Linear Independence, and a Basis of the Euclidean Vector Space . . . . 30
2.9 Linear Mappings and the Vector Space of Linear Mappings . . . . . . . . . 32
2.9.1 Definition of a Linear Mapping . . . . . . . . . . . . . . . . . . . . 32
2.9.2 The Vector Space of Linear Mappings . . . . . . . . . . . . . . . . . . 32
2.9.3 The Basis of the Vector Space of Linear Mappings . . . . . . . . . . . 33
2.9.4 Definition of a Composition of Linear Mappings . . . . . . . . . . . 34
2.9.5 The Attributes of a Linear Mapping . . . . . . . . . . . . . . . . . . . 34
2.9.6 The Representation of a Linear Mapping by a Matrix . . . . . . . . . . 35
2.9.7 The Isomorphism of Vector Spaces . . . . . . . . . . . . . . . . . . . 35
2.10 Linear Forms and Dual Vector Spaces . . . . . . . . . . . . . . . . . . . . . 36
2.10.1 Definition of Linear Forms and Dual Vector Spaces . . . . . . . . . . 36
2.10.2 A Basis of the Dual Vector Space . . . . . . . . . . . . . . . . . . . . 36
2.1 Sets
2.1.1 Denotations and Symbols of Sets
A set M is a finite or infinite collection of objects, so-called elements, in which order has no
significance, and multiplicity is generally also ignored. Set theory was originally founded by
Cantor¹. In advance the meanings of some often used symbols and denotations are given below:

m₁ ∈ M : m₁ is an element of the set M.

m₂ ∉ M : m₂ is not an element of the set M.

{. . .} : The term(s) or element(s) included in this type of brackets describe a set.

{. . . | . . .} : The terms on the left-hand side of the vertical bar are the elements of the
given set, and the terms on the right-hand side of the bar describe the characteristics of the
elements included in this set.

∨ : An "OR"-combination of two terms or elements.

∧ : An "AND"-combination of two terms or elements.

∀ : The following condition(s) should hold for all mentioned elements.

⇒ : This arrow means that the term on the left-hand side implies the term on the right-hand
side.
Sets could be given by . . .

• an enumeration of its elements, e.g.

    M₁ = {1, 2, 3} . (2.1.1)

  The set M₁ consists of the elements 1, 2, 3,

    N = {1, 2, 3, . . .} . (2.1.2)

  The set N includes all integers larger than or equal to one, and is also called the set of natural
  numbers.

• the description of the attributes of its elements, e.g.

    M₂ = {m | (m ∈ M₁) ∨ (−m ∈ M₁)} (2.1.3)
       = {1, 2, 3, −1, −2, −3} .

  The set M₂ includes all elements m with the attribute that m is an element of the set M₁, or
  that −m is an element of the set M₁. And in this example these elements are just 1, 2, 3 and
  −1, −2, −3.
¹Georg Cantor (1845-1918)
2.1.2 Subset, Superset, Union and Intersection
A set A is called a subset of B, if and only if², every element of A is also included in B,

    A ⊆ B ⇔ (a ∈ A ⇒ a ∈ B) . (2.1.4)

The set B is called the superset of A,

    B ⊇ A . (2.1.5)

The union C of two sets A and B is the set of all elements that are an element of at least one of
the sets A and B,

    C = A ∪ B = {c | (c ∈ A) ∨ (c ∈ B)} . (2.1.6)

The intersection C of two sets A and B is the set of all elements common to the sets A and B,

    C = A ∩ B = {c | (c ∈ A) ∧ (c ∈ B)} . (2.1.7)
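These set operations can be tried out directly, e.g. in Python, whose built-in set type implements the subset test, union and intersection. This is only an illustrative sketch, not part of the original notes:

```python
A = {1, 2, 3}
B = {1, 2, 3, 4, 5}

# Subset and superset, (2.1.4)-(2.1.5): every element of A is also in B.
assert A <= B            # A is a subset of B
assert B >= A            # B is the superset of A

# Union (2.1.6): elements belonging to at least one of the sets.
assert A | B == {1, 2, 3, 4, 5}

# Intersection (2.1.7): elements common to both sets.
assert A & B == {1, 2, 3}
```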
2.1.3 Examples of Sets
Example: The empty set. The empty set contains no elements and is denoted by

    ∅ = {} . (2.1.8)

Example: The set of natural numbers. The set of natural numbers, or just the naturals, N,
sometimes also the whole numbers, is defined by

    N = {1, 2, 3, . . .} . (2.1.9)

Unfortunately, zero "0" is sometimes also included in the list of natural numbers; then the set
is denoted N₀ and given by

    N₀ = {0, 1, 2, 3, . . .} . (2.1.10)

Example: The set of integers. The set of the integers Z is given by

    Z = {z | (z = 0) ∨ (z ∈ N) ∨ (−z ∈ N)} . (2.1.11)

Example: The set of rational numbers. The set of rational numbers Q is described by

    Q = { z/n | (z ∈ Z) ∧ (n ∈ N) } . (2.1.12)

Example: The set of real numbers. The set of real numbers is defined by

    R = {. . .} . (2.1.13)

Example: The set of complex numbers. The set of complex numbers is given by

    C = { α + βi | (α, β ∈ R) ∧ (i = √−1) } . (2.1.14)
²The expression "if and only if" is often abbreviated with "iff".
2.2 Mappings
2.2.1 Definition of a Mapping
Let A and B be sets. Then a mapping, or just a map, of A on B is a function f that assigns to
every a ∈ A one unique f(a) ∈ B,

    f : A → B , a ↦ f(a) . (2.2.1)

The set A is called the domain of the function f, and the set B the range of the function f.
2.2.2 Injective, Surjective and Bijective
Let V and W be non-empty sets. A mapping f between the two sets V and W assigns
to every x ∈ V a unique y ∈ W, which is also denoted by f(x) and is called the image of x
(under f). The set V is the domain, and W is the range, also called the image set, of f. The usual
notation of a mapping (represented by its three parts: the rule of assignment f, the domain V
and the range W) is given by

    f : V → W   or   f : V → W , x ↦ f(x) . (2.2.2)

For every mapping f : V → W with the subsets A ⊆ V and B ⊆ W the following definitions
hold,

    f(A) := {f(x) ∈ W : x ∈ A}   the image of A, and (2.2.3)
    f⁻¹(B) := {x ∈ V : f(x) ∈ B}   the inverse image of B. (2.2.4)

With this the following identities hold:

    f is called surjective, if and only if f(V) = W, (2.2.5)
    f is called injective, iff every f(x) = f(y) implies x = y, and (2.2.6)
    f is called bijective, iff f is surjective and injective. (2.2.7)

For every injective mapping f : V → W there exists an inverse

    f⁻¹ : f(V) → V , f(x) ↦ x , (2.2.8)

and the compositions of f and its inverse are defined by

    f⁻¹ ∘ f = id_V ; f ∘ f⁻¹ = id_W . (2.2.9)
The mappings id_V : V → V and id_W : W → W are the identity mappings in V and W, i.e.

    id_V(x) = x ∀ x ∈ V ; id_W(y) = y ∀ y ∈ W . (2.2.10)

Furthermore f must be surjective in order to extend the existence of this mapping f⁻¹ from
f(V) ⊆ W to the whole set W. Then f : V → W is bijective, if and only if a mapping
g : W → V with g ∘ f = id_V and f ∘ g = id_W exists. In this case g = f⁻¹ is the inverse.
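For finite sets the conditions (2.2.5)-(2.2.7) can be checked mechanically. The following sketch (not part of the original notes) represents a mapping between finite sets as a Python dict; the helper names are of course our own invention:

```python
def is_injective(f):
    # f(x) = f(y) must imply x = y, i.e. no image value may appear twice.
    values = list(f.values())
    return len(values) == len(set(values))

def is_surjective(f, W):
    # every element of the range W must be hit, i.e. f(V) = W.
    return set(f.values()) == set(W)

def inverse(f):
    # for an injective f, the inverse maps every image f(x) back to x, (2.2.8)
    assert is_injective(f)
    return {y: x for x, y in f.items()}

f = {1: "a", 2: "b", 3: "c"}
W = {"a", "b", "c"}
assert is_injective(f) and is_surjective(f, W)   # f is bijective
assert inverse(f)["b"] == 2                      # f⁻¹(f(2)) = 2
```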
2.2.3 Definition of an Operation

An operation or a combination, symbolized by ∘, over a set M is a mapping that maps two
arbitrary elements of M onto one element of M,

    ∘ : M × M → M , (m, n) ↦ m ∘ n . (2.2.11)
2.2.4 Examples of Operations

Example: The addition of natural numbers. The addition over the natural numbers N is an
operation, because for every m ∈ N and every n ∈ N the sum (m + n) ∈ N is again a natural
number.

Example: The subtraction of integers. The subtraction over the integers Z is an operation,
because for every a ∈ Z and every b ∈ Z the difference (a − b) ∈ Z is again an integer.

Example: The addition of continuous functions. Let Cᵏ be the set of the k-times continuously
differentiable functions. The addition over Cᵏ is an operation, because for every function
f(x) ∈ Cᵏ and every function g(x) ∈ Cᵏ the sum (f + g)(x) = f(x) + g(x) is again a k-times
continuously differentiable function.
2.2.5 Counter-Examples of Operations

Counter-Example: The subtraction of natural numbers. The subtraction over the natural
numbers N is not an operation, because there exist numbers a ∈ N and b ∈ N with a difference
(a − b) ∉ N, e.g. the difference 3 − 7 = −4 ∉ N.

Counter-Example: The scalar multiplication of an n-tuple. The scalar multiplication of an
n-tuple of real numbers in Rⁿ with a scalar quantity a ∈ R is not an operation, because it does not
map two elements of Rⁿ onto another element of the same space, but one element of R and one
element of Rⁿ.

Counter-Example: The scalar product of two n-tuples. The scalar product of two n-tuples
in Rⁿ is not an operation, because it does not map an element of Rⁿ onto an element of Rⁿ, but
onto an element of R.
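The closure property that separates the examples from the counter-examples can be probed numerically. A small sketch (not from the original notes), using a finite stand-in for N since the full set is infinite:

```python
# A finite stand-in for the set of natural numbers N = {1, 2, 3, ...}.
N = set(range(1, 100))
pairs = [(m, n) for m in range(1, 10) for n in range(1, 10)]

# The addition over N is an operation: every sum m + n lands in N again.
assert all((m + n) in N for m, n in pairs)

# The subtraction over N is NOT an operation: some differences leave N.
assert not all((m - n) in N for m, n in pairs)
assert (3 - 7) not in N    # 3 - 7 = -4 is not a natural number
```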
2.3 Fields
2.3.1 Definition of a Field

A field F is defined as a set with an operation addition a + b and an operation multiplication ab
for all a, b ∈ F. To every pair of scalars a and b there corresponds a scalar a + b, called the sum,
in such a way that:

1. Axiom of Fields. The addition is associative,

    a + (b + c) = (a + b) + c ∀ a, b, c ∈ F . (F1)

2. Axiom of Fields. The addition is commutative,

    a + b = b + a ∀ a, b ∈ F . (F2)

3. Axiom of Fields. There exists a unique scalar 0 ∈ F, called zero or the identity element with
respect to³ the addition of the field F, such that the additive identity is given by

    a + 0 = a = 0 + a ∀ a ∈ F . (F3)

4. Axiom of Fields. To every scalar a ∈ F there corresponds a unique scalar −a, called the
inverse w.r.t. the addition, or additive inverse, such that

    a + (−a) = 0 ∀ a ∈ F . (F4)

To every pair of scalars a and b there corresponds a scalar ab, called the product of a and b, in
such a way that:

5. Axiom of Fields. The multiplication is associative,

    a(bc) = (ab)c ∀ a, b, c ∈ F . (F5)

6. Axiom of Fields. The multiplication is commutative,

    ab = ba ∀ a, b ∈ F . (F6)

7. Axiom of Fields. There exists a unique non-zero scalar 1 ∈ F, called one or the identity
element w.r.t. the multiplication of the field F, such that the multiplicative identity is given
by

    a1 = a = 1a ∀ a ∈ F . (F7)

8. Axiom of Fields. To every non-zero scalar a ∈ F there corresponds a unique scalar a⁻¹, or
1/a, called the inverse w.r.t. the multiplication, or the multiplicative inverse, such that

    a(a⁻¹) = 1 = a⁻¹a ∀ a ∈ F , a ≠ 0 . (F8)

9. Axiom of Fields. The multiplication is distributive w.r.t. the addition, such that the
distributive law is given by

    (a + b)c = ac + bc ∀ a, b, c ∈ F . (F9)

³The expression "with respect to" is often abbreviated with "w.r.t.".
2.3.2 Examples of Fields

Example: The rational numbers. The set Q of the rational numbers together with the
operations addition "+" and multiplication "·" describes a field.

Example: The real numbers. The set R of the real numbers together with the operations
addition "+" and multiplication "·" describes a field.

Example: The complex numbers. The set C of the complex numbers together with the
operations addition "+" and multiplication "·" describes a field.

2.3.3 Counter-Examples of Fields

Counter-Example: The natural numbers. The set N of the natural numbers together with the
operations addition "+" and multiplication "·" does not describe a field! One reason for this is
that there exists no inverse w.r.t. the addition in N.

Counter-Example: The integers. The set Z of the integers together with the operations
addition "+" and multiplication "·" does not describe a field! For example, there exists no inverse
w.r.t. the multiplication in Z, except for the elements 1 and −1.
2.4 Linear Spaces
2.4.1 Definition of a Linear Space

Let F be a field. A linear space, vector space or linear vector space V over the field F is a set
with an addition defined by

    + : V × V → V , (x, y) ↦ x + y ∀ x, y ∈ V , (2.4.1)

a scalar multiplication given by

    · : F × V → V , (α, x) ↦ αx ∀ α ∈ F ; ∀ x ∈ V , (2.4.2)

and satisfying the following axioms. The elements x, y etc. of V are called vectors. To every
pair of vectors x and y in the space V there corresponds a vector x + y, called the sum of x and
y, in such a way that:

1. Axiom of Linear Spaces. The addition is associative,

    x + (y + z) = (x + y) + z ∀ x, y, z ∈ V . (S1)

2. Axiom of Linear Spaces. The addition is commutative,

    x + y = y + x ∀ x, y ∈ V . (S2)

3. Axiom of Linear Spaces. There exists a unique vector 0 ∈ V, called zero vector or the
origin of the space V, such that

    x + 0 = x = 0 + x ∀ x ∈ V . (S3)

4. Axiom of Linear Spaces. To every vector x ∈ V there corresponds a unique vector −x,
called the additive inverse, such that

    x + (−x) = 0 ∀ x ∈ V . (S4)

To every pair α and x, where α is a scalar quantity and x a vector in V, there corresponds a
vector αx, called the product of α and x, in such a way that:

5. Axiom of Linear Spaces. The multiplication by scalar quantities is associative,

    α(βx) = (αβ)x ∀ α, β ∈ F ; ∀ x ∈ V . (S5)

6. Axiom of Linear Spaces. There exists a unique non-zero scalar 1 ∈ F, called identity or the
identity element w.r.t. the scalar multiplication on the space V, such that the scalar multiplicative
identity is given by

    x1 = x = 1x ∀ x ∈ V . (S6)
7. Axiom of Linear Spaces. The scalar multiplication is distributive w.r.t. the vector addition,
such that the distributive law is given by

    α(x + y) = αx + αy ∀ α ∈ F ; ∀ x, y ∈ V . (S7)

8. Axiom of Linear Spaces. The multiplication by a vector is distributive w.r.t. the scalar
addition, such that the distributive law is given by

    (α + β)x = αx + βx ∀ α, β ∈ F ; ∀ x ∈ V . (S8)

Some simple conclusions are given by

    0x = 0 ∀ x ∈ V ; 0 ∈ F , (2.4.3)
    (−1)x = −x ∀ x ∈ V ; 1 ∈ F , (2.4.4)
    α0 = 0 ∀ α ∈ F , (2.4.5)

and if

    αx = 0 , then α = 0 , or x = 0 . (2.4.6)

Remarks:

• Starting with the usual 3-dimensional vector space, these axioms describe a generalized
  definition of a vector space as a set of arbitrary elements x ∈ V. The classic example is
  the usual 3-dimensional Euclidean vector space E³ with the vectors x, y.

• The definition says nothing about the character of the elements x ∈ V of the vector space.

• The definition implies only the existence of an addition of two elements of V and the
  existence of a scalar multiplication, which both do not lead to results out of the vector
  space V, and that the axioms of vector spaces (S1)-(S8) hold.

• The definition only implies that the vector space V is a non-empty set, but says nothing
  about "how large" it is.

• Here F = R, i.e. only vector spaces over the field of real numbers R are examined; vector
  spaces over the field of complex numbers C are not considered.

• The dimension dim V of the vector space V should be finite, i.e. dim V = n for an arbitrary
  n ∈ N, the set of natural numbers.
2.4.2 Examples of Linear Spaces

Example: The space of n-tuples. The space Rⁿ of the dimension n with the usual addition

    x + y = [x₁ + y₁, . . . , xₙ + yₙ] ,

and the usual scalar multiplication

    αx = [αx₁, . . . , αxₙ] ,

is a linear space over the field R, denoted by

    Rⁿ = { x | x = (x₁, x₂, . . . , xₙ)ᵀ , x₁, x₂, . . . , xₙ ∈ R } , (2.4.7)

and with the elements x given by

    x = [x₁, x₂, . . . , xₙ]ᵀ ; x₁, x₂, . . . , xₙ ∈ R .

Example: The space of n × n-matrices. The space of square matrices Rⁿˣⁿ over the field R,
with the usual matrix addition and the usual multiplication of a matrix with a scalar quantity, is a
linear space over the field R, with the elements

    A = ⎡ a₁₁ a₁₂ · · · a₁ₙ ⎤
        ⎢ a₂₁ a₂₂ · · · a₂ₙ ⎥
        ⎢  ⋮    ⋮   ⋱    ⋮  ⎥
        ⎣ aₘ₁ aₘ₂ · · · aₘₙ ⎦ ; aᵢⱼ ∈ R , 1 ≤ i ≤ m , 1 ≤ j ≤ n , and i, j ∈ N . (2.4.8)

Example: The field. Every field F, with the definition of an addition of scalar quantities in the
field and a multiplication of the scalar quantities, i.e. a scalar product, in the field, is a linear
space over the field itself.

Example: The space of continuous functions. The space of continuous functions C(a, b) is
given by the open interval (a, b) or the closed interval [a, b] and the complex-valued functions
f(x) defined in this interval,

    C(a, b) = { f(x) | f is complex-valued and continuous in [a, b] } , (2.4.9)

with the addition and scalar multiplication given by

    (f + g)(x) = f(x) + g(x) ,
    (αf)(x) = αf(x) .
2.4.3 Linear Subspace and Linear Manifold

Let V be a linear space over the field F. A subset W ⊆ V is called a linear subspace or a linear
manifold of V, if the set is not empty, W ≠ ∅, and every linear combination is again a vector of
the linear subspace,

    ax + by ∈ W ∀ x, y ∈ W ; ∀ a, b ∈ F . (2.4.10)

2.4.4 Linear Combination and Span of a Subspace

Let V be a linear space over the field F with the vectors x₁, x₂, . . . , xₘ ∈ V. A vector v ∈ V
represented by the x₁, x₂, . . . , xₘ and some scalar quantities a₁, a₂, . . . , aₘ ∈ F as

    v = a₁x₁ + a₂x₂ + . . . + aₘxₘ (2.4.11)

is called a linear combination of the x₁, x₂, . . . , xₘ.
Furthermore let M = {x₁, x₂, . . . , xₘ} be a set of vectors. Then the set of all linear combinations
of the vectors x₁, x₂, . . . , xₘ is called the span span(M) of the set M and is defined by

    span(M) = { a₁x₁ + a₂x₂ + . . . + aₘxₘ | a₁, a₂, . . . , aₘ ∈ F } . (2.4.12)

2.4.5 Linear Independence

Let V be a linear space over the field F. The vectors x₁, x₂, . . . , xₙ ∈ V are called linearly
independent, if and only if

    ∑_{i=1}^{n} aᵢxᵢ = 0  ⇒  a₁ = a₂ = . . . = aₙ = 0 . (2.4.13)

In every other case the vectors are called linearly dependent.

2.4.6 A Basis of a Vector Space

A subset M = {x₁, x₂, . . . , xₘ} of a linear space or a vector space V over the field F is called a
basis of the vector space V, if the vectors x₁, x₂, . . . , xₘ are linearly independent and the span
equals the vector space,

    span(M) = V . (2.4.14)

Every vector x ∈ V then has a representation in terms of the base vectors eᵢ,

    x = ∑_{i=1}^{n} vᵢeᵢ . (2.4.15)
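Condition (2.4.13) can be checked numerically: the vectors are linearly independent exactly when the matrix having them as columns has full column rank. A sketch with NumPy (not part of the original notes):

```python
import numpy as np

def linearly_independent(vectors):
    # Stack the vectors as columns; they are linearly independent
    # iff the only solution of sum_i a_i x_i = 0 is a_1 = ... = a_n = 0,
    # i.e. iff the matrix has full column rank.
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A) == len(vectors)

e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
assert linearly_independent([e1, e2])            # a basis of R^2
assert not linearly_independent([e1, 2 * e1])    # 2*x1 + (-1)*(2*x1) = 0
```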
2.5 Metric Spaces
2.5.1 Definition of a Metric

A metric ρ in a linear space V over the field F is a mapping describing a "distance" between two
neighbouring points for a given set,

    ρ : V × V → F , (x, y) ↦ ρ(x, y) . (2.5.1)

The metric satisfies the following relations for all vectors x, y, z ∈ V:

1. Axiom of Metrics. The metric is positive,

    ρ(x, y) ≥ 0 ∀ x, y ∈ V . (M1)

2. Axiom of Metrics. The metric is definite,

    ρ(x, y) = 0 ⇔ x = y ∀ x, y ∈ V . (M2)

3. Axiom of Metrics. The metric is symmetric,

    ρ(x, y) = ρ(y, x) ∀ x, y ∈ V . (M3)

4. Axiom of Metrics. The metric satisfies the triangle inequality,

    ρ(x, z) ≤ ρ(x, y) + ρ(y, z) ∀ x, y, z ∈ V . (M4)

Figure 2.1: Triangle inequality.
2.5.2 Examples of Metrics

Example: The distance in the Euclidean space. For two vectors x = (x₁, x₂)ᵀ and
y = (y₁, y₂)ᵀ in the 2-dimensional Euclidean space E², the distance between these two vectors,
given by

    ρ(x, y) = √( (x₁ − y₁)² + (x₂ − y₂)² ) , (2.5.2)

is a metric.

Example: Discrete metric. The mapping, called the discrete metric,

    ρ(x, y) = 0, if x = y , and ρ(x, y) = 1, else , (2.5.3)

is a metric in every linear space.

Example: The metric.

    ρ(x, y) = xᵀAy . (2.5.4)

Example: The metric tensor.

2.5.3 Definition of a Metric Space

A vector space V with a metric ρ is called a metric space.

2.5.4 Examples of a Metric Space

Example: The field. The field of the complex numbers C is a metric space.

Example: The vector space. The vector space Rⁿ is a metric space, too.
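Both the Euclidean distance (2.5.2) and the discrete metric (2.5.3) can be tested against the axioms (M1)-(M4) on a handful of points. A small sketch (not from the original notes):

```python
import math
import itertools

def euclidean(x, y):
    # the distance (2.5.2) in the 2-dimensional Euclidean space
    return math.sqrt((x[0] - y[0]) ** 2 + (x[1] - y[1]) ** 2)

def discrete(x, y):
    # the discrete metric (2.5.3)
    return 0 if x == y else 1

points = [(0.0, 0.0), (1.0, 0.0), (3.0, 4.0)]
for rho in (euclidean, discrete):
    for x, y, z in itertools.product(points, repeat=3):
        assert rho(x, y) >= 0                        # (M1) positive
        assert (rho(x, y) == 0) == (x == y)          # (M2) definite
        assert rho(x, y) == rho(y, x)                # (M3) symmetric
        assert rho(x, z) <= rho(x, y) + rho(y, z)    # (M4) triangle inequality

assert euclidean((0.0, 0.0), (3.0, 4.0)) == 5.0
```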
2.6 Normed Spaces
2.6.1 Definition of a Norm
A norm ‖·‖ in a linear space V over the field F is a mapping

    ‖·‖ : V → ℝ ,   x ↦ ‖x‖ .                                        (2.6.1)

The norm satisfies the following relations for all vectors x, y ∈ V and every α ∈ F:

1. Axiom of Norms. The norm is positive,

    ‖x‖ ≥ 0   ∀ x ∈ V .                                              (N1)

2. Axiom of Norms. The norm is definite,

    ‖x‖ = 0  ⇔  x = 0   ∀ x ∈ V .                                    (N2)

3. Axiom of Norms. The norm is homogeneous,

    ‖α x‖ = |α| ‖x‖   ∀ α ∈ F ; ∀ x ∈ V .                            (N3)

4. Axiom of Norms. The norm satisfies the triangle inequality,

    ‖x + y‖ ≤ ‖x‖ + ‖y‖   ∀ x, y ∈ V .                               (N4)

Some simple conclusions are given by

    ‖−x‖ = ‖x‖ ,                                                     (2.6.2)
    | ‖x‖ − ‖y‖ | ≤ ‖x − y‖ .                                        (2.6.3)

2.6.2 Definition of a Normed Space
A linear space V with a norm ‖·‖ is called a normed space.
2.6.3 Examples of Vector Norms and Normed Vector Spaces
The norm of a vector x is written ‖x‖ and is called the vector norm. For a vector norm the
following conditions hold, see also (N1)-(N4),

    ‖x‖ > 0 , with x ≠ 0 ,                                           (2.6.4)

with a scalar quantity α,

    ‖α x‖ = |α| ‖x‖ , and α ∈ ℝ ,                                    (2.6.5)
and finally the triangle inequality,

    ‖x + y‖ ≤ ‖x‖ + ‖y‖ .                                            (2.6.6)

A vector norm is given in the most general case by

    ‖x‖_p = ( ∑_{i=1}^{n} |x_i|^p )^{1/p} .                          (2.6.7)

Example: The normed vector space. For the linear vector space ℝⁿ, with the zero vector 0,
there exists a large variety of norms, e.g. the l-infinity-norm, or maximum-norm,

    ‖x‖_∞ = max |x_i| , with 1 ≤ i ≤ n ,                             (2.6.8)

the l1-norm,

    ‖x‖₁ = ∑_{i=1}^{n} |x_i| ,                                       (2.6.9)

the L1-norm,

    ‖x‖ = ∫_Ω |x| dΩ ,                                               (2.6.10)

the l2-norm, or Euclidean norm,

    ‖x‖₂ = √( ∑_{i=1}^{n} |x_i|² ) ,                                 (2.6.11)

the L2-norm,

    ‖x‖ = √( ∫_Ω |x|² dΩ ) ,                                         (2.6.12)

and the p-norm,

    ‖x‖_p = ( ∑_{i=1}^{n} |x_i|^p )^{1/p} , with 1 ≤ p < ∞ .         (2.6.13)
The maximum-norm is obtained by determining the limit p → ∞. With

    z := max |x_i| , with i = 1, . . . , n ,

it follows that

    z^p ≤ ∑_{i=1}^{n} |x_i|^p ≤ n z^p ,

and finally the maximum-norm is enclosed by

    z ≤ ( ∑_{i=1}^{n} |x_i|^p )^{1/p} ≤ ᵖ√n · z ,                    (2.6.14)

so that the p-norm tends to z = ‖x‖_∞ for p → ∞.
Example: Simple Example with Numbers. The various norms of a vector x differ in most
general cases. For example, with the vector xᵀ = [1, 3, 4]:

    ‖x‖₁ = 8 ,
    ‖x‖₂ = √26 ≈ 5.1 ,
    ‖x‖_∞ = 4 .
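The numbers of this example can be reproduced in a few lines; a minimal sketch (the function
names are illustrative, not part of the text), which also shows the p-norm approaching the
maximum-norm for growing p as in (2.6.14):

```python
def norm_p(x, p):
    # p-norm, equation (2.6.13)
    return sum(abs(xi)**p for xi in x)**(1.0 / p)

def norm_inf(x):
    # maximum-norm, equation (2.6.8)
    return max(abs(xi) for xi in x)

x = [1, 3, 4]
print(norm_p(x, 1))        # 8.0
print(norm_p(x, 2))        # sqrt(26), approximately 5.1
print(norm_inf(x))         # 4
print(norm_p(x, 100))      # approximately 4.0, close to the maximum-norm
```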
2.6.4 Hölder Sum Inequality and Cauchy's Inequality
Let p and q be two scalar quantities, and the relationship between them is defined by

    1/p + 1/q = 1 , with p > 1 , q > 1 .                             (2.6.15)

In the first quadrant of a coordinate system the graph y = x^{p−1} and the straight lines x = α and
y = β with α > 0 and β > 0 are displayed. The area enclosed by these two straight lines, the
curve and the axes of the coordinate system is at least the area of the rectangle given by α β,

    α β ≤ α^p / p + β^q / q .                                        (2.6.16)

For the real or complex quantities x_j and y_j, which are not all equal to zero, the α and β could
be described by

    α = |x_j| / ( ∑_j |x_j|^p )^{1/p} , and β = |y_j| / ( ∑_j |y_j|^q )^{1/q} .   (2.6.17)

Inserting the relations of equations (2.6.17) in (2.6.16), and summing the terms with the index j,
implies

    ∑_j |x_j| |y_j| / [ ( ∑_j |x_j|^p )^{1/p} ( ∑_j |y_j|^q )^{1/q} ]
        ≤ ∑_j |x_j|^p / ( p ∑_j |x_j|^p ) + ∑_j |y_j|^q / ( q ∑_j |y_j|^q ) = 1 .   (2.6.18)
Figure 2.2: Hölder sum inequality.

The result is the so-called Hölder sum inequality,

    ∑_j |x_j y_j| ≤ ( ∑_j |x_j|^p )^{1/p} ( ∑_j |y_j|^q )^{1/q} .    (2.6.19)

For the special case with p = q = 2 the Hölder sum inequality, see equation (2.6.19), transforms
into Cauchy's inequality,

    ∑_j |x_j y_j| ≤ ( ∑_j |x_j|² )^{1/2} ( ∑_j |y_j|² )^{1/2} .      (2.6.20)
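The inequalities (2.6.19) and (2.6.20) can be checked numerically on sample vectors; a sketch
(the vectors and names are invented examples), with the conjugate exponent q computed from
(2.6.15):

```python
def holder_bound(x, y, p):
    # right-hand side of the Hoelder sum inequality (2.6.19)
    q = p / (p - 1.0)   # conjugate exponent: 1/p + 1/q = 1, equation (2.6.15)
    return (sum(abs(a)**p for a in x)**(1.0 / p)
            * sum(abs(b)**q for b in y)**(1.0 / q))

def abs_dot(x, y):
    # left-hand side of (2.6.19)
    return sum(abs(a * b) for a, b in zip(x, y))

x = [1.0, -2.0, 3.0]
y = [4.0, 0.5, -1.0]
for p in (1.5, 2.0, 3.0):
    # the bound holds for every admissible p; p = q = 2 is Cauchy's inequality (2.6.20)
    assert abs_dot(x, y) <= holder_bound(x, y, p) + 1e-12
print(abs_dot(x, y))  # 8.0
```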
2.6.5 Matrix Norms
In the same way as the vector norm, the norm of a matrix A is introduced. This matrix norm
is written ‖A‖. The characteristics of the matrix norm are given below, starting with the zero
matrix 0 and the condition A ≠ 0,

    ‖A‖ > 0 ,                                                        (2.6.21)

and with an arbitrary scalar quantity α,

    ‖α A‖ = |α| ‖A‖ ,                                                (2.6.22)
    ‖A + B‖ ≤ ‖A‖ + ‖B‖ ,                                            (2.6.23)
    ‖A · B‖ ≤ ‖A‖ ‖B‖ .                                              (2.6.24)

The last axiom holds in addition for matrix norms, in contrast to vector norms. If this
condition holds, then the norm is called multiplicative. Some usual norms, which satisfy
the conditions (2.6.21)-(2.6.24), are given below. With n being the number of rows of the matrix
A, the absolute norm is given by

    ‖A‖_M = M (A) = n max |a_ik| .                                   (2.6.25)

The maximum absolute row sum norm is given by

    ‖A‖_R = R (A) = max_i ∑_{k=1}^{n} |a_ik| .                       (2.6.26)

The maximum absolute column sum norm is given by

    ‖A‖_C = C (A) = max_k ∑_{i=1}^{n} |a_ik| .                       (2.6.27)

The Euclidean norm is given by

    ‖A‖_N = N (A) = √( tr ( Aᵀ A ) ) .                               (2.6.28)

The spectral norm is given by

    ‖A‖_H = H (A) = √( largest eigenvalue of Aᵀ A ) .                (2.6.29)
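The five matrix norms (2.6.25)-(2.6.29) can be sketched in pure Python; the spectral norm is
estimated here by power iteration on AᵀA, which is an assumption of this sketch, not a method
prescribed in the text:

```python
import math

def mat_T(A):
    return [list(row) for row in zip(*A)]

def mat_mul(A, B):
    return [[sum(A[i][m] * B[m][k] for m in range(len(B)))
             for k in range(len(B[0]))] for i in range(len(A))]

def norm_M(A):   # absolute norm (2.6.25), n = number of rows
    return len(A) * max(abs(a) for row in A for a in row)

def norm_R(A):   # maximum absolute row sum norm (2.6.26)
    return max(sum(abs(a) for a in row) for row in A)

def norm_C(A):   # maximum absolute column sum norm (2.6.27)
    return max(sum(abs(row[k]) for row in A) for k in range(len(A[0])))

def norm_N(A):   # Euclidean norm (2.6.28): sqrt(tr(A^T A)) = sqrt(sum of squares)
    return math.sqrt(sum(a * a for row in A for a in row))

def norm_H(A, iters=200):
    # spectral norm (2.6.29): sqrt of the largest eigenvalue of A^T A,
    # estimated by power iteration (an assumption of this sketch)
    B = mat_mul(mat_T(A), A)
    n = len(B)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(B[i][j] * v[j] for j in range(n)) for i in range(n)]
        s = math.sqrt(sum(wi * wi for wi in w))
        v = [wi / s for wi in w]
    lam = sum(v[i] * sum(B[i][j] * v[j] for j in range(n)) for i in range(n))
    return math.sqrt(lam)

A = [[3.0, 0.0], [0.0, -2.0]]
print(norm_M(A), norm_R(A), norm_C(A))  # 6.0 3.0 3.0
print(norm_N(A))                        # sqrt(13)
print(norm_H(A))                        # 3.0, since A^T A has eigenvalues 9 and 4
```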
2.6.6 Compatibility of Vector and Matrix Norms
Definition 2.1. A matrix norm ‖A‖ is called compatible with a vector norm ‖x‖, iff
for all matrices A and all vectors x the following inequality holds,

    ‖A · x‖ ≤ ‖A‖ ‖x‖ .                                              (2.6.30)

The norm of the transformed vector y = A · x is thus bounded by the matrix norm associated
with the vector norm times the vector norm ‖x‖ of the starting vector x. In table (2.1) the most
common vector norms are compared with their compatible matrix norms.

2.6.7 Vector and Matrix Norms in Eigenvalue Problems
The eigenvalue problem A · x = λ x could be rewritten with the compatibility condition,
‖A · x‖ ≤ ‖A‖ ‖x‖, like this,

    ‖A · x‖ = |λ| ‖x‖ ≤ ‖A‖ ‖x‖ .                                    (2.6.31)

This equation implies immediately that the matrix norm is an estimate of the eigenvalues,
|λ| ≤ ‖A‖. A compatible matrix norm associated with a vector norm is most valuable
if in the inequality ‖A · x‖ ≤ ‖A‖ ‖x‖, see also (2.6.31), both sides become equal for some
vector x, because then the bound cannot be improved.
This least upper bound is called the supremum and is written sup (A).
Vector norm                    Compatible matrix norms          Description
‖x‖ = max |x_i|                ‖A‖_M = M (A)                    absolute norm
                               ‖A‖_R = R (A) = sup (A)          maximum absolute row sum norm
‖x‖ = ∑ |x_i|                  ‖A‖_M = M (A)                    absolute norm
                               ‖A‖_C = C (A) = sup (A)          maximum absolute column sum norm
‖x‖ = √( ∑ |x_i|² )            ‖A‖_M = M (A)                    absolute norm
                               ‖A‖_N = N (A)                    Euclidean norm
                               ‖A‖_H = H (A) = sup (A)          spectral norm

Table 2.1: Compatibility of norms.

Definition 2.2. The supremum sup (A) of a matrix A associated with the vector norm ‖x‖ is defined
as the smallest scalar quantity μ, such that

    ‖A x‖ ≤ μ ‖x‖ ,                                                  (2.6.32)

for all vectors x,

    sup (A) = min { μ_i : ‖A x‖ ≤ μ_i ‖x‖ ∀ x } ,                    (2.6.33)

or equivalently

    sup (A) = max_{x ≠ 0} ‖A · x‖ / ‖x‖ .                            (2.6.34)

In table (2.1) above, all associated supremums are denoted.
2.6.8 Linear Dependence and Independence
The vectors a₁, a₂, . . . , a_i, . . . , a_n ∈ ℝⁿ are called linearly dependent, iff there exist
scalar quantities α₁, α₂, . . . , α_i, . . . , α_n ∈ ℝ, which are not all equal to zero, such that

    ∑_{i=1}^{n} α_i a_i = 0 .                                        (2.6.35)

In every other case the vectors are called linearly independent. For example, three linearly
independent vectors are given by

    α₁ [1, 0, 0]ᵀ + α₂ [0, 1, 0]ᵀ + α₃ [0, 0, 1]ᵀ ≠ 0 , with α_i ≠ 0 .   (2.6.36)
The n linearly independent vectors a_i with i = 1, . . . , n span an n-dimensional vector space. This
set of n linearly independent vectors could be used as a basis of this vector space, in order to
describe another vector a_{n+1} in this space,

    a_{n+1} = ∑_{k=1}^{n} α_k a_k , and a_{n+1} ∈ ℝⁿ .               (2.6.37)
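Testing linear independence of a set of vectors amounts to a rank computation: the vectors are
independent iff the rank equals their number. A minimal sketch using Gaussian elimination (the
helper names are my own):

```python
def rank(vectors, tol=1e-10):
    # rank via Gaussian elimination with partial pivoting
    M = [list(map(float, v)) for v in vectors]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        pivot = max(range(r, rows), key=lambda i: abs(M[i][c]))
        if abs(M[pivot][c]) < tol:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(r + 1, rows):
            f = M[i][c] / M[r][c]
            for j in range(c, cols):
                M[i][j] -= f * M[r][j]
        r += 1
        if r == rows:
            break
    return r

def linearly_independent(vectors):
    # equation (2.6.35): independent iff only the trivial combination gives 0
    return rank(vectors) == len(vectors)

e = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
print(linearly_independent(e))                       # True, example (2.6.36)
print(linearly_independent([[1, 2, 3], [2, 4, 6]]))  # False, second is twice the first
```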
2.7 Inner Product Spaces
2.7.1 Definition of a Scalar Product
Let V be a linear space over the field of real numbers ℝ. A scalar product⁴ or inner product is a
mapping

    ⟨· , ·⟩ : V × V → ℝ ,   (x, y) ↦ ⟨x, y⟩ .                        (2.7.1)

The scalar product satisfies the following relations for all vectors x, y, z ∈ V and all scalar
quantities α, β ∈ ℝ:

1. Axiom of Inner Products. The scalar product is bilinear,

    ⟨α x + β y, z⟩ = α ⟨x, z⟩ + β ⟨y, z⟩   ∀ α, β ∈ ℝ ; ∀ x, y, z ∈ V .   (I1)

2. Axiom of Inner Products. The scalar product is symmetric,

    ⟨x, y⟩ = ⟨y, x⟩   ∀ x, y ∈ V .                                   (I2)

3. Axiom of Inner Products. The scalar product is positive definite,

    ⟨x, x⟩ ≥ 0   ∀ x ∈ V , and                                       (I3)
    ⟨x, x⟩ = 0  ⇔  x = 0   ∀ x ∈ V ,                                 (I4)

and for two varying vectors,

    ⟨x, y⟩ = 0  ⇔  { x = 0 , and an arbitrary vector y ∈ V ,
                   { y = 0 , and an arbitrary vector x ∈ V ,
                   { x ⊥ y , i.e. the vectors x and y ∈ V are orthogonal.   (2.7.2)

Theorem 2.1. The inner product induces a norm, and with this a metric, too. The scalar product
‖x‖ = ⟨x, x⟩^{1/2} defines a scalar-valued function, which satisfies the axioms of a norm!
2.7.2 Examples of Scalar Products
Example: The usual scalar product in ℝ². Let x = (x₁, x₂)ᵀ ∈ ℝ² and y = (y₁, y₂)ᵀ ∈ ℝ²
be two vectors, then the mapping

    ⟨x, y⟩ = x₁ y₁ + x₂ y₂                                           (2.7.3)

is called the usual scalar product.

⁴ It is important to notice that the scalar product and the scalar multiplication are completely different mappings!
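The usual scalar product, the norm it induces by Theorem 2.1, and the metric induced in turn
by that norm can be sketched together (the function names are my own, generalized to n
components):

```python
import math

def inner(x, y):
    # usual scalar product, equation (2.7.3) generalized to n components
    return sum(a * b for a, b in zip(x, y))

def norm(x):
    # induced norm of Theorem 2.1: ||x|| = <x, x>^(1/2)
    return math.sqrt(inner(x, x))

def dist(x, y):
    # the metric induced by the norm: rho(x, y) = ||x - y||
    return norm([a - b for a, b in zip(x, y)])

x, y = [1.0, 2.0], [4.0, 6.0]
print(inner(x, y))  # 1*4 + 2*6 = 16.0
print(norm(x))      # sqrt(5)
print(dist(x, y))   # 5.0
```

This makes the chain "inner product → norm → metric" of the following subsection concrete.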
2.7.3 Definition of an Inner Product Space
A vector space V with a scalar product ⟨· , ·⟩ is called an inner product space or Euclidean vector
space⁵. The axioms (N1), (N2), and (N3) hold, too; only the axiom (N4) has to be proved. The
key step, the so-called Schwarz inequality, is given by

    ⟨x, y⟩ ≤ ‖x‖ ‖y‖ ;

this implies the triangle inequality,

    ‖x + y‖² = ⟨x + y, x + y⟩ = ⟨x + y, x⟩ + ⟨x + y, y⟩ ≤ ‖x + y‖ ‖x‖ + ‖x + y‖ ‖y‖ ,

and finally it results that the inner product space is a normed space,

    ‖x + y‖ ≤ ‖x‖ + ‖y‖ .

And finally the relations between the different subspaces of a linear vector space are described
by the following scheme,

    V inner product space  ⟹  V normed space  ⟹  V metric space ,

where the arrow ⟹ describes the necessary conditions, and the reverse arrow the not nec-
essary, but possible conditions. Every true proposition in a metric space is true in a normed
space or in an inner product space, too. And a true proposition in a normed space is also true in
an inner product space, but not necessarily vice versa!
2.7.4 Examples of Inner Product Spaces
Example: The scalar product in a linear vector space. The 3-dimensional linear vector space
ℝ³ with the usual scalar product, which defines an inner product by

    ⟨u, v⟩ = u · v = |u| |v| cos φ (u, v) ,                          (2.7.4)

is an inner product space.

Example: The inner product in a linear vector space. The ℝⁿ with an inner product given by the
bilinear form

    ⟨u, v⟩ = uᵀ A v ,                                                (2.7.5)

and with the quadratic form

    ⟨u, u⟩ = uᵀ A u ,                                                (2.7.6)

and in the special case A = 1 with the scalar product

    ⟨u, u⟩ = uᵀ u ,                                                  (2.7.7)

is an inner product space.

⁵ In mathematical literature often the restriction is mentioned, that the Euclidean vector space should be of finite
dimension. Here no more attention is paid to this restriction, because in most cases finite-dimensional spaces are
used.
2.7.5 Unitary Space
A vector space V over the field of real numbers ℝ with a scalar product ⟨· , ·⟩ is called an inner
product space; its complex analogue over the field of complex numbers ℂ is called a unitary
space.
2.8 Affine Vector Space and the Euclidean Vector Space
2.8.1 Definition of an Affine Vector Space
In matrix calculus an n-tuple a ∈ ℝⁿ over the field of real numbers ℝ is studied, i.e.

    a_i ∈ ℝ , and i = 1, . . . , n .                                 (2.8.1)

Such an n-tuple, represented by a column matrix, also called a column vector or just vector,
could describe an affine vector, if a point of origin in a geometric sense and a displacement of
origin are established. A set W is called an affine vector space over the vector space V ⊂ ℝⁿ, if
Figure 2.3: Vector space ℝ².

Figure 2.4: Affine vector space ℝ²_affine.
a mapping given by

    W × W → V ,                                                      (2.8.2)
    ℝⁿ_affine × ℝⁿ_affine → ℝⁿ ,                                     (2.8.3)
assigns to every pair of points P and Q ∈ W ⊂ ℝⁿ_affine a vector →PQ ∈ V. And the mapping also
satisfies the following conditions:

• For every constant P the assignment

    φ_P : Q ↦ →PQ ,                                                  (2.8.4)

is a bijective mapping, i.e. the inverse φ_P⁻¹ exists.

• Every P, Q and R ∈ W satisfy

    →PQ + →QR = →PR ,                                                (2.8.5)

and

    φ_P : W → V , with φ_P (Q) = →PQ , and Q ∈ W .                   (2.8.6)

For all P, Q and R ∈ W ⊂ ℝⁿ_affine the axioms of a linear space (S1)-(S4) for the addition hold,

    a + b = c  ⇔  a_i + b_i = c_i , with i = 1, . . . , n ,          (2.8.7)

and (S5)-(S8) for the scalar multiplication,

    α a = ã  ⇔  α a_i = ã_i .                                        (2.8.8)

And a vector space is a normed space, as shown in section (2.6).
2.8.2 The Euclidean Vector Space
An Euclidean vector space 𝔼ⁿ is a unitary vector space or an inner product space. In addition
to the normed spaces there is an inner product defined in an Euclidean vector space. The inner
product assigns to every pair of vectors u and v a scalar quantity α,

    ⟨u, v⟩ ≡ u · v = v · u = α , with u, v ∈ 𝔼ⁿ , and α ∈ ℝ .        (2.8.9)

For example in the 2-dimensional Euclidean vector space the angle φ between the vectors u and
v is given by

    u · v = |u| |v| cos φ , and cos φ = (u · v) / ( |u| |v| ) .      (2.8.10)

The following identities hold:

• Two normed spaces V and W over the same field are isomorphic, if and only if there exists
a linear mapping f from V to W, such that the following inequality holds for two constants
m and M in every point x ∈ V,

    m ‖x‖ ≤ ‖f (x)‖ ≤ M ‖x‖ .                                        (2.8.11)

Figure 2.5: The scalar product in a 2-dimensional Euclidean vector space.


• Every two real n-dimensional normed spaces are isomorphic, for example two subspaces
of the vector space ℝⁿ.

Below in most cases the Euclidean norm with p = 2 is used to describe the relationships between
the elements of the affine (normed) vector space x ∈ ℝⁿ_affine and the elements of the Euclidean
vector space v ∈ 𝔼ⁿ. With this condition the relations between a norm, as in section (2.6), and
an inner product are given by

    ‖x‖² = x · x ,                                                   (2.8.12)

and

    ‖x‖ = ‖x‖₂ = √( ∑ x_i² ) .                                       (2.8.13)

In this case it is possible to define a bijective mapping between the n-dimensional affine vector
space and the Euclidean vector space. Such a bijective and continuous mapping is called a
homeomorphism, and the spaces are called homeomorphic. If two spaces are homeomorphic,
then in both spaces the same axioms hold.
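The angle formula (2.8.10) translates directly into code; a small sketch (the function name is
my own):

```python
import math

def angle(u, v):
    # cos(phi) = (u . v) / (|u| |v|), equation (2.8.10)
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return math.acos(dot / (nu * nv))

# orthogonal vectors enclose 90 degrees
print(math.degrees(angle([1.0, 0.0], [0.0, 2.0])))  # 90.0
# the diagonal of the first quadrant encloses 45 degrees with the x-axis
print(math.degrees(angle([1.0, 0.0], [1.0, 1.0])))  # approximately 45.0
```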
2.8.3 Linear Independence, and a Basis of the Euclidean Vector Space
The conditions for the linear dependence and the linear independence of vectors v_i in the n-
dimensional Euclidean vector space 𝔼ⁿ are given below. Furthermore a vector basis of the
Euclidean vector space 𝔼ⁿ is introduced, and the representation of an arbitrary vector with this
basis is described.

• The set of vectors v₁, v₂, . . . , v_n is linearly dependent, if there exists a number of scalar
quantities a₁, a₂, . . . , a_n, not all equal to zero, such that the following condition holds,

    a₁ v₁ + a₂ v₂ + . . . + a_n v_n = 0 .                            (2.8.14)

In every other case the set of vectors v₁, v₂, . . . , v_n is called linearly independent.
The left-hand side is called the linear combination of the vectors v₁, v₂, . . . , v_n.
• The set of all linear combinations of the vectors v₁, v₂, . . . , v_n spans a subspace. The dimen-
sion of this subspace is equal to the number of vectors, which span the largest linearly
independent space. The dimension of this subspace is at most n.

• Every n + 1 vectors of the Euclidean vector space v ∈ 𝔼ⁿ with the dimension n must be
linearly dependent, i.e. the vector v = v_{n+1} could be described by a linear combination of
the vectors v₁, v₂, . . . , v_n,

    β v + a₁ v₁ + a₂ v₂ + . . . + a_n v_n = 0 ,                      (2.8.15)
    v = − (1/β) ( a₁ v₁ + a₂ v₂ + . . . + a_n v_n ) .                (2.8.16)

The vectors z_i given by

    z_i = − (1/β) a_i v_i , with i = 1, . . . , n ,                  (2.8.17)

are called the components of the vector v in the Euclidean vector space 𝔼ⁿ.

• Every n linearly independent vectors v_i of dimension n in the Euclidean vector space 𝔼ⁿ
are called a basis of the Euclidean vector space 𝔼ⁿ. The vectors g_i = v_i are called
the base vectors of the Euclidean vector space 𝔼ⁿ,

    v = v₁ g₁ + v₂ g₂ + . . . + v_n g_n = ∑_{i=1}^{n} v_i g_i , with v_i = − a_i / β .   (2.8.18)

The v_i g_i are called the components and the v_i are called the coordinates of the
vector v w.r.t. the basis g_i. Sometimes the scalar quantities v_i are called the components
of the vector v w.r.t. the basis g_i, too.
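Finding the coordinates of a vector w.r.t. a given basis as in (2.8.18) amounts to solving a linear
system whose coefficient columns are the base vectors. A sketch using Gauss-Jordan elimination
(the helper names and the example basis are my own, and the basis is assumed to be linearly
independent):

```python
def solve(G, v):
    # solve G a = v by Gauss-Jordan elimination; the columns of G are the
    # base vectors g_i, so a holds the coordinates of v w.r.t. that basis
    n = len(G)
    M = [list(map(float, G[i])) + [float(v[i])] for i in range(n)]
    for c in range(n):
        p = max(range(c, n), key=lambda i: abs(M[i][c]))   # partial pivoting
        M[c], M[p] = M[p], M[c]
        for i in range(n):
            if i != c:
                f = M[i][c] / M[c][c]
                for j in range(c, n + 1):
                    M[i][j] -= f * M[c][j]
    return [M[i][n] / M[i][i] for i in range(n)]

# example basis g1 = (1, 1), g2 = (0, 1), written as the columns of G
G = [[1.0, 0.0],
     [1.0, 1.0]]
coords = solve(G, [2.0, 5.0])
print(coords)  # [2.0, 3.0], i.e. v = 2 g1 + 3 g2
```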
2.9 Linear Mappings and the Vector Space of Linear Mappings
2.9.1 Definition of a Linear Mapping
Let V and W be two vector spaces over the field F. A mapping f : V → W from elements of
the vector space V to the elements of the vector space W is linear and called a linear mapping,
if for all x, y ∈ V and for all α ∈ F the following axioms hold:

1. Axiom of Linear Mappings (Additivity w.r.t. the vector addition). The mapping f is
additive w.r.t. the vector addition,

    f (x + y) = f (x) + f (y)   ∀ x, y ∈ V .                         (L1)

2. Axiom of Linear Mappings (Homogeneity of linear mappings). The mapping f is homo-
geneous w.r.t. the scalar multiplication,

    f (α x) = α f (x)   ∀ α ∈ F ; ∀ x ∈ V .                          (L2)

Remarks:

• The linearity of the mapping f : V → W results from being additive (L1) and homogeneous
(L2).

• Because the action of the mapping f is only defined on elements of the vector space V, it
is necessary that the sum vector x + y ∈ V (for every x, y ∈ V) and the scalar multiplied
vector α x ∈ V (for every α ∈ F) are elements of the vector space V, too. With this
postulation the set V must be a vector space!

• With the same arguments for the ranges f (x), f (y), and f (x + y), and also for the ranges
f (α x) and α f (x) in W, the set W must be a vector space!

• A linear mapping f : V → W is also called a linear transformation, a linear operator or a
homomorphism.

2.9.2 The Vector Space of Linear Mappings
In the section before, the linear mappings f : V → W, which send elements of V to elements of
W, were introduced. Because it is so nice to work with vector spaces, it is interesting to check
whether the linear mappings f : V → W form a vector space, too. In order to answer this question
it is necessary to check the definitions and axioms of a linear vector space (S1)-(S8). If they hold,
then the set of linear mappings is a vector space:

3. Axiom of Linear Mappings (Definition of the addition of linear mappings). In the defini-
tion of a vector space the existence of an addition "+" is claimed, such that the sum of two linear
mappings f₁ : V → W and f₂ : V → W should be a linear mapping (f₁ + f₂) : V → W, too.
For an arbitrary vector x ∈ V the pointwise addition is given by

    (f₁ + f₂) (x) := f₁ (x) + f₂ (x)   ∀ x ∈ V ,                     (L3)

for all linear mappings f₁, f₂ from V to W. The sum f₁ + f₂ is linear, because both mappings f₁
and f₂ are linear, i.e. (f₁ + f₂) is a linear mapping, too.

4. Axiom of Linear Mappings (Definition of the scalar multiplication of linear mappings).
Furthermore a product of a scalar quantity α ∈ ℝ and a linear mapping f : V → W is defined
by

    (α f) (x) := α f (x)   ∀ α ∈ ℝ ; ∀ x ∈ V .                       (L4)

If the mapping f is linear, then it results immediately that the mapping (α f) is linear, too.

5. Axiom of Linear Mappings (Satisfaction of the axioms of a linear vector space). The
definitions (L3) and (L4) satisfy all linear vector space axioms given by (S1)-(S8). This is easy
to prove by computing the equations (S1)-(S8). If V and W are two vector spaces over the field
F, then the set L of all linear mappings f : V → W from V to W,

    L (V, W) , is a linear vector space.                             (L5)

The identity element w.r.t. the addition in the vector space L (V, W) is the null mapping 0, which
sends every element from V to the zero vector 0 ∈ W.
2.9.3 The Basis of the Vector Space of Linear Mappings
2.9.4 Definition of a Composition of Linear Mappings
Till now only an addition of linear mappings and a multiplication with a scalar quantity are
defined. The next step is to define a "multiplication" of two linear mappings; this combination of
two functions to form a new single function is called a composition. Let f₂ : V → W be a linear
mapping, and furthermore let f₁ : X → Y be linear, too. If the image set W of the linear mapping
f₂ is also the domain of the linear mapping f₁, i.e. W = X, then the composition f₁ ∘ f₂ : V → Y
is defined by

    (f₁ ∘ f₂) (x) = f₁ ( f₂ (x) )   ∀ x ∈ V .                        (2.9.1)

Because of the linearity of the mappings f₁ and f₂, the composition f₁ ∘ f₂ is also linear.
Remarks:

• The composition f₁ ∘ f₂ is also written as f₁ f₂, and it is sometimes called the product of f₁
and f₂.

• If these products exist (i.e. the domains and image sets of the linear mappings match like
in the definition), then the following identities hold:

    f₁ (f₂ f₃) = (f₁ f₂) f₃ ,                                        (2.9.2)
    f₁ (f₂ + f₃) = f₁ f₂ + f₁ f₃ ,                                   (2.9.3)
    (f₁ + f₂) f₃ = f₁ f₃ + f₂ f₃ ,                                   (2.9.4)
    α (f₁ f₂) = (α f₁) f₂ = f₁ (α f₂) .                              (2.9.5)

• If all sets are equal, V = W = X = Y, then these products exist, i.e. all the linear mappings
map the vector space V onto itself,

    f ∈ L (V, V) =: L (V) .                                          (2.9.6)

In this case, with f₁ ∈ L (V, V) and f₂ ∈ L (V, V), the composition f₁ ∘ f₂ ∈ L (V, V) is a
linear mapping from the vector space V to itself, too.
2.9.5 The Attributes of a Linear Mapping
• Let V and W be vector spaces over the field F and L (V, W) the vector space of all linear map-
pings f : V → W. Because L (V, W) is a vector space, the sum and the scalar multiple of
elements of L, i.e. of linear mappings f : V → W, is again a linear
mapping from V to W.

• An arbitrary composition of linear mappings, if it exists, is again a linear mapping from
one vector space to another vector space. If the mappings map the vector space V into
itself, then every composition of these mappings exists and is again linear, i.e. the composition
is again an element of L (V, V).

• The existence of an inverse, i.e. a reverse linear mapping from W to V, denoted by
f⁻¹ : W → V, is discussed in the following section.
2.9.6 The Representation of a Linear Mapping by a Matrix
Let x and y be two arbitrary elements of the linear vector space V given by

    x = ∑_{i=1}^{n} x_i e_i , and y = ∑_{i=1}^{n} y_i e_i .          (2.9.7)

Let L be a linear mapping from V into itself, represented w.r.t. the basis mappings φ_ij by

    L = ∑_{ij} L_ij φ_ij .                                           (2.9.8)

The image of x under L,

    y = L (x) ,                                                      (2.9.9)

is then computed by

    y_i e_i = ( ∑_{kl} L_kl φ_kl ) ( x_j e_j ) = ∑_{kl} L_kl φ_kl ( x_j e_j )
            = ∑_{kl} x_j L_kl φ_kl (e_j) ,

i.e. the coordinates of y follow from the coordinates of x by a matrix multiplication,

    y_i = ∑_{j} L_ij x_j .                                           (2.9.10)
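The coordinate formula (2.9.10) and the axioms (L1), (L2) can be illustrated with a concrete
matrix; the rotation matrix below is my own example, not taken from the text:

```python
def apply_linear_map(L, x):
    # y_i = sum_j L_ij x_j, equation (2.9.10)
    return [sum(L[i][j] * x[j] for j in range(len(x))) for i in range(len(L))]

# rotation by 90 degrees in R^2 as an example of a linear mapping
L = [[0.0, -1.0],
     [1.0,  0.0]]
print(apply_linear_map(L, [1.0, 0.0]))  # [0.0, 1.0]

# additivity check, axiom (L1): L(x + y) == L(x) + L(y)
x, y = [1.0, 2.0], [3.0, -1.0]
lhs = apply_linear_map(L, [a + b for a, b in zip(x, y)])
rhs = [a + b for a, b in zip(apply_linear_map(L, x), apply_linear_map(L, y))]
print(lhs == rhs)  # True
```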
2.9.7 The Isomorphism of Vector Spaces
The term "bijectivity" and the attributes of a bijective linear mapping f : V → W imply the
following definition. A bijective linear mapping f : V → W is also called an isomorphism of the
vector spaces V and W. The spaces V and W are said to be isomorphic. For example, every vector
of a space V with dim V = n is identified with an n-tuple,

    x = x_i e_i  ↦  x̂ = [x₁ , . . . , x_n]ᵀ ,                        (2.9.11)

with x ∈ V, dim V = n, and x̂ ∈ ℝⁿ, the space of all n-tuples.
2.10 Linear Forms and Dual Vector Spaces
2.10.1 Definition of Linear Forms and Dual Vector Spaces
Let W ⊂ ℝⁿ be the vector space of column vectors x. In this vector space the scalar product
⟨· , ·⟩ is defined in the usual way, i.e.

    ⟨· , ·⟩ : ℝⁿ × ℝⁿ → ℝ , and ⟨x, y⟩ = ∑_{i=1}^{n} x_i y_i .       (2.10.1)

The relations between the continuous linear functionals f : ℝⁿ → ℝ and the scalar products ⟨· , ·⟩
defined in the ℝⁿ are given by the Riesz representation theorem, i.e.

Theorem 2.2 (Riesz representation theorem). Every continuous linear functional f : ℝⁿ → ℝ
could be represented by

    f (x) = ⟨x, u⟩   ∀ x ∈ ℝⁿ ,                                      (2.10.2)

and the vector u is uniquely defined by f (x).
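In ℝⁿ the Riesz vector u can be recovered explicitly by evaluating f on the canonical base
vectors, u_i = f (e_i). A sketch (the functional f below is an invented example, not from the
text):

```python
def riesz_vector(f, n):
    # u_i = f(e_i): evaluating f on the canonical base vectors recovers
    # the unique u of Theorem 2.2 with f(x) = <x, u>
    def e(i):
        return [1.0 if j == i else 0.0 for j in range(n)]
    return [f(e(i)) for i in range(n)]

# an example of a linear functional on R^3
f = lambda x: 2.0 * x[0] - x[1] + 4.0 * x[2]
u = riesz_vector(f, 3)
print(u)  # [2.0, -1.0, 4.0]

# check f(x) = <x, u> for a sample x, equation (2.10.2)
x = [1.0, 5.0, -2.0]
print(f(x) == sum(a * b for a, b in zip(x, u)))  # True
```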
2.10.2 A Basis of the Dual Vector Space
Chapter 3
Matrix Calculus
For example GILBERT [5], and KRAUS [10]. And in German STEIN ET AL. [13], and ZURMÜHL
[14].
Chapter Table of Contents
3.1 Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
3.1.1 Rectangular Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
3.1.2 Square Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
3.1.3 Column Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
3.1.4 Row Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
3.1.5 Diagonal Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
3.1.6 Identity Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
3.1.7 Transpose of a Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . 41
3.1.8 Symmetric Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
3.1.9 Antisymmetric Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . 41
3.2 Some Basic Identities of Matrix Calculus . . . . . . . . . . . . . . . . . . . 42
3.2.1 Addition of Same Order Matrices . . . . . . . . . . . . . . . . . . . . 42
3.2.2 Multiplication by a Scalar Quantity . . . . . . . . . . . . . . . . . . . 42
3.2.3 Matrix Multiplication . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
3.2.4 The Trace of a Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . 43
3.2.5 Symmetric and Antisymmetric Square Matrices . . . . . . . . . . . . . 44
3.2.6 Transpose of a Matrix Product . . . . . . . . . . . . . . . . . . . . . . 44
3.2.7 Multiplication with the Identity Matrix . . . . . . . . . . . . . . . . . 45
3.2.8 Multiplication with a Diagonal Matrix . . . . . . . . . . . . . . . . . . 45
3.2.9 Exchanging Columns and Rows of a Matrix . . . . . . . . . . . . . . . 46
3.2.10 Volumetric and Deviator Part of a Matrix . . . . . . . . . . . . . . . . 46
3.3 Inverse of a Square Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
3.3.1 Definition of the Inverse . . . . . . . . . . . . . . . . . . . . . . . . . 48
3.3.2 Important Identities of Determinants . . . . . . . . . . . . . . . . . . . 48
3.3.3 Derivation of the Elements of the Inverse of a Matrix . . . . . . . . . . 49
3.3.4 Computing the Elements of the Inverse with Determinants . . . . . . . 50
3.3.5 Inversions of Matrix Products . . . . . . . . . . . . . . . . . . . . . . 52
3.4 Linear Mappings of an Affine Vector Space . . . . . . . . . . . . . . . . . 54
3.4.1 Matrix Multiplication as a Linear Mapping of Vectors . . . . . . . . . 54
3.4.2 Similarity Transformation of Vectors . . . . . . . . . . . . . . . . . . 55
3.4.3 Characteristics of the Similarity Transformation . . . . . . . . . . . . . 55
3.4.4 Congruence Transformation of Vectors . . . . . . . . . . . . . . . . . 56
3.4.5 Characteristics of the Congruence Transformation . . . . . . . . . . . 57
3.4.6 Orthogonal Transformation . . . . . . . . . . . . . . . . . . . . . . . . 57
3.4.7 The Gauss Transformation . . . . . . . . . . . . . . . . . . . . . . . . 59
3.5 Quadratic Forms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
3.5.1 Representations and Characteristics . . . . . . . . . . . . . . . . . . . 62
3.5.2 Congruence Transformation of a Matrix . . . . . . . . . . . . . . . . . 62
3.5.3 Derivatives of a Quadratic Form . . . . . . . . . . . . . . . . . . . . . 63
3.6 Matrix Eigenvalue Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
3.6.1 The Special Eigenvalue Problem . . . . . . . . . . . . . . . . . . . . . 65
3.6.2 Rayleigh Quotient . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
3.6.3 The General Eigenvalue Problem . . . . . . . . . . . . . . . . . . . . 69
3.6.4 Similarity Transformation . . . . . . . . . . . . . . . . . . . . . . . . 69
3.6.5 Transformation into a Diagonal Matrix . . . . . . . . . . . . . . . . . 70
3.6.6 Cayley-Hamilton Theorem . . . . . . . . . . . . . . . . . . . . . . . . 71
3.6.7 Proof of the Cayley-Hamilton Theorem . . . . . . . . . . . . . . . . . 71
3.1 Definitions
A matrix is an array of m · n numbers,

    A = [A_ik] = [ A₁₁  A₁₂  ···  A₁ₙ
                   A₂₁  A₂₂  ···  A₂ₙ
                    ⋮    ⋮         ⋮
                   A_m1  ···  ···  A_mn ] .                          (3.1.1)

The index i is the row index and k is the column index. This matrix is called an m × n-matrix.
The order of a matrix is given by the number of rows and columns.
3.1.1 Rectangular Matrix
A matrix like the one in equation (3.1.1) is called a rectangular matrix.

3.1.2 Square Matrix
A matrix is said to be square, if the number of rows equals the number of columns. It is an
n × n-matrix,

    A = [A_ik] = [ A₁₁  A₁₂  ···  A₁ₙ
                   A₂₁  A₂₂  ···  A₂ₙ
                    ⋮    ⋮         ⋮
                   A_n1  ···  ···  A_nn ] .                          (3.1.2)

3.1.3 Column Matrix
An m × 1-matrix is called a column matrix or a column vector a, given by

    a = [ a₁ , a₂ , . . . , a_m ]ᵀ .                                 (3.1.3)

3.1.4 Row Matrix
A 1 × n-matrix is called a row matrix or a row vector a, given by

    a = [ a₁ , a₂ , . . . , a_n ] .                                  (3.1.4)
3.1.5 Diagonal Matrix
The elements of a diagonal matrix are all zero, except the ones where the column index equals
the row index,

    D = [D_ik] , and D_ik = 0 , iff i ≠ k .                          (3.1.5)

Sometimes a diagonal matrix is written like this, because there are only elements on the main
diagonal of the matrix,

    D = ⌈ D₁₁ · · · D_mm ⌋ .                                         (3.1.6)

3.1.6 Identity Matrix
The identity matrix is a diagonal matrix given by

    1 = [ 1  0  ···  0
          0  1  ···  0
          ⋮  ⋮   ⋱   ⋮
          0  0  ···  1 ] , i.e. { 1_ik = 0 , iff i ≠ k
                                { 1_ik = 1 , iff i = k .             (3.1.7)

3.1.7 Transpose of a Matrix
The matrix transpose is the matrix obtained by exchanging the columns and rows of the matrix,

    A = [a_ik] , and Aᵀ = [a_ki] .                                   (3.1.8)

3.1.8 Symmetric Matrix
A square matrix is called symmetric, if the following equation is satisfied,

    Aᵀ = A .                                                         (3.1.9)

It is a kind of reflection at the main diagonal.

3.1.9 Antisymmetric Matrix
A square matrix is called antisymmetric, if the following equation is satisfied,

    Aᵀ = −A .                                                        (3.1.10)

For the elements of an antisymmetric matrix the following condition holds,

    a_ik = −a_ki .                                                   (3.1.11)

For that reason an antisymmetric matrix must have zeros on its main diagonal.
3.2 Some Basic Identities of Matrix Calculus
3.2.1 Addition of Same Order Matrices
The matrices A, B and C are commutative under matrix addition,

    A + B = B + A = C .                                              (3.2.1)

And the same connection given in components notation,

    A_ik + B_ik = C_ik .                                             (3.2.2)

The matrices A, B and C are associative under matrix addition,

    (A + B) + C = A + (B + C) .                                      (3.2.3)

And there exists an identity element w.r.t. matrix addition 0, called the additive identity, and
defined by A + 0 = A. Furthermore there exists an inverse element w.r.t. matrix addition −A,
called the additive inverse, and defined by A + X = 0 ⇔ X = −A.

3.2.2 Multiplication by a Scalar Quantity
The scalar multiplication of matrices is given by

    α A = A α = [ α A_ik ] ; α ∈ ℝ .                                 (3.2.4)

3.2.3 Matrix Multiplication
The product of two matrices A and B is defined by the matrix multiplication,

    A_(l×m) · B_(m×n) = C_(l×n) ,                                    (3.2.5)

    C_ik = ∑_{μ=1}^{m} A_iμ B_μk .                                   (3.2.6)

It is important to notice the condition, that the number of columns of the first matrix equals the
number of rows of the second matrix, see index m in equation (3.2.5). Matrix multiplication is
associative,

    (A · B) · C = A · (B · C) ,                                      (3.2.7)

and also matrix multiplication is distributive,

    (A + B) · C = A · C + B · C .                                    (3.2.8)
Figure 3.1: Matrix multiplication.
But in general matrix multiplication is in general not commutative
A B ,= B A. (3.2.9)
There is an exception, the so called commutative matrices, which are diagonal matrices of the
same order.
3.2.4 The Trace of a Matrix

The trace of a square matrix is defined as the sum of its diagonal elements,

tr A = tr [A_{ik}]_{(n \times n)} = \sum_{i=1}^{n} A_{ii} . (3.2.10)

The trace of a sum of matrices splits into the sum of the traces,

tr (A + B) = tr A + tr B . (3.2.11)

Computing the trace of a matrix product is commutative,

tr (A B) = tr (B A) , (3.2.12)

although the matrix multiplication itself is in general not commutative, see equation (3.2.9),

A B \neq B A . (3.2.13)

The trace of an identity matrix of dimension n is given by

tr 1_{(n \times n)} = n . (3.2.14)
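The pair of statements (3.2.12) and (3.2.13) can be checked numerically; a minimal sketch (helper names are ours):

```python
# Sketch of equations (3.2.10)-(3.2.13): tr(AB) = tr(BA) holds even
# though AB != BA in general.

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

def matmul(A, B):
    return [[sum(A[i][m] * B[m][k] for m in range(len(B)))
             for k in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(trace(matmul(A, B)), trace(matmul(B, A)))  # 69 69
print(matmul(A, B) == matmul(B, A))              # False
```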
3.2.5 Symmetric and Antisymmetric Square Matrices

Every square matrix M could be described as the sum of a symmetric part S and an antisymmetric part A,

M_{(n \times n)} = S_{(n \times n)} + A_{(n \times n)} . (3.2.15)

The symmetric part is defined like in equation (3.1.9),

S = S^T , i.e. S_{ik} = S_{ki} . (3.2.16)

The antisymmetric part is defined like in equation (3.1.10),

A = -A^T , i.e. A_{ik} = -A_{ki} , and A_{ii} = 0 . (3.2.17)

For example an antisymmetric matrix looks like this,

A = \begin{bmatrix} 0 & 1 & -5 \\ -1 & 0 & 2 \\ 5 & -2 & 0 \end{bmatrix} .

The symmetric and the antisymmetric part of a square matrix are defined by

M = \frac{1}{2} \left( M + M^T \right) + \frac{1}{2} \left( M - M^T \right) = S + A . (3.2.18)
The transposes of the symmetric and the antisymmetric part of a square matrix are given by

S^T = \left[ \frac{1}{2} \left( M + M^T \right) \right]^T = \frac{1}{2} \left( M^T + M \right) = S , and (3.2.19)

A^T = \left[ \frac{1}{2} \left( M - M^T \right) \right]^T = \frac{1}{2} \left( M^T - M \right) = -A . (3.2.20)
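The decomposition (3.2.18) can be sketched directly (helper names `transpose` and `split` are ours):

```python
# Sketch of equation (3.2.18): M = 1/2 (M + M^T) + 1/2 (M - M^T) = S + A.

def transpose(M):
    return [list(row) for row in zip(*M)]

def split(M):
    MT = transpose(M)
    n = len(M)
    S = [[(M[i][k] + MT[i][k]) / 2 for k in range(n)] for i in range(n)]
    A = [[(M[i][k] - MT[i][k]) / 2 for k in range(n)] for i in range(n)]
    return S, A

M = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
S, A = split(M)
print(S)  # symmetric part: S equals its own transpose
print(A)  # antisymmetric part: zeros on the diagonal, A = -A^T
```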
3.2.6 Transpose of a Matrix Product

The transpose of a matrix product of two matrices is defined by

(A B)^T = B^T A^T , and (3.2.21)

\left( A^T B^T \right)^T = B \left( A^T \right)^T = B A , (3.2.22)

and for more than two matrices

(A B C)^T = C^T B^T A^T , etc. (3.2.23)

The proof starts with the (l \times n)-matrix C, which is given by the product of the two matrices A and B,

C_{(l \times n)} = A_{(l \times m)} B_{(m \times n)} ; \quad C_{ik} = \sum_{\mu=1}^{m} A_{i\mu} B_{\mu k} .
The transpose of the matrix C is given by

C^T = [C_{ki}] , and C_{ki} = \sum_{\mu=1}^{m} A_{k\mu} B_{\mu i} = \sum_{\mu=1}^{m} B_{\mu i} A_{k\mu} ,

and finally in symbol notation

C^T = (A B)^T = B^T A^T .
3.2.7 Multiplication with the Identity Matrix
The identity matrix is the multiplicative identity w.r.t. the matrix multiplication
A 1 = 1 A = A. (3.2.24)
3.2.8 Multiplication with a Diagonal Matrix

A diagonal matrix D is given by

D = [D_{ik}] = \begin{bmatrix} D_{11} & 0 & \cdots & 0 \\ 0 & D_{22} & & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & D_{nn} \end{bmatrix}_{(n \times n)} . (3.2.25)

Because the matrix multiplication is non-commutative, there exist two possibilities to compute the product of two matrices. The first possibility is the multiplication with the diagonal matrix from the left-hand side; this is called the pre-multiplication,

D A = \begin{bmatrix} D_{11} a^1 \\ D_{22} a^2 \\ \vdots \\ D_{nn} a^n \end{bmatrix} ; \quad A = \begin{bmatrix} a^1 \\ a^2 \\ \vdots \\ a^n \end{bmatrix} . (3.2.26)

Each row of the matrix A, described by a so-called row vector a^i or row matrix,

a^i = \begin{bmatrix} a_{i1} & a_{i2} & \cdots & a_{in} \end{bmatrix} , (3.2.27)

is multiplied with the matching diagonal element D_{ii}. The result is the matrix D A in equation (3.2.26). The second possibility is the multiplication with the diagonal matrix from the right-hand side; this is called the post-multiplication,

A D = \begin{bmatrix} a_1 D_{11} & a_2 D_{22} & \cdots & a_n D_{nn} \end{bmatrix} ; \quad A = [a_1, a_2, \ldots, a_n] . (3.2.28)
Each column of the matrix A, described by a so-called column vector a_i or column matrix,

a_i = \begin{bmatrix} a_{1i} \\ a_{2i} \\ \vdots \\ a_{ni} \end{bmatrix} , (3.2.29)

is multiplied with the matching diagonal element D_{ii}. The result is the matrix A D in equation (3.2.28).
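Because the diagonal matrix only scales rows or columns, the products (3.2.26) and (3.2.28) never need a full matrix product; a minimal sketch (helper names are ours, the diagonal is stored as a plain list):

```python
# Sketch of equations (3.2.26) and (3.2.28): pre-multiplication D A scales
# the rows of A, post-multiplication A D scales the columns.

def pre_mul(diag, A):   # D A, with diag = [D_11, ..., D_nn]
    return [[diag[i] * A[i][k] for k in range(len(A[0]))] for i in range(len(A))]

def post_mul(A, diag):  # A D
    return [[A[i][k] * diag[k] for k in range(len(A[0]))] for i in range(len(A))]

A = [[1, 2], [3, 4]]
d = [10, 100]
print(pre_mul(d, A))   # rows scaled:    [[10, 20], [300, 400]]
print(post_mul(A, d))  # columns scaled: [[10, 200], [30, 400]]
```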
3.2.9 Exchanging Columns and Rows of a Matrix

Exchanging the i-th and the j-th row of the matrix A is realized by the pre-multiplication with a permutation matrix T,

T_{(n \times n)} A_{(n \times n)} = \tilde{A}_{(n \times n)} , (3.2.30)

where T equals the identity matrix except in the rows i and j: there the diagonal entries are T_{ii} = T_{jj} = 0, and instead T_{ij} = T_{ji} = 1. The pre-multiplication T A returns the matrix A with the rows a^i and a^j exchanged. The matrix T is the same as its inverse, T = T^{-1}. With another matrix \bar{T}, which carries \bar{T}_{ij} = 1 and \bar{T}_{ji} = -1 instead, the i-th and the j-th row are exchanged, too, and \bar{T} = \left( \bar{T}^T \right)^{-1}; furthermore the old j-th row is multiplied by -1. Finally the post-multiplication with such a matrix T exchanges the columns i and j of a matrix.
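The row/column exchange of (3.2.30) can be sketched with an explicit permutation matrix (helper names are ours):

```python
# Sketch of equation (3.2.30): pre-multiplication with the permutation
# matrix T (identity with rows i and j exchanged) swaps rows i and j of A;
# post-multiplication swaps the columns instead.

def perm_matrix(n, i, j):
    T = [[1 if r == c else 0 for c in range(n)] for r in range(n)]
    T[i][i] = T[j][j] = 0
    T[i][j] = T[j][i] = 1
    return T

def matmul(A, B):
    return [[sum(A[r][m] * B[m][c] for m in range(len(B)))
             for c in range(len(B[0]))] for r in range(len(A))]

A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
T = perm_matrix(3, 0, 2)
print(matmul(T, A))  # rows 0 and 2 exchanged
print(matmul(A, T))  # columns 0 and 2 exchanged
print(matmul(T, T))  # the identity matrix: T equals its own inverse
```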
3.2.10 Volumetric and Deviator Part of a Matrix

It is possible to split up every symmetric matrix S into a diagonal matrix (the volumetric or ball part) V and a traceless symmetric matrix (the deviator part) D,

S_{(n \times n)} = V_{(n \times n)} + D_{(n \times n)} . (3.2.31)

The ball part is given by

V_{ii} = \frac{1}{n} \sum_{i=1}^{n} S_{ii} = \frac{1}{n} tr S , or V = \left( \frac{1}{n} tr S \right) 1 . (3.2.32)

The deviator part is the difference between the matrix S and the volumetric part,

D_{ii} = S_{ii} - V_{ii} ; \quad D = S - \left( \frac{1}{n} tr S \right) 1 , (3.2.33)

and the non-diagonal elements of the deviator are the elements of the former matrix S,

D_{ik} = S_{ik} , i \neq k , and D = D^T . (3.2.34)

The diagonal elements of the volumetric part are all equal,

V = [V_{ik}] = \begin{bmatrix} V & & & \\ & V & & \\ & & \ddots & \\ & & & V \end{bmatrix} . (3.2.35)
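The split (3.2.31)-(3.2.33) can be sketched as follows (the function name and the example matrix are ours):

```python
# Sketch of equations (3.2.31)-(3.2.33): split a symmetric matrix S into a
# volumetric part V = (tr S / n) 1 and a traceless deviator part D = S - V.

def vol_dev_split(S):
    n = len(S)
    mean = sum(S[i][i] for i in range(n)) / n
    V = [[mean if i == k else 0 for k in range(n)] for i in range(n)]
    D = [[S[i][k] - V[i][k] for k in range(n)] for i in range(n)]
    return V, D

S = [[4, 1, 0], [1, 2, -1], [0, -1, 6]]
V, D = vol_dev_split(S)
print(V)                               # diagonal, all entries tr S / n = 4
print(sum(D[i][i] for i in range(3)))  # the deviator is traceless: 0.0
```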
3.3 Inverse of a Square Matrix

3.3.1 Definition of the Inverse

A linear equation system is given by

A_{(n \times n)} x_{(n \times 1)} = y_{(n \times 1)} ; \quad A = [A_{ik}] . (3.3.1)

The inversion of this system of equations introduces the inverse A^{-1} of the matrix A,

x = A^{-1} y ; \quad A^{-1} := X = [X_{ik}] . (3.3.2)

The pre-multiplication of A x = y with the inverse A^{-1} implies

A^{-1} A x = A^{-1} y \Rightarrow x = A^{-1} y , and A^{-1} A = 1 . (3.3.3)

Finally the inverse of a matrix is defined by the following relations between a matrix A and its inverse A^{-1},

\left( A^{-1} \right)^{-1} = A , (3.3.4)

A^{-1} A = A A^{-1} = 1 , (3.3.5)

[A_{ik}] [X_{ki}] = 1 . (3.3.6)

The solution of the linear equation system exists, if and only if the inverse A^{-1} exists.

The inverse A^{-1} of a square matrix A exists, if the matrix is nonsingular (invertible), i.e. det A \neq 0; equivalently the defect d = n - r of the matrix A must be equal to zero, i.e. the rank r of the matrix A_{(n \times n)} must be equal to the number n of columns resp. rows (r = n). The rank of a rectangular matrix A_{(m \times n)} is defined by the largest number of linearly independent rows (at most m) or columns (at most n); the smaller value of m and n bounds the rank.
3.3.2 Important Identities of Determinants

1. The determinant stays the same, if a row (or a column) is added to another row (or column).

2. The determinant equals zero, if two rows (or columns) are equal, i.e. if these rows (or columns) are linearly dependent.

3. This is the generalization of the first and the second rule. The determinant equals zero, if the rows (or columns) of the matrix are linearly dependent. In this case it is possible to produce a row (or column) with all elements equal to zero, and if the determinant is expanded about this row (or column) the determinant itself equals zero.

4. By exchanging two rows (or columns) the sign of the determinant changes.

5. Multiplication with a scalar quantity \alpha is defined by

det \left( \alpha A_{(n \times n)} \right) = \alpha^n det A_{(n \times n)} ; \quad \alpha \in \mathbb{R} . (3.3.7)

6. The determinant of a product of two matrices is given by

det (A B) = det (B A) = det A \, det B . (3.3.8)
3.3.3 Derivation of the Elements of the Inverse of a Matrix

The n column vectors (n-tuples) a_k (k = 1, ..., n) of the matrix A, with a_k \in \mathbb{R}^n, are linearly independent, i.e. a sum \sum_{\nu=1}^{n} \alpha_\nu a_\nu vanishes only if all coefficients \alpha_\nu are equal to zero,

A_{(n \times n)} = \begin{bmatrix} a_1 & a_2 & \cdots & a_k & \cdots & a_n \end{bmatrix} ; \quad a_k = \begin{bmatrix} A_{1k} \\ A_{2k} \\ \vdots \\ A_{nk} \end{bmatrix} . (3.3.9)

The a_k span an n-dimensional vector space. Then every other vector, e.g. an (n+1)-th vector a_{n+1} = r \in \mathbb{R}^n, could be described by a unique linear combination of the vectors a_k, i.e. the vector r \in \mathbb{R}^n is linearly dependent on the n vectors a_k \in \mathbb{R}^n. For that reason the linear equation system

A_{(n \times n)} x_{(n \times 1)} = r_{(n \times 1)} ; \quad r \neq 0 ; \quad r \in \mathbb{R}^n ; \quad x \in \mathbb{R}^n (3.3.10)

has a unique solution, and the inverse is introduced by

A^{-1} := X , \quad A X = 1 . (3.3.11)
To compute the inverse X from the equation A X = 1 it is necessary to solve the linear equation system n times, once for each unit vector 1_j (j = 1, ..., n) on the right-hand side. The j-th equation system is given by

\begin{bmatrix} a_1 & a_2 & \cdots & a_k & \cdots & a_n \end{bmatrix} \begin{bmatrix} X_{1j} \\ X_{2j} \\ \vdots \\ X_{kj} \\ \vdots \\ X_{nj} \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 1 \\ \vdots \\ 0 \end{bmatrix} , i.e. A X_j = 1_j , (3.3.12)

with the inverse represented by its column vectors,

X = A^{-1} = \begin{bmatrix} X_1 & X_2 & \cdots & X_j & \cdots & X_n \end{bmatrix} , (3.3.13)

the identity matrix also represented by its column vectors,

1 = \begin{bmatrix} 1_1 & 1_2 & \cdots & 1_j & \cdots & 1_n \end{bmatrix} , (3.3.14)

and finally the identity matrix itself,

1 = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{bmatrix} . (3.3.15)

The solutions, represented by the vectors X_j, could be computed with determinants.
3.3.4 Computing the Elements of the Inverse with Determinants

The determinant det A_{(n \times n)} of a square matrix with elements A_{ik} \in \mathbb{R} is a real number, defined by Leibniz like this,

det A_{(n \times n)} = \sum (-1)^I A_{1j} A_{2k} A_{3l} \cdots A_{nn} . (3.3.16)

The indices j, k, l, ..., n are rearranged in all permutations of the numbers 1, 2, ..., n, and I is the total number of inversions of each permutation. The determinant det A is established as the sum of all n! such products, and there exists the same number of positive and negative terms. For example, the determinant det A of the 3 \times 3-matrix

A_{(3 \times 3)} = A = \begin{bmatrix} A_{11} & A_{12} & A_{13} \\ A_{21} & A_{22} & A_{23} \\ A_{31} & A_{32} & A_{33} \end{bmatrix} (3.3.17)

is computed as follows. An even permutation of the numbers 1, 2, 3 is a sequence like

1 2 3 , or 2 3 1 , or 3 1 2 , (3.3.18)

and an odd permutation is a sequence like

3 2 1 , or 2 1 3 , or 1 3 2 . (3.3.19)
For this example with n = 3 equation (3.3.16) becomes

det A = A_{11} A_{22} A_{33} + A_{12} A_{23} A_{31} + A_{13} A_{21} A_{32} - A_{31} A_{22} A_{13} - A_{32} A_{23} A_{11} - A_{33} A_{21} A_{12}
      = A_{11} (A_{22} A_{33} - A_{32} A_{23}) - A_{12} (A_{21} A_{33} - A_{31} A_{23}) + A_{13} (A_{21} A_{32} - A_{31} A_{22}) . (3.3.20)
This result is the same as the result obtained by the determinant expansion by minors. In this example the determinant is expanded about its first row. In general the determinant is expanded about the i-th row like this,

det A_{(n \times n)} = \sum_{j=1}^{n} A_{ij} \, (-1)^{i+j} det \tilde{A}_{ij} = \sum_{j=1}^{n} A_{ij} \hat{A}_{ij} . (3.3.21)

Here \tilde{A}_{ij} is the submatrix created by eliminating the i-th row and the j-th column of A, and the factor \hat{A}_{ij} = (-1)^{i+j} det \tilde{A}_{ij} is the so-called cofactor of the element A_{ij}. For this factor again the determinant expansion can be used.
Example: Simple 3 \times 3-matrix. The matrix

A = \begin{bmatrix} 1 & 4 & 0 \\ 2 & 1 & 1 \\ -1 & 0 & 2 \end{bmatrix}

is expanded about the first row,

det A = 1 \cdot (-1)^{1+1} det \begin{bmatrix} 1 & 1 \\ 0 & 2 \end{bmatrix} + 4 \cdot (-1)^{1+2} det \begin{bmatrix} 2 & 1 \\ -1 & 2 \end{bmatrix} + 0 \cdot (-1)^{1+3} det \begin{bmatrix} 2 & 1 \\ -1 & 0 \end{bmatrix} ,

and finally the result is

det A = 1 \cdot 2 - 4 \cdot 5 + 0 \cdot 1 = -18 .
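The expansion (3.3.21) translates directly into a recursive routine; a minimal sketch (the function name `det` is ours; zero-based indexing, so the expansion row is i = 0):

```python
# Sketch of the expansion (3.3.21): det A = sum_j A_ij (-1)^(i+j) det A~_ij,
# applied recursively about the first row.

def det(A):
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in A[1:]]  # delete row 0, column j
        total += A[0][j] * (-1) ** j * det(minor)
    return total

A = [[1, 4, 0], [2, 1, 1], [-1, 0, 2]]
print(det(A))  # -18
```

The recursion costs O(n!) operations, so it is only a didactic sketch; for larger matrices one would use an LU factorization instead.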
In order to compute the inverse X of a matrix, the determinant det A is calculated by expanding the i-th row of the matrix A. The columns of the matrix A_{(n \times n)} are assumed to be linearly independent, i.e. det A \neq 0. Equation (3.3.21) implies

\sum_{j=1}^{n} A_{ij} \hat{A}_{ij} = det A = 1 \cdot det A . (3.3.22)

The second rule about determinants implies that exchanging the expanded row i by another row k leads to a linearly dependent matrix,

\sum_{j=1}^{n} A_{kj} \hat{A}_{ij} = 0 = 0 \cdot det A , if i \neq k , (3.3.23)
or

\sum_{j=1}^{n} A_{ij} \hat{A}_{kj} = 0 = 0 \cdot det A , if i \neq k . (3.3.24)

The definition of the Kronecker delta \delta_{ik} is given by

\delta_{ik} = \begin{cases} 1 , & \text{iff } i = k \\ 0 , & \text{iff } i \neq k \end{cases} ; \quad [\delta_{ik}] = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{bmatrix} = 1 . (3.3.25)

Equations (3.3.22) and (3.3.24) are rewritten with the definition of the Kronecker delta,

\sum_{j=1}^{n} A_{ij} \hat{A}_{kj} = \delta_{ik} \, det A . (3.3.26)

The elements X_{jk} of the inverse X = A^{-1} are defined by

\sum_{j=1}^{n} A_{ij} X_{jk} = \delta_{ik} ; \quad A X = 1 , (3.3.27)

and comparing (3.3.26) with (3.3.27) implies

X_{jk} = \frac{\hat{A}_{kj}}{det A} ; \quad [X_{jk}] = A^{-1} . (3.3.28)

If the matrix is symmetric, i.e. A = A^T, the equations (3.3.26) and (3.3.27) imply

X_{jk} = \frac{\hat{A}_{jk}}{det A} ; \quad [X_{jk}] = A^{-1} , (3.3.29)

and finally

A^{-1} = \left( A^{-1} \right)^T . (3.3.30)
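The cofactor formula (3.3.28) can be sketched for small matrices (helper names are ours; `det` is the recursive expansion of (3.3.21)):

```python
# Sketch of equation (3.3.28): X_jk = cofactor(A)_kj / det A, i.e. the
# inverse is the transposed cofactor matrix (adjugate) divided by det A.

def det(A):
    if len(A) == 1:
        return A[0][0]
    return sum(A[0][j] * (-1) ** j
               * det([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(len(A)))

def inverse(A):
    n = len(A)
    d = det(A)
    cof = [[(-1) ** (i + j)
            * det([row[:j] + row[j + 1:] for r, row in enumerate(A) if r != i])
            for j in range(n)] for i in range(n)]
    # transpose the cofactor matrix and divide by the determinant
    return [[cof[j][i] / d for j in range(n)] for i in range(n)]

A = [[2, 1], [1, 1]]
print(inverse(A))  # [[1.0, -1.0], [-1.0, 2.0]]
```

Like the recursive determinant, this is a didactic sketch; in practice the inverse is obtained by solving the n systems (3.3.12) with a factorization of A.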
3.3.5 Inversions of Matrix Products

1. The inverse of a matrix product is given by

(A B)^{-1} = B^{-1} A^{-1} . (3.3.31)

Proof. Start with the identity

(A B)^{-1} (A B) = 1 ;

inserting equation (3.3.31) this implies

B^{-1} A^{-1} A B = 1 ,

and finally

B^{-1} 1 B = B^{-1} B = 1 .
2. The inverse of the triple matrix product is given by

(A B C)^{-1} = C^{-1} B^{-1} A^{-1} . (3.3.32)

3. The order of inversion and transposition could be exchanged,

\left( A^{-1} \right)^T = \left( A^T \right)^{-1} . (3.3.33)

Proof. The inverse is defined by

A A^{-1} = 1 \Rightarrow \left( A A^{-1} \right)^T = \left( A^{-1} \right)^T A^T = 1 ,

and this finally implies

\left( A^T \right)^{-1} A^T = 1 \Rightarrow \left( A^T \right)^{-1} = \left( A^{-1} \right)^T .

4. If the matrix A is symmetric, then the inverse matrix A^{-1} is symmetric, too,

A = A^T \Rightarrow A^{-1} = \left( A^{-1} \right)^T . (3.3.34)

5. For a diagonal matrix D the following relations hold,

det D = \prod_{i=1}^{n} D_{ii} , (3.3.35)

D^{-1} = \left[ \frac{1}{D_{ii}} \right] . (3.3.36)
3.4 Linear Mappings of Affine Vector Spaces

3.4.1 Matrix Multiplication as a Linear Mapping of Vectors

The linear mapping is defined by

y = A x , with x \in \mathbb{R}^m , y \in \mathbb{R}^l , and A \in \mathbb{R}^{l \times m} , (3.4.1)

and its components are

y_i = \sum_{j=1}^{m} A_{ij} x_j , i.e.

y_1 = A_{11} x_1 + \ldots + A_{1j} x_j + \ldots + A_{1m} x_m
\vdots
y_i = A_{i1} x_1 + \ldots + A_{ij} x_j + \ldots + A_{im} x_m
\vdots
y_l = A_{l1} x_1 + \ldots + A_{lj} x_j + \ldots + A_{lm} x_m . (3.4.2)
This linear function describes a mapping of the m-tuple (vector) x onto the l-tuple (vector) y with a matrix A. Furthermore the vector x \in \mathbb{R}^m may itself be described by a linear mapping with a matrix B and a vector z \in \mathbb{R}^n,

x = B z , with x \in \mathbb{R}^m , z \in \mathbb{R}^n , and B \in \mathbb{R}^{m \times n} , (3.4.3)

with the components

x_j = \sum_{k=1}^{n} B_{jk} z_k . (3.4.4)

Inserting equation (3.4.4) in equation (3.4.2) implies

y_i = \sum_{j=1}^{m} A_{ij} x_j = \sum_{j=1}^{m} \left( A_{ij} \sum_{k=1}^{n} B_{jk} z_k \right) = \sum_{k=1}^{n} \left( \sum_{j=1}^{m} A_{ij} B_{jk} \right) z_k = \sum_{k=1}^{n} C_{ik} z_k . (3.4.5)
With this relation the matrix multiplication is defined by

A B = C , (3.4.6)

with the components C_{ik} given by

\sum_{j=1}^{m} A_{ij} B_{jk} = C_{ik} . (3.4.7)

The matrix multiplication is the composition of two linear mappings,

y = A x and x = B z \Rightarrow y = A (B z) = (A B) z = C z . (3.4.8)

Figure 3.2: Matrix multiplication for a composition of matrices. (Scheme: the element C_{ik} of the (l \times n)-matrix C results from the i-th row of A_{(l \times m)} and the k-th column of B_{(m \times n)}.)
3.4.2 Similarity Transformation of Vectors

For a square matrix A a linear mapping is defined by

y = A x , with x, y \in \mathbb{R}^n , and A \in \mathbb{R}^{n \times n} . (3.4.9)

The two vectors x and y are transformed by the same nonsingular square matrix T into the vectors \bar{x} and \bar{y}. The vectors are called similar, because they are transformed in the same way,

x = T \bar{x} , with x, \bar{x} \in \mathbb{R}^n , T \in \mathbb{R}^{n \times n} , and det T \neq 0 , (3.4.10)

y = T \bar{y} , with y, \bar{y} \in \mathbb{R}^n , T \in \mathbb{R}^{n \times n} , and det T \neq 0 . (3.4.11)

Inserting these relations in equation (3.4.9) implies

T^{-1} \, | \quad T \bar{y} = A T \bar{x} , (3.4.12)

\bar{y} = T^{-1} A T \bar{x} , (3.4.13)

and finally

\bar{A} = T^{-1} A T . (3.4.14)

The matrix \bar{A} = T^{-1} A T is the result of the so-called similarity transformation of the matrix A with the nonsingular transformation matrix T. The matrices A and \bar{A} are said to be similar matrices.
3.4.3 Characteristics of the Similarity Transformation

Similar matrices A and \bar{A} have some typical characteristics:

- The determinants are equal,

det \bar{A} = det \left( T^{-1} A T \right) = det T^{-1} \, det A \, det T , and det T^{-1} = \frac{1}{det T} ,

hence

det \bar{A} = det A . (3.4.15)

- The traces are equal; with

tr (A B) = tr (B A) , (3.4.16)

it follows that

tr \bar{A} = tr \left( T^{-1} A T \right) = tr \left( A T T^{-1} \right) ,

hence

tr \bar{A} = tr A . (3.4.17)

- Similar matrices have the same eigenvalues.

- Similar matrices have the same characteristic polynomial.
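The invariance of determinant and trace under a similarity transformation can be checked numerically for a 2 \times 2 example (helper names and the matrices are ours; the inverse uses the closed 2 \times 2 formula):

```python
# Sketch of equations (3.4.15) and (3.4.17): A_bar = T^-1 A T has the
# same determinant and trace as A.

def matmul(A, B):
    return [[sum(A[i][m] * B[m][k] for m in range(2)) for k in range(2)]
            for i in range(2)]

def det2(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def inv2(T):
    d = det2(T)
    return [[T[1][1] / d, -T[0][1] / d], [-T[1][0] / d, T[0][0] / d]]

A = [[3, 1], [0, 2]]                  # det A = 6, tr A = 5
T = [[1, 2], [1, 3]]                  # det T = 1, nonsingular
A_bar = matmul(matmul(inv2(T), A), T)  # similar matrix T^-1 A T
print(det2(A_bar), A_bar[0][0] + A_bar[1][1])  # 6.0 5.0, same as for A
```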
3.4.4 Congruence Transformation of Vectors

Let y = A x be a linear mapping

y = A x , with x, y \in \mathbb{R}^n , and A \in \mathbb{R}^{n \times n} , (3.4.18)

with a square matrix A. The vectors x and y are transformed in an opposite way (contragredient) with the nonsingular square matrix T and the vectors \bar{x} and \bar{y},

x = T \bar{x} , with x, \bar{x} \in \mathbb{R}^n , T \in \mathbb{R}^{n \times n} , and det T \neq 0 . (3.4.19)

The vector \bar{y} is the result of the multiplication of the transpose of the matrix T and the vector y,

\bar{y} = T^T y , with y, \bar{y} \in \mathbb{R}^n , T \in \mathbb{R}^{n \times n} , and det T \neq 0 . (3.4.20)

Inserting equation (3.4.19) in equation (3.4.18) implies

T^T \, | \quad y = A T \bar{x} ,

and comparing this with equation (3.4.20) implies

T^T y = T^T A T \bar{x} ,

and finally

\bar{y} = \bar{A} \bar{x} , (3.4.21)

\bar{A} = T^T A T . (3.4.22)

The matrix product \bar{A} = T^T A T is called the congruence transformation of the matrix A. The matrices A and \bar{A} are called congruent matrices.
3.4.5 Characteristics of the Congruence Transformation

Congruent matrices A and \bar{A} have some typical characteristics:

- The congruence transformation keeps the matrix A symmetric:

condition: A = A^T , (3.4.23)

assumption: \bar{A} = \bar{A}^T , (3.4.24)

proof: \bar{A}^T = \left( T^T A T \right)^T = T^T A^T T = T^T A T = \bar{A} . (3.4.25)

- The product P = x^T y = x^T A x is an invariant scalar quantity:

assumption: P = x^T y = \bar{P} = \bar{x}^T \bar{y} , (3.4.26)

proof: x = T \bar{x} \Rightarrow \bar{x} = T^{-1} x , and \bar{y} = T^T y , with det T \neq 0 . (3.4.27)

\bar{x}^T \bar{y} = \left( T^{-1} x \right)^T T^T y = x^T \left( T^{-1} \right)^T T^T y = x^T \left( T^T \right)^{-1} T^T y = x^T y .

The scalar product P = x^T y = x^T A x is also called the quadratic form of the vector x. The quantity P could describe a mechanical work, if the elements of the vector x describe a displacement and the components of the vector y describe the assigned forces of a static system. The invariance of this work under congruence transformations is important for numerical mechanics, e.g. for the finite element method.
3.4.6 Orthogonal Transformation

Let the square matrix A describe a linear mapping

y = A x , with x, y \in \mathbb{R}^n , and A \in \mathbb{R}^{n \times n} . (3.4.28)

The vectors x and y are transformed in the similar way and in the congruent way with the so-called orthogonal matrix T = Q, det Q \neq 0,

x = Q \bar{x} , y = Q \bar{y} \Rightarrow \bar{y} = Q^{-1} y , the similarity transformation, (3.4.29)

\bar{y} = Q^T y , the congruence transformation. (3.4.30)

For the orthogonal transformation the transformation matrices are called orthogonal, if they fulfill the relation

Q^{-1} = Q^T , or Q Q^T = 1 . (3.4.31)

For orthogonal matrices the following identities hold:

- If a matrix is orthogonal, its inverse equals the transpose of the matrix.

- The determinant of an orthogonal matrix has the value +1 or -1,

det Q = \pm 1 . (3.4.32)

- The product of orthogonal matrices is again orthogonal.

- An orthogonal matrix with determinant +1 is called a rotation matrix.
Figure 3.3: Orthogonal transformation. (Sketch: the coordinate axes y_1, y_2 are rotated by the angle \alpha into the axes \bar{y}_1, \bar{y}_2, with the components y_1 \cos\alpha, y_1 \sin\alpha, y_2 \cos\alpha, y_2 \sin\alpha marked in the figure.)

The most important usage of these rotation matrices is the rotation transformation of coordinates. For example the rotation transformation in \mathbb{R}^2 is given by

y = Q \bar{y} , (3.4.33)

\begin{bmatrix} y_1 \\ y_2 \end{bmatrix} = \begin{bmatrix} \cos\alpha & -\sin\alpha \\ \sin\alpha & \cos\alpha \end{bmatrix} \begin{bmatrix} \bar{y}_1 \\ \bar{y}_2 \end{bmatrix} \Leftrightarrow y = Q \bar{y} , (3.4.34)

and

\bar{y} = Q^T y , (3.4.35)

\begin{bmatrix} \bar{y}_1 \\ \bar{y}_2 \end{bmatrix} = \begin{bmatrix} \cos\alpha & \sin\alpha \\ -\sin\alpha & \cos\alpha \end{bmatrix} \begin{bmatrix} y_1 \\ y_2 \end{bmatrix} \Leftrightarrow \bar{y} = Q^T y . (3.4.36)
The inversion of equation (3.4.33) with the aid of determinants implies

\bar{y} = Q^{-1} y ; \quad Q^{-1} := X ; \quad X_{ik} = \frac{\hat{Q}_{ki}}{det Q} . (3.4.37)

Solving this step by step, starting with computing the determinant,

det Q = \cos^2\alpha + \sin^2\alpha = 1 , (3.4.38)

the general form of the equation to compute the elements of the inverse,

\hat{Q}_{ki} = (-1)^{k+i} det \tilde{Q}_{ki} , (3.4.39)

the different elements are

X_{11} = \frac{Q_{22}}{1} = \cos\alpha , (3.4.40)

X_{12} = (-1)^3 Q_{12} = (-1)^3 (-\sin\alpha) = +\sin\alpha , (3.4.41)

X_{21} = (-1)^3 Q_{21} = (-1)^3 (\sin\alpha) = -\sin\alpha , (3.4.42)

X_{22} = Q_{11} = \cos\alpha , (3.4.43)

and finally

X = Q^{-1} = \begin{bmatrix} \cos\alpha & \sin\alpha \\ -\sin\alpha & \cos\alpha \end{bmatrix} . (3.4.44)

Comparing this result with equation (3.4.35) leads to

Q^{-1} = Q^T . (3.4.45)
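The relations (3.4.38) and (3.4.45) for the 2 \times 2 rotation matrix can be checked numerically (helper names and the test angle are ours):

```python
# Sketch of equations (3.4.34), (3.4.38) and (3.4.45): the 2x2 rotation
# matrix Q(alpha) has det Q = 1 and satisfies Q Q^T = 1, i.e. Q^-1 = Q^T.

import math

def Q(alpha):
    c, s = math.cos(alpha), math.sin(alpha)
    return [[c, -s], [s, c]]

def matmul(A, B):
    return [[sum(A[i][m] * B[m][k] for m in range(2)) for k in range(2)]
            for i in range(2)]

def transpose(A):
    return [list(row) for row in zip(*A)]

R = Q(0.7)
det = R[0][0] * R[1][1] - R[0][1] * R[1][0]
I = matmul(R, transpose(R))
print(round(det, 12))                              # 1.0
print([[round(x, 12) for x in row] for row in I])  # [[1.0, 0.0], [0.0, 1.0]]
```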
3.4.7 The Gauss Transformation

Let A_{(m \times n)} be a real-valued matrix, A_{ik} \in \mathbb{R}. If m > n, the matrix A is assumed to have full column rank, i.e. the column vectors are linearly independent. The Gauss transformation is defined by

B = A^T A , with B \in \mathbb{R}^{n \times n} , A^T \in \mathbb{R}^{n \times m} , and A \in \mathbb{R}^{m \times n} . (3.4.46)
The matrix B is symmetric, i.e.

B = B^T , (3.4.47)

because

B^T = \left( A^T A \right)^T = A^T \left( A^T \right)^T = A^T A = B . (3.4.48)

If the columns of A are linearly independent, then the matrix B is nonsingular, i.e. the determinant is not equal to zero,

det B \neq 0 . (3.4.49)

This product was introduced by Gauss in order to compute the so-called normal equation. The matrix A is given by its column vectors,

A = \begin{bmatrix} a_1 & a_2 & \cdots & a_n \end{bmatrix} , a_i = i-th column vector of A , (3.4.50)

and the matrix B is computed by

B = A^T A , (3.4.51)

B = \begin{bmatrix} a_1^T \\ a_2^T \\ \vdots \\ a_n^T \end{bmatrix} \begin{bmatrix} a_1 & a_2 & \cdots & a_n \end{bmatrix} = \begin{bmatrix} a_1^T a_1 & a_1^T a_2 & \cdots & a_1^T a_n \\ a_2^T a_1 & a_2^T a_2 & \cdots & \vdots \\ \vdots & \vdots & \ddots & \vdots \\ a_n^T a_1 & \cdots & \cdots & a_n^T a_n \end{bmatrix}_{(n \times n)} . (3.4.52)

An element B_{ik} of the product matrix is the scalar product of the i-th column vector with the k-th column vector of A,

B_{ik} = a_i^T a_k . (3.4.53)

The diagonal elements are the squared norms of the column vectors, and these values are always positive (a_i \neq 0). Their sum, i.e. the trace of the product A^T A, which equals the sum of all A_{ik}^2, is the square of a matrix norm, the so-called Euclidean matrix norm N(A),

N(A) = \sqrt{tr \left( A^T A \right)} = \sqrt{\sum_{i,k} A_{ik}^2} . (3.4.54)
For example,

A = \begin{bmatrix} 2 & 1 \\ 0 & 3 \\ -3 & 2 \end{bmatrix} \Rightarrow A^T A = \begin{bmatrix} 13 & -4 \\ -4 & 14 \end{bmatrix} ,

N(A) = \sqrt{2^2 + 1^2 + 3^2 + (-3)^2 + 2^2} = 3 \sqrt{3} = \sqrt{\sum_{i,k} A_{ik}^2} ,

N(A) = \sqrt{13 + 14} = 3 \sqrt{3} = \sqrt{tr \left( A^T A \right)} .

The matrix B = A^T A is positive definite.
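The worked example can be reproduced numerically, assuming the signs of the example matrix as reconstructed above (helper names are ours):

```python
# Sketch of equations (3.4.46) and (3.4.54): B = A^T A is symmetric, and
# N(A) = sqrt(tr(A^T A)) = sqrt(sum of all A_ik^2).

import math

def transpose(A):
    return [list(row) for row in zip(*A)]

def matmul(A, B):
    return [[sum(A[i][m] * B[m][k] for m in range(len(B)))
             for k in range(len(B[0]))] for i in range(len(A))]

A = [[2, 1], [0, 3], [-3, 2]]
B = matmul(transpose(A), A)
print(B)                                                # [[13, -4], [-4, 14]]
print(math.sqrt(B[0][0] + B[1][1]))                     # 3*sqrt(3), about 5.196
print(math.sqrt(sum(x * x for row in A for x in row)))  # the same value
```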
3.5 Quadratic Forms

3.5.1 Representations and Characteristics

Let y = A x be a linear mapping

y = A x , with x \in \mathbb{R}^n , y \in \mathbb{R}^n , and A_{ik} \in \mathbb{R} , (3.5.1)

with the nonsingular, square and symmetric matrix A, i.e.

det A \neq 0 , and A = A^T . (3.5.2)

The product

\alpha := x^T y = x^T A x (3.5.3)

is a real number, \alpha \in \mathbb{R}, and is called the quadratic form of A. The following conditions hold,

\alpha = \alpha^T , scalar quantities are invariant w.r.t. transposition, and (3.5.4)

\alpha = x^T A x = \alpha^T = x^T A^T x , because A = A^T , (3.5.5)

i.e. the matrix A must be symmetric. The scalar quantity \alpha, and with it the matrix A, is called positive definite (or negative definite), if the following conditions hold,

\alpha = x^T A x \begin{cases} > 0 \; (< 0) , & \text{for every } x \neq 0 \\ = 0 , & \text{iff } x = 0 \end{cases} . (3.5.6)

It is necessary that the determinant does not equal zero, det A \neq 0, i.e. the matrix A must be nonsingular. If there exists a vector x \neq 0 such that \alpha = 0, then the form \alpha = x^T A x is called semidefinite. In this case the matrix A is singular, i.e. det A = 0, and the homogeneous system of equations,

A x = 0 \quad \left( x^T A x = 0 , \text{ iff } x \neq 0 , \text{ and } det A = 0 \right) , (3.5.7)

or resp.

a_1 x_1 + a_2 x_2 + \ldots + a_n x_n = 0 , (3.5.8)

has nontrivial solutions, because of the linear dependence of the columns of the matrix A. The condition x^T A x = 0 with a vector x \neq 0 can only hold, if the determinant of the matrix A equals zero, det A = 0.
3.5.2 Congruence Transformation of a Matrix

Let \alpha = x^T A x be a quadratic form, given by

\alpha = x^T A x , and A^T = A . (3.5.9)

The vector x is transformed by the nonsingular transformation T,

x = T y , (3.5.10)

\alpha = y^T T^T A T y = y^T B y . (3.5.11)

The matrix A transforms like

B = T^T A T , (3.5.12)

where T is a real nonsingular matrix. Then the matrices B and A are called congruent to each other,

A \overset{c}{\sim} B . (3.5.13)

The congruence transformation preserves the symmetry of the matrix A, because the following equation holds,

B = B^T . (3.5.14)
3.5.3 Derivatives of a Quadratic Form

The quadratic form

\alpha = x^T A x , with A^T = A , (3.5.15)

is to be partially differentiated w.r.t. the components of the vector x. The derivative of the vector x w.r.t. its i-th component forms the column matrix

\frac{\partial x}{\partial x_i} = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 1 \\ \vdots \\ 0 \end{bmatrix} = e_i , the i-th unit vector, (3.5.16)

and

\frac{\partial x^T}{\partial x_i} = \begin{bmatrix} 0 & 0 & \cdots & 1 & \cdots & 0 \end{bmatrix} = e_i^T . (3.5.17)

With equations (3.5.16) and (3.5.17) the derivative of the quadratic form is given by

\frac{\partial \alpha}{\partial x_i} = e_i^T A x + x^T A e_i . (3.5.18)

With the symmetry of the matrix A,

A = A^T , (3.5.19)

the second part of equation (3.5.18) is rewritten as

\left( x^T A e_i \right)^T = e_i^T A^T x = e_i^T A x , (3.5.20)

and finally

\frac{\partial \alpha}{\partial x_i} = 2 \, e_i^T A x . (3.5.21)

The quantity e_i^T A x is the i-th component of the vector A x. Furthermore the n derivatives \partial \alpha / \partial x_i are combined as a column matrix,

\frac{\partial \alpha}{\partial x} = \begin{bmatrix} \partial \alpha / \partial x_1 \\ \partial \alpha / \partial x_2 \\ \vdots \\ \partial \alpha / \partial x_n \end{bmatrix} = 2 \cdot 1 \cdot A x \Rightarrow \frac{\partial \alpha}{\partial x} = 2 A x . (3.5.22)
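The result (3.5.22) can be verified against a central finite-difference quotient (helper names and the example matrix are ours; for a quadratic form the central difference is exact up to rounding):

```python
# Sketch of equation (3.5.22): for symmetric A the gradient of the quadratic
# form alpha = x^T A x is 2 A x, checked against a finite-difference quotient.

def quad_form(A, x):
    n = len(x)
    return sum(x[i] * A[i][k] * x[k] for i in range(n) for k in range(n))

def grad_exact(A, x):  # 2 A x
    n = len(x)
    return [2 * sum(A[i][k] * x[k] for k in range(n)) for i in range(n)]

A = [[2, 1], [1, 3]]   # symmetric
x = [1.0, -2.0]
h = 1e-6
grad_fd = []
for i in range(len(x)):
    xp = x[:]; xp[i] += h
    xm = x[:]; xm[i] -= h
    grad_fd.append((quad_form(A, xp) - quad_form(A, xm)) / (2 * h))
print(grad_exact(A, x))  # [0.0, -10.0]
print(grad_fd)           # matches up to the finite-difference rounding error
```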
3.6 Matrix Eigenvalue Problem

3.6.1 The Special Eigenvalue Problem

For a given linear mapping

y_{(n \times 1)} = A_{(n \times n)} x_{(n \times 1)} , with x, y \in \mathbb{R}^n ; x_i, y_i, A_{ik} \in \mathbb{F} ; and det A \neq 0 , (3.6.1)

find the vectors x_0 in the direction of the vectors y_0,

y_0 = \lambda x_0 , with \lambda \in \mathbb{F} . (3.6.2)

The direction associated with an eigenvector x_0 is called a principal axis, and the whole task is described as the principal axes problem of the matrix A. The scalar quantity \lambda is called the eigenvalue; because of this definition the whole problem is also called the eigenvalue problem. Equation (3.6.2) could be rewritten like this,

y_0 = \lambda \, 1 \, x_0 , (3.6.3)

and inserting this in equation (3.6.1) implies

y_0 = A x_0 = \lambda \, 1 \, x_0 , (3.6.4)

and finally the special eigenvalue problem,

(A - \lambda 1) x_0 = 0 . (3.6.5)

The so-called special eigenvalue problem is characterized by the eigenvalues appearing only on the main diagonal. The homogeneous linear equation system for x_0 always has the trivial solution x_0 = 0; a nontrivial solution exists only if the condition

det (A - \lambda 1) = 0 (3.6.6)

is fulfilled. This equation is called the characteristic equation, and the left-hand side det (A - \lambda 1) is called the characteristic polynomial. The components of the vector x_0 are yet unknown; the vector x_0 can only be determined up to its norm, because only the principal axes, i.e. directions, are sought. Expanding the determinant for a matrix with n rows yields a polynomial of n-th degree. The roots, sometimes also called the zeros, of this polynomial are the eigenvalues,

p(\lambda) = det (\lambda 1 - A) = \lambda^n + a_{n-1} \lambda^{n-1} + \ldots + a_1 \lambda + a_0 . (3.6.7)

The coefficients a_{n-1} and a_0 of the polynomial are given by

a_{n-1} = -tr A , and (3.6.8)

a_0 (-1)^n = det A . (3.6.9)
With the polynomial factorization the polynomial (3.6.7) could be described by

p(\lambda) = (\lambda - \lambda_1)(\lambda - \lambda_2) \cdots (\lambda - \lambda_n) . (3.6.10)

Comparing this with Newton's relations for symmetric polynomials, the equations (3.6.8) and (3.6.9) could be rewritten like this,

tr A = \lambda_1 + \lambda_2 + \ldots + \lambda_n , and (3.6.11)

det A = \lambda_1 \lambda_2 \cdots \lambda_n . (3.6.12)

Because the eigenvector associated with each eigenvalue cannot be computed in magnitude, only in direction, the eigenvectors are normed. The eigenvalue equations read

y_{0i} = A x_{0i} = \lambda_i x_{0i} , with i = 1, 2, 3, \ldots, n , (3.6.13)

with the eigenvectors x_{0i}. If for example the matrix (A - \lambda 1) has the reduction of rank d = 1 and the vector x_{0i}^{(1)} is an arbitrary solution of the eigenvalue problem, then the equation

x_{0i} = c \, x_{0i}^{(1)} , (3.6.14)

with the free parameter c, represents the general solution of the eigenvalue problem. If the reduction of rank of the matrix is larger than 1, then there exist d > 1 linearly independent eigenvectors for this eigenvalue. As a rule of thumb: eigenvectors of different eigenvalues are linearly independent.
For a symmetric matrix A the following identities hold.

1st Rule. The eigenvectors x_{0i} of a nonsingular and symmetric matrix A are orthogonal to each other.

Proof. Let the vectors x_{0i} = x_i, i.e. x_1 and x_2, be eigenvectors, so that

(A - \lambda_1 1) x_1 = 0 , (3.6.15)

multiplied with the vector x_2^T from the left-hand side,

x_2^T (A - \lambda_1 1) x_1 = 0 , (3.6.16)

and in the same way

x_1^T (A - \lambda_2 1) x_2 = 0 . (3.6.17)

Finally, subtracting equation (3.6.16) from equation (3.6.17) yields

-x_2^T A x_1 + \lambda_1 x_2^T x_1 + x_1^T A x_2 - \lambda_2 x_1^T x_2 = 0 . (3.6.18)
With A being symmetric,

x_1^T A x_2 = \left( x_1^T A x_2 \right)^T = x_2^T A x_1 , since A = A^T , (3.6.19)

this results in

(\lambda_1 - \lambda_2) \, x_1^T x_2 = 0 , (3.6.20)

and because (\lambda_1 - \lambda_2) \neq 0 for distinct eigenvalues, the scalar product x_1^T x_2 must equal zero. This means that the vectors are orthogonal,

x_1 \perp x_2 , iff \lambda_1 \neq \lambda_2 . (3.6.21)
Furthermore there holds the

2nd Rule. A real, nonsingular and symmetric square matrix with n rows has exactly n real eigenvalues \lambda_i, being the roots of its characteristic equation.

Proof. Assume a pair of complex conjugate eigenvalues, given by

\lambda_1 = \alpha + i\beta , and \lambda_2 = \alpha - i\beta , (3.6.22)

then the associated eigenvectors are given by

x_1 = b + ic , and x_2 = b - ic . (3.6.23)

Inserting these relations in the above orthogonality condition,

(\lambda_1 - \lambda_2) \, x_1^T x_2 = 0 , (3.6.24)

implies

(\lambda_1 - \lambda_2) \left( b^T + ic^T \right) \left( b - ic \right) = 0 , (3.6.25)

and finally

2i\beta \left( b^T b + c^T c \right) = 0 . (3.6.26)

This equation implies \beta = 0, because the term \left( b^T b + c^T c \right) \neq 0 is nonzero, i.e. the eigenvalues are real numbers.
3.6.2 Rayleigh Quotient

The largest eigenvalue \lambda_1 of a symmetric matrix could be estimated with the Rayleigh quotient. The special eigenvalue problem y = A x = \lambda x, or

(A - \lambda 1) x = 0 , with A = A^T , det A \neq 0 , A_{ij} \in \mathbb{R} , (3.6.27)

has n eigenvalues, here assumed to be ordered by magnitude,

|\lambda_1| \geq |\lambda_2| \geq |\lambda_3| \geq \ldots \geq |\lambda_n| , with \lambda \in \mathbb{R} . (3.6.28)

For large matrices A the setting-up and the solution of the characteristic equation is very complicated. Furthermore for some problems it is sufficient to know only the largest and/or the smallest eigenvalue; e.g. for a stability problem only the smallest eigenvalue is of interest, because this is the critical load. Therefore the so-called direct method by von Mises to compute an approximation \lambda^{(1)} of the dominant eigenvalue is interesting. (To determine the smallest eigenvalue, e.g. the critical load case, it is necessary to compute the inverse before starting the actual iteration.) This so-called von Mises iteration is given by

z_\nu = A z_{\nu-1} = A^\nu z_0 . (3.6.29)

In this iterative process the vector z_\nu converges to the direction of x_1, i.e. to the eigenvector associated with the eigenvalue \lambda_1 with the largest absolute value. The starting vector z_0 is represented in terms of the linearly independent eigenvectors x_i,

z_0 = C_1 x_1 + C_2 x_2 + \ldots + C_n x_n \neq 0 , (3.6.30)
and after \nu iterations the vector is given by

z_\nu = \lambda_1^\nu C_1 x_1 + \lambda_2^\nu C_2 x_2 + \ldots + \lambda_n^\nu C_n x_n \neq 0 . (3.6.31)

If the condition |\lambda_1| > |\lambda_2| \geq |\lambda_3| \geq \ldots \geq |\lambda_n| holds, then with raising \nu the vector z_\nu converges to the eigenvector x_1 multiplied with a constant c_1,

z_\nu \to \lambda_1^\nu c_1 x_1 , (3.6.32)

z_{\nu+1} \to \lambda_1 z_\nu . (3.6.33)

A componentwise approximation of the eigenvalue is given by

q_i^{(\nu)} = \frac{z_i^{(\nu)}}{z_i^{(\nu-1)}} \to \lambda_1 . (3.6.34)
The convergence will be better, if the ratio |\lambda_1| / |\lambda_2| increases. A very good approximated value \lambda^{(1)} for the dominant (largest) eigenvalue \lambda_1 is established with the so-called Rayleigh quotient,

\lambda^{(1)} = R[z_\nu] = \frac{z_\nu^T z_{\nu+1}}{z_\nu^T z_\nu} = \frac{z_\nu^T A z_\nu}{z_\nu^T z_\nu} , with \lambda^{(1)} \to \lambda_1 . (3.6.35)

The numerator and the denominator of the Rayleigh quotient include scalar products of the approximated vectors. For this reason the information of all components q_i^{(\nu)} is used in this approximation.
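The iteration (3.6.29) together with the Rayleigh quotient (3.6.35) can be sketched in a few lines (helper names and the test matrix are ours; the iterate is rescaled each step, which does not change its direction):

```python
# Sketch of the von Mises iteration (3.6.29) with the Rayleigh quotient
# (3.6.35): z_nu = A z_{nu-1} converges to the dominant eigenvector, and
# R[z] = (z^T A z) / (z^T z) approximates the dominant eigenvalue.

def matvec(A, z):
    return [sum(A[i][k] * z[k] for k in range(len(z))) for i in range(len(A))]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

A = [[2, 1], [1, 2]]   # symmetric, eigenvalues 3 and 1
z = [1.0, 0.0]         # starting vector z_0
for _ in range(25):
    z = matvec(A, z)
    norm = max(abs(c) for c in z)   # rescale to avoid overflow
    z = [c / norm for c in z]

rayleigh = dot(z, matvec(A, z)) / dot(z, z)
print(round(rayleigh, 6))  # 3.0, the dominant eigenvalue
```

Because the Rayleigh quotient error is quadratic in the eigenvector error for symmetric matrices, it converges much faster than the componentwise ratios (3.6.34).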
TU Braunschweig, CSE Vector and Tensor Calculus 22nd October 2003
3.6. Matrix Eigenvalue Problem 69
3.6.3 The General Eigenvalue Problem
The general eigenvalue problem is defined by

    A x = \lambda B x ,   (3.6.36)
    (A - \lambda B) x = 0 ,   (3.6.37)

with the matrices A and B being nonsingular. Here the eigenvalue $\lambda$ is multiplied with an arbitrary matrix B and not with the identity matrix 1. This problem is reduced to the special eigenvalue problem by multiplication with the inverse of the matrix B from the left-hand side,

    ( B^{-1} A - \lambda 1 ) x = 0 ,   (3.6.38)
    ( C - \lambda 1 ) x = 0 , \quad \text{with} \quad C = B^{-1} A .   (3.6.39)

Even if the matrices A and B are symmetric, the matrix $C = B^{-1} A$ is in general a nonsymmetric matrix, because the matrix multiplication is noncommutative.
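A short NumPy check of this reduction (a sketch with made-up symmetric matrices A and B, not from the notes): solving the special problem for $C = B^{-1} A$ reproduces eigenpairs of $A x = \lambda B x$, and C is indeed nonsymmetric although A and B are symmetric.

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])      # symmetric
B = np.array([[2.0, 0.0],
              [0.0, 1.0]])      # symmetric and nonsingular

# Reduction to the special eigenvalue problem, eq. (3.6.38)/(3.6.39):
# solve B C = A instead of forming the inverse of B explicitly.
C = np.linalg.solve(B, A)
vals, vecs = np.linalg.eig(C)   # eigenpairs of the special problem
```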
3.6.4 Similarity Transformation
In the special eigenvalue problem

    A x = y = \lambda x = \lambda 1 x ,   (3.6.40)

the vectors are transformed as in a similarity transformation,

    x = T \bar{x} , \quad \text{and} \quad y = T \bar{y} , \quad \text{with} \quad \bar{y} = \lambda 1 \bar{x} .   (3.6.41)

The transformation matrix T is nonsingular, i.e. $\det T \neq 0$, and $T_{ik} \in \mathbb{R}$. This implies

    A T \bar{x} = \lambda T \bar{x} ,
    T^{-1} A T \bar{x} - \lambda \bar{x} = 0 ,
    ( T^{-1} A T - \lambda 1 ) \bar{x} = 0 , \quad \text{and}   (3.6.42)
    ( \bar{A} - \lambda 1 ) \bar{x} = 0 .   (3.6.43)
The determinant of the inverse of the matrix T is given by

    \det ( T^{-1} ) = \frac{1}{\det T} ,   (3.6.44)

and the determinant of the product is split into the product of the determinants,

    \det ( T^{-1} A T ) = \det ( T^{-1} ) \det A \det T ,   (3.6.45)
    \det \bar{A} = \det A .
3.6.4.0.19 Rule. The eigenvalues of the matrix A do not change if the matrix is transformed into the similar matrix $\bar{A}$,

    \det ( T^{-1} A T - \lambda 1 ) = \det ( \bar{A} - \lambda 1 ) = \det ( A - \lambda 1 ) = 0 .   (3.6.46)
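The rule can be verified numerically; in this NumPy sketch (the test matrices are chosen here, they are not in the notes) the similar matrix $\bar{A} = T^{-1} A T$ keeps both the eigenvalues and the determinant of A.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])        # triangular, eigenvalues 2 and 3
T = np.array([[1.0, 2.0],
              [1.0, 1.0]])        # det T = -1, hence nonsingular

A_bar = np.linalg.inv(T) @ A @ T  # similar matrix, eq. (3.6.42)
```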
3.6.5 Transformation into a Diagonal Matrix
The nonsingular symmetric matrix A with n rows contains n linearly independent eigenvectors $x_i$, if and only if for any multiple eigenvalue $\lambda_\sigma$ (i.e. a multiple root of the characteristic polynomial) of multiplicity $p_\sigma$ the reduction of rank $d_\sigma = p_\sigma$ (with $\sigma = 1, 2, \ldots, s$) of the characteristic matrix $(A - \lambda 1)$ equals the multiplicity of the multiple eigenvalue. The quantity s describes the number of different eigenvalues. The n linearly independent normed eigenvectors $x_i$ of the matrix A are combined as column vectors to form the nonsingular eigenvector matrix,

    X = [ x_1 , x_2 , \ldots , x_n ] , \quad \text{with} \quad \det X \neq 0 .   (3.6.47)
The equation of eigenvalues is given by

    A x_i = \lambda_i x_i ,   (3.6.48)
    A X = [ A x_1 , A x_2 , \ldots , A x_n ] = [ \lambda_1 x_1 , \lambda_2 x_2 , \ldots , \lambda_n x_n ] ,   (3.6.49)

    [ \lambda_1 x_1 , \ldots , \lambda_n x_n ] = [ x_1 , x_2 , \ldots , x_n ]
    \begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & & \vdots \\ \vdots & & \ddots & 0 \\ 0 & \cdots & 0 & \lambda_n \end{bmatrix} ,   (3.6.50)

    [ \lambda_1 x_1 , \ldots , \lambda_n x_n ] = X \Lambda .   (3.6.51)

Combining the results implies

    A X = X \Lambda ,   (3.6.52)

and finally

    X^{-1} A X = \Lambda , \quad \text{with} \quad \det X \neq 0 .   (3.6.53)
Therefore the diagonal matrix $\Lambda$ of the eigenvalues could be computed by the similarity transformation of the matrix A with the eigenvector matrix X. In the opposite direction a transformation matrix T must fulfill some conditions, in order to transform a matrix A by a similarity transformation into a diagonal matrix, i.e.

    T^{-1} A T = D = \operatorname{diag} ( D_{ii} ) ,   (3.6.54)

or

    A T = T D , \quad \text{with} \quad T = [ t_1 , \ldots , t_n ] ,   (3.6.55)

and finally

    A t_i = D_{ii} t_i .   (3.6.56)

The column vectors $t_i$ of the transformation matrix T are the n linearly independent eigenvectors of the matrix A with the associated eigenvalues $\lambda_i = D_{ii}$.
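The diagonalization (3.6.53) can be sketched as follows (NumPy, with a made-up symmetric matrix; not part of the notes): the columns of X are the normed eigenvectors, and $X^{-1} A X$ returns the diagonal matrix of eigenvalues.

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 3.0]])          # symmetric test matrix
lam, X = np.linalg.eigh(A)          # eigenvalues and eigenvector matrix X

Lambda = np.linalg.inv(X) @ A @ X   # similarity transformation, eq. (3.6.53)
```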
3.6.6 Cayley-Hamilton Theorem
The Cayley-Hamilton theorem says that an arbitrary square matrix A satisfies its own characteristic equation. If the characteristic polynomial for the matrix A is

    p ( \lambda ) = \det ( \lambda 1 - A )   (3.6.57)
    = \lambda^n + a_{n-1} \lambda^{n-1} + \ldots + a_1 \lambda + a_0 ,   (3.6.58)

then the matrix A solves the Cayley-Hamilton equation

    p ( A ) = A^n + a_{n-1} A^{n-1} + \ldots + a_1 A + a_0 1 = 0 ,   (3.6.59)

where the matrix A to the power n is given by

    A^n = A \cdot A \cdot \ldots \cdot A .   (3.6.60)

The matrix with the exponent n, written like $A^n$, could therefore be described by a linear combination of the matrices with the exponents $n-1$ down to 0, resp. $A^{n-1}$ down to $A^0 = 1$. If the matrix A is nonsingular, then also negative quantities as exponents are allowed, e.g. the inverse follows from

    A^{-1} p ( A ) = 0 ,   (3.6.61)
    A^{-1} = - \frac{1}{a_0} \left( A^{n-1} + a_{n-1} A^{n-2} + \ldots + a_1 1 \right) , \quad a_0 \neq 0 .   (3.6.62)
Furthermore the power series $P(A)$ of a matrix A, with the eigenvalues $\lambda_\sigma$ appearing $\mu_\sigma$-times in the minimal polynomial, converges, if and only if the usual power series converges for all eigenvalues $\lambda_\sigma$ of the matrix A. For example

    e^A = 1 + A + \frac{1}{2!} A^2 + \frac{1}{3!} A^3 + \ldots ,   (3.6.63)
    \cos ( A ) = 1 - \frac{1}{2!} A^2 + \frac{1}{4!} A^4 - \ldots ,   (3.6.64)
    \sin ( A ) = A - \frac{1}{3!} A^3 + \frac{1}{5!} A^5 - \ldots .   (3.6.65)
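A small numerical check of the theorem (a NumPy sketch with a test matrix chosen here, not in the notes): the coefficients of the characteristic polynomial are taken from np.poly, $p(A)$ evaluates to the zero matrix, and for $n = 2$ the inverse follows from (3.6.62).

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
n = A.shape[0]

coeffs = np.poly(A)   # [1, a_{n-1}, ..., a_0] of p(lambda) = det(lambda 1 - A)

# p(A) = A^n + a_{n-1} A^{n-1} + ... + a_0 1, eq. (3.6.59)
pA = sum(c * np.linalg.matrix_power(A, n - k) for k, c in enumerate(coeffs))

# Inverse from eq. (3.6.62) for n = 2: A^{-1} = -(A + a_1 1) / a_0
A_inv = -(A + coeffs[1] * np.eye(n)) / coeffs[-1]
```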
3.6.7 Proof of the Cayley-Hamilton Theorem
A vector $z \in \mathbb{R}^n$ is represented by a combination of the linearly independent eigenvectors $x_i$ of the matrix A, which is similar to a diagonal matrix with n rows,

    z = c_1 x_1 + c_2 x_2 + \ldots + c_n x_n ,   (3.6.66)

with the $c_i$ called the evaluation coefficients. Introducing some basic vectors and matrices, in order to establish the evaluation theorem,

    X = [ x_1 , x_2 , \ldots , x_n ] ,   (3.6.67)
    c = [ c_1 , c_2 , \ldots , c_n ]^T ,   (3.6.68)
    X c = z , \quad \text{and}   (3.6.69)
    c = X^{-1} z .   (3.6.70)
Let $z_0$ be an arbitrary real vector to start with, and establish the iterated vectors

    z_1 = A z_0 ,   (3.6.71)
    z_2 = A z_1 = A^2 z_0 ,   (3.6.72)
    \vdots
    z_n = A z_{n-1} = A^n z_0 .   (3.6.73)

The $n + 1$ vectors $z_0$ up to $z_n$ are linearly dependent, because every $n + 1$ vectors in $\mathbb{R}^n$ must be linearly dependent. The characteristic polynomial of the matrix A is given by

    p ( \lambda ) = \det ( \lambda 1 - A )   (3.6.74)
    = a_0 + a_1 \lambda + \ldots + a_{n-1} \lambda^{n-1} + \lambda^n .   (3.6.75)
The relation between the starting vector $z_0$ and the first n iterated vectors $z_i$ is given by the following equations, the evaluation theorem

    z_0 = c_1 x_1 + c_2 x_2 + \ldots + c_n x_n ,   (3.6.76)

and the eigenvalue problem

    A x_i = \lambda_i x_i , \quad \text{and} \quad p ( \lambda ) = \det ( \lambda 1 - A ) = 0 .   (3.6.77)
The n vectors $z_0$ up to $z_n$ are iterated by

    z_0 = z_0 ,   (3.6.78)

and

    z_1 = A z_0 ,
    z_1 = c_1 A x_1 + c_2 A x_2 + \ldots + c_n A x_n ,
    z_1 = \lambda_1 c_1 x_1 + \lambda_2 c_2 x_2 + \ldots + \lambda_n c_n x_n ,   (3.6.79)
    \vdots
    z_n = \lambda_1^n c_1 x_1 + \lambda_2^n c_2 x_2 + \ldots + \lambda_n^n c_n x_n ,   (3.6.80)

and finally summed like this, with $a_n = 1$,

      z_0 = z_0   \mid \cdot \, a_0
    + z_1 = \lambda_1 c_1 x_1 + \lambda_2 c_2 x_2 + \ldots + \lambda_n c_n x_n   \mid \cdot \, a_1
    \vdots
    + z_n = \lambda_1^n c_1 x_1 + \lambda_2^n c_2 x_2 + \ldots + \lambda_n^n c_n x_n   \mid \cdot \, 1   (3.6.81)
leads to the result

    a_0 z_0 + a_1 z_1 + \ldots + z_n
    = \left( a_0 + a_1 \lambda_1 + \ldots + a_{n-1} \lambda_1^{n-1} + \lambda_1^n \right) c_1 x_1
    + \left( a_0 + a_1 \lambda_2 + \ldots + a_{n-1} \lambda_2^{n-1} + \lambda_2^n \right) c_2 x_2
    \vdots
    + \left( a_0 + a_1 \lambda_n + \ldots + a_{n-1} \lambda_n^{n-1} + \lambda_n^n \right) c_n x_n .   (3.6.82)
With equations (3.6.71)-(3.6.73),

    ( a_0 1 + a_1 A + \ldots + A^n ) z_0 = p ( \lambda_1 ) c_1 x_1 + p ( \lambda_2 ) c_2 x_2 + \ldots + p ( \lambda_n ) c_n x_n ,
    p ( A ) z_0 = 0 \cdot c_1 x_1 + 0 \cdot c_2 x_2 + \ldots + 0 \cdot c_n x_n ,

and finally

    p ( A ) z_0 = a_0 z_0 + a_1 z_1 + \ldots + z_n = 0 .   (3.6.83)
Inserting the iterated vectors $z_k = A^k z_0$, see equations (3.6.71)-(3.6.73), in equation (3.6.83) leads to

    p ( A ) z_0 = a_0 z_0 + a_1 A z_0 + \ldots + A^n z_0 = 0 ,   (3.6.84)
    = ( a_0 1 + a_1 A + \ldots + A^n ) z_0 = 0 ,   (3.6.85)

and with an arbitrary vector $z_0$ the term in brackets must equal the zero matrix,

    a_0 1 + a_1 A + \ldots + A^n = 0 .   (3.6.86)

In other words, an arbitrary square matrix A solves its own characteristic equation. If the characteristic polynomial of the matrix A is given by equation (3.6.74), then the matrix A solves the so-called Cayley-Hamilton equation,

    p ( A ) = a_0 1 + a_1 A + \ldots + A^n = 0 .   (3.6.87)

The polynomial $p(A)$ of the matrix A equals the zero matrix, and the $a_i$ are the coefficients of the characteristic polynomial of the matrix A,

    p ( \lambda ) = \det ( \lambda 1 - A ) = \lambda^n + a_{n-1} \lambda^{n-1} + \ldots + a_1 \lambda + a_0 = 0 .   (3.6.88)
Chapter 4
Vector and Tensor Algebra
See, for example, SIMMONDS [12], HALMOS [6], MATTHEWS [11], and ABRAHAM, MARSDEN, and RATIU [1], and in German DE BOER [3], STEIN ET AL. [13], and IBEN [7].
Chapter Table of Contents
4.1 Index Notation and Basis . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
4.1.1 The Summation Convention . . . . . . . . . . . . . . . . . . . . . . . 78
4.1.2 The Kronecker delta . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
4.1.3 The Covariant Basis and Metric Coefcients . . . . . . . . . . . . . . 80
4.1.4 The Contravariant Basis and Metric Coefcients . . . . . . . . . . . . 81
4.1.5 Raising and Lowering of an Index . . . . . . . . . . . . . . . . . . . . 82
4.1.6 Relations between Co- and Contravariant Metric Coefcients . . . . . . 83
4.1.7 Co- and Contravariant Coordinates of a Vector . . . . . . . . . . . . . 84
4.2 Products of Vectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
4.2.1 The Scalar Product or Inner Product of Vectors . . . . . . . . . . . . . 85
4.2.2 Denition of the Cross Product of Base Vectors . . . . . . . . . . . . . 87
4.2.3 The Permutation Symbol in Cartesian Coordinates . . . . . . . . . . . 87
4.2.4 Denition of the Scalar Triple Product of Base Vectors . . . . . . . . . 88
4.2.5 Introduction of the Determinant with the Permutation Symbol . . . . . 89
4.2.6 Cross Product and Scalar Triple Product of Arbitrary Vectors . . . . . . 90
4.2.7 The General Components of the Permutation Symbol . . . . . . . . . . 92
4.2.8 Relations between the Permutation Symbols . . . . . . . . . . . . . . . 93
4.2.9 The Dyadic Product or the Direct Product of Vectors . . . . . . . . . . 94
4.3 Tensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
4.3.1 Introduction of a Second Order Tensor . . . . . . . . . . . . . . . . . . 96
4.3.2 The Denition of a Second Order Tensor . . . . . . . . . . . . . . . . 97
4.3.3 The Complete Second Order Tensor . . . . . . . . . . . . . . . . . . . 99
4.4 Transformations and Products of Tensors . . . . . . . . . . . . . . . . . . . 101
4.4.1 The Transformation of Base Vectors . . . . . . . . . . . . . . . . . . . 101
4.4.2 Collection of Transformations of Basis . . . . . . . . . . . . . . . . . 103
4.4.3 The Tensor Product of Second Order Tensors . . . . . . . . . . . . . . 105
4.4.4 The Scalar Product or Inner Product of Tensors . . . . . . . . . . . . . 110
4.5 Special Tensors and Operators . . . . . . . . . . . . . . . . . . . . . . . . . 112
4.5.1 The Determinant of a Tensor in Cartesian Coordinates . . . . . . . . . 112
4.5.2 The Trace of a Tensor . . . . . . . . . . . . . . . . . . . . . . . . . . 112
4.5.3 The Volumetric and Deviator Tensor . . . . . . . . . . . . . . . . . . . 113
4.5.4 The Transpose of a Tensor . . . . . . . . . . . . . . . . . . . . . . . . 114
4.5.5 The Symmetric and Antisymmetric (Skew) Tensor . . . . . . . . . . . 115
4.5.6 The Inverse of a Tensor . . . . . . . . . . . . . . . . . . . . . . . . . . 115
4.5.7 The Orthogonal Tensor . . . . . . . . . . . . . . . . . . . . . . . . . . 116
4.5.8 The Polar Decomposition of a Tensor . . . . . . . . . . . . . . . . . . 117
4.5.9 The Physical Components of a Tensor . . . . . . . . . . . . . . . . . . 118
4.5.10 The Isotropic Tensor . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
4.6 The Principal Axes of a Tensor . . . . . . . . . . . . . . . . . . . . . . . . . 120
4.6.1 Introduction to the Problem . . . . . . . . . . . . . . . . . . . . . . . 120
4.6.2 Components in a Cartesian Basis . . . . . . . . . . . . . . . . . . . . . 122
4.6.3 Components in a General Basis . . . . . . . . . . . . . . . . . . . . . 122
4.6.4 Characteristic Polynomial and Invariants . . . . . . . . . . . . . . . . 123
4.6.5 Principal Axes and Eigenvalues of Symmetric Tensors . . . . . . . . . 124
4.6.6 Real Eigenvalues of a Symmetric Tensors . . . . . . . . . . . . . . . . 124
4.6.7 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
4.6.8 The Eigenvalue Problem in a General Basis . . . . . . . . . . . . . . . 125
4.7 Higher Order Tensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
4.7.1 Review on Second Order Tensor . . . . . . . . . . . . . . . . . . . . . 127
4.7.2 Introduction of a Third Order Tensor . . . . . . . . . . . . . . . . . . . 127
4.7.3 The Complete Permutation Tensor . . . . . . . . . . . . . . . . . . . . 128
4.7.4 Introduction of a Fourth Order Tensor . . . . . . . . . . . . . . . . . . 128
4.7.5 Tensors of Various Orders . . . . . . . . . . . . . . . . . . . . . . . . 129
4.1 Index Notation and Basis
4.1.1 The Summation Convention
For a product the summation convention, invented by Einstein, holds if one index of summation is a superscript index and the other one is a subscript index. This repeated index implies that the term is to be summed from $i = 1$ to $i = n$ in general,

    \sum_{i=1}^{n} a^i b_i = a^1 b_1 + a^2 b_2 + \ldots + a^n b_n = a^i b_i ,   (4.1.1)

and for the special case of $n = 3$ like this,

    \sum_{j=1}^{3} v^j g_j = v^1 g_1 + v^2 g_2 + v^3 g_3 = v^j g_j ,   (4.1.2)

or even for two suffices,

    \sum_{i=1}^{3} \sum_{k=1}^{3} g_{ik} u^i v^k
    = g_{11} u^1 v^1 + g_{12} u^1 v^2 + g_{13} u^1 v^3
    + g_{21} u^2 v^1 + g_{22} u^2 v^2 + g_{23} u^2 v^3
    + g_{31} u^3 v^1 + g_{32} u^3 v^2 + g_{33} u^3 v^3
    = g_{ik} u^i v^k .   (4.1.3)
The repeated index of summation is also called the dummy index. This means that changing the index i to j or k or any other symbol does not affect the value of the sum. But it is important to notice that it is not allowed to repeat an index more than twice! Another important thing to note about index notation is the use of the free indices. The free indices in every term and on both sides of an equation must match. For that reason the addition of two vectors could be written in different ways, where a, b and c are vectors in the vector space $V$ with the dimension n, and the $a^i$, $b^i$ and $c^i$ are their components,

    a + b = c \quad \Leftrightarrow \quad a^i + b^i = c^i \quad \Leftrightarrow \quad
    \begin{cases} a^1 + b^1 = c^1 \\ a^2 + b^2 = c^2 \\ \quad \vdots \\ a^n + b^n = c^n \end{cases}
    , \quad a , b , c \in V .   (4.1.4)
For the special case of Cartesian coordinates there holds another important convention. In this case it is allowed to sum repeated subscript or superscript indices; in general for a Cartesian coordinate system the subscript index is preferred,

    \sum_{i=1}^{3} x_i e_i = x_i e_i .   (4.1.5)
If the indices of two terms are written in brackets, it is forbidden to sum these terms,

    v^{(m)} g_{(m)} \neq \sum_{m=1}^{3} v^m g_m .   (4.1.6)
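The summation convention maps directly onto np.einsum; the following NumPy sketch (not part of the notes, the metric matrix is made-up test data) evaluates the double sum $g_{ik} u^i v^k$ from (4.1.3): the repeated indices i and k are summed automatically.

```python
import numpy as np

g = np.array([[1.0, 0.2, 0.0],
              [0.2, 1.0, 0.1],
              [0.0, 0.1, 1.0]])    # hypothetical metric coefficients g_ik
u = np.array([1.0, 2.0, 3.0])      # contravariant coordinates u^i
v = np.array([4.0, 5.0, 6.0])      # contravariant coordinates v^k

# Repeated indices i and k are summed, no free index remains: eq. (4.1.3).
s = np.einsum('ik,i,k->', g, u, v)
```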
4.1.2 The Kronecker delta
The Kronecker delta is defined by

    \delta_j^i = \delta_i^j = \delta_{ij} = \delta^{ij} = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{if } i \neq j \end{cases} .   (4.1.7)
An index i, for example in a 3-dimensional space, is substituted with another index j by multiplication with the Kronecker delta,

    \delta_i^j v_j = \sum_{j=1}^{3} \delta_i^j v_j = \delta_i^1 v_1 + \delta_i^2 v_2 + \delta_i^3 v_3 = v_i ,   (4.1.8)
or with a summation over two indices,

    \delta_i^j v^i u_j = \sum_{i=1}^{3} \sum_{j=1}^{3} \delta_i^j v^i u_j
    = \delta_1^1 v^1 u_1 + \delta_1^2 v^1 u_2 + \delta_1^3 v^1 u_3
    + \delta_2^1 v^2 u_1 + \delta_2^2 v^2 u_2 + \delta_2^3 v^2 u_3
    + \delta_3^1 v^3 u_1 + \delta_3^2 v^3 u_2 + \delta_3^3 v^3 u_3
    = 1 \, v^1 u_1 + 0 \, v^1 u_2 + 0 \, v^1 u_3 + 0 \, v^2 u_1 + 1 \, v^2 u_2 + 0 \, v^2 u_3 + 0 \, v^3 u_1 + 0 \, v^3 u_2 + 1 \, v^3 u_3 ,
    \delta_i^j v^i u_j = v^1 u_1 + v^2 u_2 + v^3 u_3 = v^i u_i \equiv v \cdot u ,   (4.1.9)
or just for a Kronecker delta with two equal indices,

    \delta_j^j = \sum_{j=1}^{3} \delta_j^j = \delta_1^1 + \delta_2^2 + \delta_3^3 = 3 ,   (4.1.10)

and for the scalar product of two Kronecker deltas,

    \delta_i^j \delta_j^k = \sum_{j=1}^{3} \delta_i^j \delta_j^k = \delta_i^1 \delta_1^k + \delta_i^2 \delta_2^k + \delta_i^3 \delta_3^k = \delta_i^k .   (4.1.11)
For the special case of Cartesian coordinates the Kronecker delta is identified with the unit matrix or identity matrix,

    [ \delta_{ij} ] = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} .   (4.1.12)
4.1.3 The Covariant Basis and Metric Coefcients
In an n-dimensional affine vector space $\mathbb{R}^n_{\mathrm{aff}} \cong E^n = V$ a vector v is given by

    v = v^i g_i , \quad \text{with} \quad v , g_i \in V , \quad \text{and} \quad i = 1, 2, 3 .   (4.1.13)

The vectors $g_i$ are chosen as linearly independent, i.e. they form a basis. If the index i is a subscript index, the definitions

    g_i \quad \text{covariant base vectors,}   (4.1.14)

and

    v^i \quad \text{contravariant coordinates,}   (4.1.15)

of v with respect to the $g_i$ hold. The $v^1 g_1$, $v^2 g_2$, $v^3 g_3$ are called the components of v. The scalar product of the base vectors $g_i$ and $g_k$ is defined by

    g_i \cdot g_k = g_{ik}   (4.1.16)
    = g_k \cdot g_i = g_{ki} ,   (4.1.17)
    g_{ik} = g_{ki} ,   (4.1.18)
and these coefficients are called the

    g_{ik} = g_{ki} \quad \text{covariant metric coefficients.}

The metric coefficients are symmetric, because of the commutativity of the scalar product $g_i \cdot g_k = g_k \cdot g_i$. The determinant of the matrix of the covariant metric coefficients $g_{ik}$,

    g = \det [ g_{ik} ] ,   (4.1.19)

is nonzero, if and only if the $g_i$ form a basis. For the Cartesian basis the metric coefficients vanish except the ones for $i = k$, and the coefficient matrix becomes the identity matrix or the Kronecker delta,

    e_i \cdot e_k = \delta_{ik} = \begin{cases} 1 & i = k \\ 0 & i \neq k \end{cases} .   (4.1.20)
4.1.4 The Contravariant Basis and Metric Coefcients
Assume a new basis, reciprocal to the covariant base vectors $g_i$, by introducing the

    g^k \quad \text{contravariant base vectors,}

in the same space as the covariant base vectors. These contravariant base vectors are defined by

    g_i \cdot g^k = \delta_i^k = \begin{cases} 1 & i = k \\ 0 & i \neq k \end{cases} ,   (4.1.21)

and with the covariant coordinates $v_i$ the vector v is given by

    v = v_i g^i , \quad \text{with} \quad v , g^i \in V , \quad \text{and} \quad i = 1, \ldots, n .   (4.1.22)

For example in the 2-dimensional vector space $E^2$, see Figure 4.1,

Figure 4.1: Example of co- and contravariant base vectors in $E^2$.

    g^1 \cdot g_2 = 0 , \quad \text{i.e.} \quad g^1 \perp g_2 ,   (4.1.23)
    g^2 \cdot g_1 = 0 , \quad \text{i.e.} \quad g^2 \perp g_1 ,   (4.1.24)
    g^1 \cdot g_1 = 1 ,   (4.1.25)
    g^2 \cdot g_2 = 1 .   (4.1.26)
The scalar product of the contravariant base vectors $g^i$,

    g^i \cdot g^k = g^{ik}   (4.1.27)
    = g^k \cdot g^i = g^{ki} ,   (4.1.28)
    g^{ik} = g^{ki} ,   (4.1.29)
Figure 4.2: Special case of a Cartesian basis, $e_1 = e^1$, $e_2 = e^2$, $e_3 = e^3$.

defines the

    g^{ik} = g^{ki} \quad \text{contravariant metric coefficients.}

For the special case of Cartesian coordinates and an orthonormal basis $e_i$ the co- and contravariant base vectors are equal, see Figure 4.2. For that reason it is not necessary to differentiate between subscript and superscript indices. From now on Cartesian base vectors and Cartesian coordinates get only subscript indices,

    u = u_i e_i , \quad \text{or} \quad u = u_j e_j .   (4.1.30)
4.1.5 Raising and Lowering of an Index
If the vectors $g_i$, $g_m$ and $g^k$ are in the same space $V$, it must be possible to describe $g^k$ by a product of the $g_m$ and some coefficients $A^{km}$,

    g^k = A^{km} g_m .   (4.1.31)

Both sides of the equation are multiplied with $g^i$,

    g^k \cdot g^i = A^{km} g_m \cdot g^i ,   (4.1.32)

and with the definition of the Kronecker delta,

    g^{ki} = A^{km} \delta_m^i ,   (4.1.33)
    g^{ki} = A^{ki} .   (4.1.34)

The result is the following relation between co- and contravariant base vectors,

    g^k = g^{ki} g_i .   (4.1.35)
Lemma 4.1. The covariant base vectors transform with the contravariant metric coefficients into the contravariant base vectors,

    g^k = g^{ki} g_i \quad \text{raising an index with the contravariant metric coefficients.}

The same argumentation for the covariant metric coefficients starts with

    g_k = A_{km} g^m ,   (4.1.36)
    g_k \cdot g_i = A_{km} \delta_i^m ,   (4.1.37)
    g_{ki} = A_{ki} ,   (4.1.38)

and finally implies

    g_k = g_{ki} g^i .   (4.1.39)

As a rule of thumb:

Lemma 4.2. The contravariant base vectors transform with the covariant metric coefficients into the covariant base vectors,

    g_k = g_{ki} g^i \quad \text{lowering an index with the covariant metric coefficients.}
4.1.6 Relations between Co- and Contravariant Metric Coefcients
Both sides of the transformation formula

    g^k = g^{km} g_m ,   (4.1.40)

are multiplied with the vector $g_i$,

    g^k \cdot g_i = g^{km} g_m \cdot g_i .   (4.1.41)

Comparing this with the definitions of the Kronecker delta (4.1.7) and of the metric coefficients (4.1.16) and (4.1.27) leads to

    \delta_i^k = g^{km} g_{mi} .   (4.1.42)

Like in the expression $A^{-1} A = 1$, the co- and contravariant metric coefficients are inverse to each other. In matrix notation equation (4.1.42) denotes

    1 = [ g^{km} ] [ g_{mi} ] ,   (4.1.43)
    [ g^{km} ] = [ g_{mi} ]^{-1} ,   (4.1.44)

and for the determinants

    \det [ g^{ik} ] = \frac{1}{\det [ g_{ik} ]} .   (4.1.45)

With the definition of the determinant, equation (4.1.19), the determinant of the covariant metric coefficients gives

    \det [ g_{ik} ] = g ,   (4.1.46)

and the determinant of the contravariant metric coefficients

    \det [ g^{ik} ] = \frac{1}{g} .   (4.1.47)
4.1.7 Co- and Contravariant Coordinates of a Vector
The vector $v \in V$ is represented by the two expressions $v = v^i g_i$ and $v = v_k g^k$. Comparing these expressions, with respect to equation (4.1.39), $g_i = g_{ik} g^k$, leads to

    v^i g_{ik} g^k = v_k g^k \quad \Rightarrow \quad v_k = g_{ik} v^i .   (4.1.48)

After changing the indices of the symmetric covariant metric coefficients, like in equation (4.1.18), the transformation from contravariant coordinates to covariant coordinates denotes like this,

    v_k = g_{ki} v^i .   (4.1.49)

In the same way comparing these expressions with the contravariant relation $g^k = g^{ik} g_i$, see equations (4.1.35) and (4.1.29), gives

    v^i g_i = v_k g^{ki} g_i \quad \Rightarrow \quad v^i = g^{ki} v_k ,   (4.1.50)

and after changing the indices

    v^i = g^{ik} v_k .   (4.1.51)

Lemma 4.3. The covariant coordinates transform like the covariant base vectors, i.e. with the covariant metric coefficients, and vice versa. In index notation the transformation for the covariant coordinates and the covariant base vectors looks like this,

    v_i = g_{ik} v^k \quad \Leftrightarrow \quad g_i = g_{ik} g^k ,   (4.1.52)

and for the contravariant coordinates and the contravariant base vectors,

    v^k = g^{ki} v_i \quad \Leftrightarrow \quad g^k = g^{ki} g_i .   (4.1.53)
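The raising and lowering of indices can be sketched numerically (a NumPy example added here, not from the notes; the covariant base vectors below are made-up, non-orthonormal test data). The columns of G are the covariant base vectors, the columns of G_con the contravariant ones obtained via (4.1.35).

```python
import numpy as np

G = np.array([[1.0, 1.0, 0.0],    # columns: covariant base vectors g_1, g_2, g_3
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 2.0]])

g_cov = G.T @ G                   # covariant metric g_ik = g_i . g_k, eq. (4.1.16)
g_con = np.linalg.inv(g_cov)      # contravariant metric g^ik, eq. (4.1.44)
G_con = G @ g_con                 # raising: columns g^k = g^ki g_i, eq. (4.1.35)

v = np.array([2.0, 3.0, 4.0])
v_con = np.linalg.solve(G, v)     # contravariant coordinates v^i with v = v^i g_i
v_cov = g_cov @ v_con             # lowering: v_k = g_ki v^i, eq. (4.1.49)
```

The first assertion below checks the duality $g^k \cdot g_i = \delta_i^k$ of (4.1.21), the second that the covariant coordinates are the scalar products $v \cdot g_k$.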
4.2 Products of Vectors
4.2.1 The Scalar Product or Inner Product of Vectors
The scalar product of two vectors u and $v \in V$ is denoted by

    \alpha = \langle u \mid v \rangle \equiv u \cdot v , \quad \alpha \in \mathbb{R} ,   (4.2.1)

and also called the inner product or dot product of vectors. The vectors u and v are represented by

    u = u^i g_i \quad \text{and} \quad v = v^i g_i ,   (4.2.2)

with respect to the covariant base vectors $g_i \in V$ and $i = 1, \ldots, n$, or by

    u = u_i g^i \quad \text{and} \quad v = v_i g^i ,   (4.2.3)

w.r.t. the contravariant base vectors $g^j$ and $j = 1, \ldots, n$. By combining these representations the scalar product of two vectors could be written in four variations,

    \alpha = u \cdot v = u^i v^j g_i \cdot g_j = u^i v^j g_{ij} = u^i v_i ,   (4.2.4)
    \alpha = u_i v_j g^i \cdot g^j = u_i v_j g^{ij} = u_i v^i ,   (4.2.5)
    \alpha = u^i v_j g_i \cdot g^j = u^i v_j \delta_i^{\,j} = u^i v_i ,   (4.2.6)
    \alpha = u_i v^j g^i \cdot g_j = u_i v^j \delta^i_{\,j} = u_i v^i .   (4.2.7)
The Euclidean norm is the connection between elements of the same dimension in a vector space. The absolute values of the vectors u and v are represented by

    | u | = \| u \|_2 = \sqrt{ u \cdot u } ,   (4.2.8)
    | v | = \| v \|_2 = \sqrt{ v \cdot v } .   (4.2.9)

The scalar product or inner product of two vectors in $V$ is a bilinear mapping from two vectors to $\mathbb{R}$.
Theorem 4.1. In the 3-dimensional Euclidean vector space $E^3$ one important application of the scalar product is the definition of the work as the force times the distance moved in the direction of the force,

    \text{Work} = \text{Force in direction of the distance} \cdot \text{Distance} , \quad \text{or} \quad W = f \cdot d .   (4.2.10)

Theorem 4.2. The scalar product in the 3-dimensional Euclidean vector space $E^3$ is written $u \cdot v$ and is defined as the product of the absolute values of the two vectors and the cosine of the angle between them,

    \alpha = u \cdot v := | u | \, | v | \cos \varphi .   (4.2.11)
Figure 4.3: Projection of a vector v on the direction of the vector u.

The quantity $| v | \cos \varphi$ represents in the 3-dimensional Euclidean vector space $E^3$ the projection of the vector v in the direction of the vector u, see Figure 4.3. The unit vector in direction of the vector u is given by

    e_u = \frac{u}{| u |} .   (4.2.12)

Therefore the cosine of the angle $\varphi$ is given by

    \cos \varphi = \frac{u \cdot v}{| u | \, | v |} .   (4.2.13)
The absolute value of a vector is its Euclidean norm and is computed by

    | u | = \sqrt{ u \cdot u } , \quad \text{and} \quad | v | = \sqrt{ v \cdot v } .   (4.2.14)

This formula rewritten with the base vectors $g_i$ and $g^i$ simplifies in index notation to

    | u | = \sqrt{ u^i g_i \cdot u_k g^k } = \sqrt{ u^i u_k \delta_i^k } = \sqrt{ u^i u_i } ,   (4.2.15)
    | v | = \sqrt{ v^i v_i } .   (4.2.16)
The cosine between two vectors in the 3-dimensional Euclidean vector space $E^3$ is therefore defined by

    \cos \varphi = \frac{u^i v_i}{\sqrt{ u^j u_j } \sqrt{ v^k v_k }} = \frac{u_i v^i}{\sqrt{ u_j u^j } \sqrt{ v_k v^k }} .   (4.2.17)

For example the scalar product of two vectors w.r.t. the Cartesian basis $g_i = e_i = e^i$ in the
3-dimensional Euclidean vector space $E^3$,

    \alpha = \begin{bmatrix} 6 \\ 3 \\ 7 \end{bmatrix} \cdot \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix} = 6 \cdot 1 + 3 \cdot 2 + 7 \cdot 3 = 33 ,

    \alpha = u^i v^j g_{ij} = u^1 v^j g_{1j} + u^2 v^j g_{2j} + u^3 v^j g_{3j}
    = 6 \cdot 1 \cdot 1 + 6 \cdot 2 \cdot 0 + 6 \cdot 3 \cdot 0
    + 3 \cdot 1 \cdot 0 + 3 \cdot 2 \cdot 1 + 3 \cdot 3 \cdot 0
    + 7 \cdot 1 \cdot 0 + 7 \cdot 2 \cdot 0 + 7 \cdot 3 \cdot 1 = 33 .
4.2.2 Denition of the Cross Product of Base Vectors
The cross product, also called the vector product, or the outer product, is only defined in the 3-dimensional Euclidean vector space $E^3$. The cross product of two arbitrary, linearly independent covariant base vectors $g_i , g_j \in E^3$ implies another vector $g^k \in E^3$ and is introduced by

    g_i \times g_j = \alpha \, g^k ,   (4.2.18)

with the conditions

    i \neq j \neq k , \quad \text{and} \quad i , j , k = 1, 2, 3 , \; \text{or another even permutation of} \; i , j , k .
4.2.3 The Permutation Symbol in Cartesian Coordinates
The cross products of the Cartesian base vectors $e_i$ in the 3-dimensional Euclidean vector space $E^3$ are given by

    e_1 \times e_2 = e_3 = e^3 ,   (4.2.19)
    e_2 \times e_3 = e_1 = e^1 ,   (4.2.20)
    e_3 \times e_1 = e_2 = e^2 ,   (4.2.21)
    e_2 \times e_1 = - e_3 = - e^3 ,   (4.2.22)
    e_3 \times e_2 = - e_1 = - e^1 ,   (4.2.23)
    e_1 \times e_3 = - e_2 = - e^2 .   (4.2.24)

The Cartesian components of a permutation tensor $\varepsilon$, or just the permutation symbols, are defined by

    e_{ijk} = \begin{cases} +1 & \text{if } (i, j, k) \text{ is an even permutation of } (1, 2, 3) , \\ -1 & \text{if } (i, j, k) \text{ is an odd permutation of } (1, 2, 3) , \\ \phantom{+} 0 & \text{if two or more indices are equal.} \end{cases}   (4.2.25)
Thus, returning to equations (4.2.19)-(4.2.24), the cross products of the Cartesian base vectors could be described by the permutation symbols like this,

    e_i \times e_j = e_{ijk} e_k .   (4.2.26)

For example

    e_1 \times e_2 = e_{121} e_1 + e_{122} e_2 + e_{123} e_3 = 0 \, e_1 + 0 \, e_2 + 1 \, e_3 = e_3 ,
    e_1 \times e_3 = e_{131} e_1 + e_{132} e_2 + e_{133} e_3 = 0 \, e_1 + ( -1 ) \, e_2 + 0 \, e_3 = - e_2 .
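The definition (4.2.25) and the component form (4.2.26) can be sketched directly with a 3 x 3 x 3 array (a NumPy example added here, not in the notes; indices are 0-based, and the vectors are made-up test data):

```python
import numpy as np

# Cartesian permutation symbol e_ijk, eq. (4.2.25), with 0-based indices.
e = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    e[i, j, k] = 1.0      # even permutations
    e[i, k, j] = -1.0     # odd permutations

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

# (a x b)_k = e_ijk a_i b_j, the Cartesian component form of eq. (4.2.26).
c = np.einsum('ijk,i,j->k', e, a, b)
```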
4.2.4 Denition of the Scalar Triple Product of Base Vectors
Starting again with the cross product of base vectors, see equation (4.2.18),

    g_i \times g_j = \alpha \, g^k ,   (4.2.27)
    i \neq j \neq k , \quad i , j , k = 1, 2, 3 , \; \text{or another even permutation of} \; i , j , k .

The $g^k$ are the contravariant base vectors and the scalar quantity $\alpha$ is computed by multiplication of equation (4.2.18) with the covariant base vector $g_k$,

    ( g_i \times g_j ) \cdot g_k = \alpha \, g^k \cdot g_k ,   (4.2.28)

and, summed over the three even permutations of $(i, j, k)$,

    3 \, [ g_1 , g_2 , g_3 ] = \alpha \, \delta_k^k = 3 \alpha .   (4.2.29)

This result is the so-called scalar triple product of the base vectors,

    \alpha = [ g_1 , g_2 , g_3 ] .   (4.2.30)
This scalar triple product of the base vectors $g_i$, $i = 1, 2, 3$, represents the volume of the parallelepiped formed by the three vectors $g_i$. Comparing equations (4.2.28) and (4.2.29) implies for the contravariant base vectors

    g^k = \frac{g_i \times g_j}{[ g_1 , g_2 , g_3 ]} ,   (4.2.31)

and for the covariant base vectors

    g_k = \frac{g^i \times g^j}{[ g^1 , g^2 , g^3 ]} .   (4.2.32)

Furthermore the scalar product of the two scalar triple products of base vectors is given by

    [ g_1 , g_2 , g_3 ] \cdot [ g^1 , g^2 , g^3 ] = \alpha \cdot \frac{1}{\alpha} = 1 .   (4.2.33)
4.2.5 Introduction of the Determinant with the Permutation Symbol
The scalar quantity $\alpha$ in the section above could also be described by the square root of the determinant of the covariant metric coefficients,

    \alpha = ( \det g_{ij} )^{\frac{1}{2}} = \sqrt{g} .   (4.2.34)

The determinant of a $3 \times 3$ matrix could be represented by the permutation symbols $e_{ijk}$,

    \det [ a_{mn} ] = \begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix} = a_{1i} a_{2j} a_{3k} e_{ijk} .   (4.2.35)
Computing the determinant by expanding about the first row implies

    \det [ a_{mn} ] = a_{11} \begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix} - a_{12} \begin{vmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{vmatrix} + a_{13} \begin{vmatrix} a_{21} & a_{22} \\ a_{31} & a_{32} \end{vmatrix} ,

and finally

    \det [ a_{mn} ] = a_{11} a_{22} a_{33} - a_{11} a_{32} a_{23} - a_{12} a_{21} a_{33} + a_{12} a_{31} a_{23} + a_{13} a_{21} a_{32} - a_{13} a_{31} a_{22} .   (4.2.36)
The alternative way with the permutation symbol is given by

    \det [ a_{mn} ] = a_{11} a_{22} a_{33} e_{123} + a_{11} a_{23} a_{32} e_{132} + a_{12} a_{23} a_{31} e_{231} + a_{12} a_{21} a_{33} e_{213} + a_{13} a_{21} a_{32} e_{312} + a_{13} a_{22} a_{31} e_{321} ,

and after inserting the values of the various permutation symbols,

    \det [ a_{mn} ] = a_{11} a_{22} a_{33} \cdot 1 + a_{11} a_{23} a_{32} \cdot ( -1 ) + a_{12} a_{23} a_{31} \cdot 1 + a_{12} a_{21} a_{33} \cdot ( -1 ) + a_{13} a_{21} a_{32} \cdot 1 + a_{13} a_{22} a_{31} \cdot ( -1 ) ,

and finally the result is equal to the first way of computing the determinant, see equation (4.2.36),

    \det [ a_{mn} ] = a_{11} a_{22} a_{33} - a_{11} a_{23} a_{32} + a_{12} a_{23} a_{31} - a_{12} a_{21} a_{33} + a_{13} a_{21} a_{32} - a_{13} a_{22} a_{31} .   (4.2.37)
Equation (4.2.35) can be written with contravariant elements, too,

    \det [ a^{mn} ] = a^{1i} a^{2j} a^{3k} e_{ijk} .   (4.2.38)
The matrix of the covariant metric coefficients is the inverse of the matrix of the contravariant metric coefficients and vice versa,

    g_{ij} g^{jk} = \delta_i^k ,   (4.2.39)
    \det [ g_{ij} g^{jk} ] = \det \left( [ g_{ij} ] [ g^{jk} ] \right) = \det [ \delta_i^k ] = 1 .   (4.2.40)

The product rule of determinants,

    \det \left( [ g_{ij} ] [ g^{jk} ] \right) = \det [ g_{ij} ] \det [ g^{jk} ] ,   (4.2.41)

simplifies for this special case to

    1 = g \cdot \det [ g^{jk} ] ,   (4.2.42)

and finally

    \det [ g_{ij} ] = g \quad \Rightarrow \quad \det [ g^{ij} ] = \frac{1}{g} .   (4.2.43)

For this reason the determinants of the matrices of the metric coefficients are represented with the permutation symbols, see equations (4.2.35) and (4.2.43), like this,

    g = g_{1i} g_{2j} g_{3k} e_{ijk} = \det [ g_{ij} ] ,   (4.2.44)
    \frac{1}{g} = g^{1i} g^{2j} g^{3k} e_{ijk} = \det [ g^{ij} ] .   (4.2.45)
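The determinant formula (4.2.35) can be evaluated literally with the permutation symbol as a 3 x 3 x 3 array (a NumPy sketch added here, not in the notes; the matrix is made-up test data and indices are 0-based):

```python
import numpy as np

# Permutation symbol e_ijk (0-based indices).
e = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    e[i, j, k], e[i, k, j] = 1.0, -1.0

a = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])

# det[a_mn] = a_1i a_2j a_3k e_ijk, eq. (4.2.35).
det_a = np.einsum('i,j,k,ijk->', a[0], a[1], a[2], e)
```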
4.2.6 Cross Product and Scalar Triple Product of Arbitrary Vectors
The vectors a up to f are written in the 3-dimensional Euclidean vector space $E^3$ with the base vectors $g_i$ and $g^i$,

    a = a^i g_i , \qquad d = d_i g^i ,   (4.2.46)
    b = b^i g_i , \qquad e = e_i g^i ,   (4.2.47)
    c = c^i g_i , \qquad f = f_i g^i .   (4.2.48)
The cross product (4.2.26), rewritten with the formulae for the scalar triple product (4.2.28)-(4.2.30), reads

    a \times b = a^i g_i \times b^j g_j = a^i b^j e_{ijk} [ g_1 , g_2 , g_3 ] g^k ,

    a \times b = [ g_1 , g_2 , g_3 ] \begin{vmatrix} a^1 & a^2 & a^3 \\ b^1 & b^2 & b^3 \\ g^1 & g^2 & g^3 \end{vmatrix} = [ g_1 , g_2 , g_3 ] \begin{vmatrix} g^1 & g^2 & g^3 \\ a^1 & a^2 & a^3 \\ b^1 & b^2 & b^3 \end{vmatrix} .   (4.2.49)
Two scalar triple products are defined by

    [ a , b , c ] = ( a \times b ) \cdot c ,   (4.2.50)

and

    [ d , e , f ] = ( d \times e ) \cdot f ,   (4.2.51)

and the first one of these scalar triple products is given by

    ( a \times b ) \cdot c = [ g_1 , g_2 , g_3 ] \, a^i b^j e_{ijk} \, g^k \cdot c^r g_r
    = [ g_1 , g_2 , g_3 ] \, a^i b^j e_{ijk} \, \delta_r^k c^r
    = [ g_1 , g_2 , g_3 ] \, a^i b^j c^k e_{ijk}
    = [ g_1 , g_2 , g_3 ] \begin{vmatrix} a^1 & a^2 & a^3 \\ b^1 & b^2 & b^3 \\ c^1 & c^2 & c^3 \end{vmatrix} = [ g_1 , g_2 , g_3 ] \begin{vmatrix} a^1 & b^1 & c^1 \\ a^2 & b^2 & c^2 \\ a^3 & b^3 & c^3 \end{vmatrix} .   (4.2.52)
The same formula written with covariant components and contravariant base vectors,

    ( a \times b ) \cdot c = [ g^1 , g^2 , g^3 ] \, a_i b_j c_k e^{ijk} = \frac{1}{\alpha} \, a_i b_j c_k e^{ijk}
    = [ g^1 , g^2 , g^3 ] \begin{vmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{vmatrix} = [ g^1 , g^2 , g^3 ] \begin{vmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{vmatrix} .   (4.2.53)
The product

    P = [ a , b , c ] \cdot [ d , e , f ]   (4.2.54)

is therefore, with the equations (4.2.52), (4.2.53) and (4.2.46) up to (4.2.48),

    P = [ g_1 , g_2 , g_3 ] [ g^1 , g^2 , g^3 ] \begin{vmatrix} a^1 & a^2 & a^3 \\ b^1 & b^2 & b^3 \\ c^1 & c^2 & c^3 \end{vmatrix} \begin{vmatrix} d_1 & e_1 & f_1 \\ d_2 & e_2 & f_2 \\ d_3 & e_3 & f_3 \end{vmatrix} = \alpha \, \frac{1}{\alpha} \, | A | \, | B | = | A | \, | B | .   (4.2.55)

The element (1, 1) of the product matrix $A \cdot B$, with respect to the product rule of determinants $\det A \det B = \det ( A \cdot B )$, is given by

    a^1 d_1 + a^2 d_2 + a^3 d_3 = a^i g_i \cdot d_j g^j = a^i d_j \delta_i^j = a^i d_i = a \cdot d .   (4.2.56)
Comparing this with the product P leads to

    P = [ a , b , c ] \cdot [ d , e , f ] = \begin{vmatrix} a \cdot d & a \cdot e & a \cdot f \\ b \cdot d & b \cdot e & b \cdot f \\ c \cdot d & c \cdot e & c \cdot f \end{vmatrix} ,   (4.2.57)

and for the scalar triple product $[ a , b , c ]$ to the power two,

    [ a , b , c ]^2 = \begin{vmatrix} a \cdot a & a \cdot b & a \cdot c \\ b \cdot a & b \cdot b & b \cdot c \\ c \cdot a & c \cdot b & c \cdot c \end{vmatrix} .   (4.2.58)
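Equation (4.2.58) is easy to verify numerically: the square of the scalar triple product equals the determinant of the Gram matrix of all pairwise scalar products (a NumPy sketch added here, not from the notes; the vectors are made-up test data).

```python
import numpy as np

a = np.array([1.0, 0.0, 2.0])
b = np.array([0.0, 3.0, 1.0])
c = np.array([1.0, 1.0, 1.0])

triple = np.dot(np.cross(a, b), c)   # [a, b, c] = (a x b) . c, eq. (4.2.50)

# Gram matrix of all pairwise scalar products, right-hand side of eq. (4.2.58).
gram = np.array([[u @ v for v in (a, b, c)] for u in (a, b, c)])
```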
4.2.7 The General Components of the Permutation Symbol
The square value of a scalar triple product of the covariant base vectors, like in equation (4.2.58),

[g_1, g_2, g_3]^2 = | g_1·g_1  g_1·g_2  g_1·g_3 |
                    | g_2·g_1  g_2·g_2  g_2·g_3 | = |g_ij| = det [g_ij] = g   (4.2.59)
                    | g_3·g_1  g_3·g_2  g_3·g_3 |
reduces to

[g_1, g_2, g_3] = √g .                                                        (4.2.60)
The same relation for the scalar triple product of the contravariant base vectors leads to

[g^1, g^2, g^3]^2 = det [g^ij] = 1/g ,                                        (4.2.61)

[g^1, g^2, g^3] = 1/√g .                                                      (4.2.62)
Equation (4.2.60) could be rewritten analogous to equation (4.2.26),

g_i × g_j = [g_1, g_2, g_3] e_ijk g^k = √g e_ijk g^k ,   i.e.   g_i × g_j = ε_ijk g^k ,    (4.2.63)
and for the corresponding contravariant base vectors

g^i × g^j = [g^1, g^2, g^3] e^ijk g_k = (1/√g) e^ijk g_k ,   i.e.   g^i × g^j = ε^ijk g_k .    (4.2.64)
For example the general permutation symbol could be given by the covariant symbol,

ε_ijk = { +√g   if (i, j, k) is an even permutation of (1, 2, 3),
        { −√g   if (i, j, k) is an odd permutation of (1, 2, 3),
        {  0    if two or more indices are equal,                             (4.2.65)
or by the contravariant symbol,

ε^ijk = { +1/√g   if (i, j, k) is an even permutation of (1, 2, 3),
        { −1/√g   if (i, j, k) is an odd permutation of (1, 2, 3),
        {   0     if two or more indices are equal.                           (4.2.66)
4.2.8 Relations between the Permutation Symbols
Comparing equations (4.2.25) with (4.2.65) and (4.2.66) shows the relations,
ε_ijk = √g e_ijk ,   and   e_ijk = (1/√g) ε_ijk ,                             (4.2.67)

and

ε^ijk = (1/√g) e^ijk ,   and   e^ijk = √g ε^ijk .                             (4.2.68)
The comparison of equations (4.2.44) and (4.2.35) gives

g = |g_ij| = g_1i g_2j g_3k e^ijk ,                                           (4.2.69)

g e_lmn = g_li g_mj g_nk e^ijk ,                                              (4.2.70)

g (1/√g) ε_lmn = g_li g_mj g_nk √g ε^ijk ,                                    (4.2.71)

ε_lmn = g_li g_mj g_nk ε^ijk ,                                                (4.2.72)

and

ε^lmn = g^li g^mj g^nk ε_ijk .                                                (4.2.73)
The covariant symbols are converted into the contravariant symbols with the contravariant metric coefficients, and vice versa. This transformation is the same as the one for tensors. The conclusion is that the ε symbols are tensors! The relation between the e and the ε symbols is written as follows,

e_ijk e^lmn = ε_ijk ε^lmn ,                                                   (4.2.74)

e^ijk e_lmn = ε^ijk ε_lmn .                                                   (4.2.75)
The relation between the permutation symbols and the Kronecker delta is given by

| δ^i_l  δ^i_m  δ^i_n |
| δ^j_l  δ^j_m  δ^j_n | = ε^ijk ε_lmn = e^ijk e_lmn .                         (4.2.76)
| δ^k_l  δ^k_m  δ^k_n |
After expanding the determinant and setting k = n,

ε^ijk ε_lmk = δ^i_l δ^j_m − δ^i_m δ^j_l ,                                     (4.2.77)

and if i = l and j = m,

ε^ijk ε_ijn = 2 δ^k_n ,                                                       (4.2.78)

and if all three indices are equal,

ε^ijk ε_ijk = e^ijk e_ijk = 2 δ^k_k = 6 .                                     (4.2.79)
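The contractions (4.2.77) up to (4.2.79) can be checked numerically for the Cartesian permutation symbol e_ijk, where ε = e because g = 1. The following sketch is an illustration added here, not part of the original script; the helper functions e and delta are assumptions of this example.

```python
def e(i, j, k):
    """Permutation symbol: +1 for even, -1 for odd permutations of (0,1,2), else 0."""
    return (j - i) * (k - i) * (k - j) // 2 if {i, j, k} == {0, 1, 2} else 0

def delta(i, j):
    """Kronecker delta."""
    return 1 if i == j else 0

R = range(3)

# (4.2.77): sum_k e_ijk e_lmk = d_il d_jm - d_im d_jl
for i in R:
    for j in R:
        for l in R:
            for m in R:
                lhs = sum(e(i, j, k) * e(l, m, k) for k in R)
                assert lhs == delta(i, l) * delta(j, m) - delta(i, m) * delta(j, l)

# (4.2.78): sum_ij e_ijk e_ijn = 2 d_kn
for k in R:
    for n in R:
        assert sum(e(i, j, k) * e(i, j, n) for i in R for j in R) == 2 * delta(k, n)

# (4.2.79): the full contraction gives 2 d_kk = 6
total = sum(e(i, j, k) ** 2 for i in R for j in R for k in R)
print(total)  # 6
```

The loops simply enumerate all index combinations, which is feasible here because each index only runs from 1 to 3.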
4.2.9 The Dyadic Product or the Direct Product of Vectors
The dyadic product of two vectors a and b ∈ V defines a so-called simple second order tensor with rank = 1 in the tensor space V ⊗ V over the vector space V by

T = a ⊗ b ,   and   T ∈ V ⊗ V .                                               (4.2.80)

This tensor describes a linear mapping of the vector v ∈ V with the scalar product by

T · v = (a ⊗ b) · v = a (b · v) .                                             (4.2.81)

The dyadic product a ⊗ b could be represented by a matrix, for example with a, b ∈ R^3 and T ∈ R^3 ⊗ R^3,

T = a b^T = [ a^1 ; a^2 ; a^3 ]_(3×1) [ b^1  b^2  b^3 ]_(1×3)                 (4.2.82)

  = [ a^1 b^1   a^1 b^2   a^1 b^3 ]
    [ a^2 b^1   a^2 b^2   a^2 b^3 ]
    [ a^3 b^1   a^3 b^2   a^3 b^3 ]_(3×3) .                                   (4.2.83)
The rank of this mapping is rank = 1, i.e. det T_(3×3) = 0 and det T^i_(2×2) = 0 for i = 1, 2, 3. The mapping T · v denotes in matrix notation
[ a^1 b^1   a^1 b^2   a^1 b^3 ] [ v^1 ]   [ a^1 Σ_{i=1}^{3} b^i v^i ]
[ a^2 b^1   a^2 b^2   a^2 b^3 ] [ v^2 ] = [ a^2 Σ_{i=1}^{3} b^i v^i ] ,       (4.2.84)
[ a^3 b^1   a^3 b^2   a^3 b^3 ] [ v^3 ]   [ a^3 Σ_{i=1}^{3} b^i v^i ]
or

(a b^T) v = T · v = a (b^T v) .                                               (4.2.85)
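Equations (4.2.82) up to (4.2.85) translate directly into a short numerical sketch. This example is an assumption added for illustration, not part of the original script; it uses plain Python lists for the 3×1 and 3×3 arrays.

```python
a = [1.0, 2.0, 3.0]
b = [4.0, 5.0, 6.0]
v = [7.0, 8.0, 9.0]

# dyad T_ij = a_i b_j, a rank-one 3x3 matrix as in (4.2.83)
T = [[ai * bj for bj in b] for ai in a]

# left-hand side of (4.2.85): ordinary matrix-vector product T v
lhs = [sum(T[i][j] * v[j] for j in range(3)) for i in range(3)]

# right-hand side of (4.2.85): the scalar product b.v first, then scale a
s = sum(bj * vj for bj, vj in zip(b, v))
rhs = [ai * s for ai in a]

print(lhs == rhs)  # True
```

The comparison shows why the dyad has rank 1: every image vector T·v is a multiple of a, scaled by the single scalar b·v.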
The proof that the dyadic product is a tensor starts with the assumption, see equations (T4) and (T5),

(a ⊗ b) · (u + v) = (a ⊗ b) · u + (a ⊗ b) · v .                               (4.2.86)

Equation (4.2.86) rewritten with the mapping T is given by

T(u + v) = T(u) + T(v) .                                                      (4.2.87)

With the definition of the mapping T it follows that

T(u + v) = (a ⊗ b) · (u + v) = a [b · (u + v)]
         = a [b · u + b · v]
         = [a (b · u)] + [a (b · v)]
         = T(u) + T(v) .
The vectors a and b are represented by the base vectors g_i (covariant) and g^j (contravariant),

a = a^i g_i ,   b = b_j g^j ,   and   g_i , g^j ∈ V .                         (4.2.88)
The dyadic product, i.e. the mapping T, is defined by

T = a ⊗ b = a^i b_j g_i ⊗ g^j = T^i_j g_i ⊗ g^j ,                             (4.2.89)

with g_i ⊗ g^j the dyadic product of the base vectors, and with the conditions

det [T^i_j] = 0 ,   r([T^i_j]) = 1 ,   and   rank = 1 .                       (4.2.90)
The mapping T maps the vector v = v^k g_k onto the vector w,

w = T · v = T^i_j (g_i ⊗ g^j) · v^k g_k = T^i_j v^k g_i (g^j · g_k) = T^i_j v^k g_i δ^j_k ,

w = T^i_j v^j g_i = w^i g_i .                                                 (4.2.91)
4.3 Tensors
4.3.1 Introduction of a Second Order Tensor
With the definition of linear mappings f in chapter (2.8), the definition of vector spaces of linear mappings L over the vector space V, and the definition of dyads it is possible to define the second order tensor. The original definition of a tensor was the description of a stress state at a point and time in a continuum, e.g. a fluid or a solid, given by Cauchy. The stress tensor or the Cauchy stress tensor T at a point P assigns a stress vector t(P) to an arbitrarily oriented section, given by a normal vector at the point P.

Figure 4.4: Resulting stress vector t^(n)(P) at a section with normal vector n at the point P.

The resulting stress vector t^(n)(P) at an arbitrarily oriented section, described by a normal vector n at a point P, in a rigid body loaded by an equilibrium system of external forces could have an arbitrary direction! Because the equilibrium conditions hold only for forces and not for stresses, an equilibrium system of forces is established at an infinitesimal tetrahedron. This tetrahedron will have four infinitesimal section surfaces, too. If the section surface is rotated, then the element of reference (the vector of direction) will be transformed, and the direction of stress will be transformed, too. Comparing this with the transformation of stresses leads to products of cosines, which lead to quantities with two indices. The stress state at a point could not be described by one or two vectors, but by a combination of three vectors t^(1), t^(2), and t^(3). The stress tensor T for the equilibrium conditions at an infinitesimal tetrahedron, given by the three stress vectors t^(1), t^(2), and t^(3), assigns to every direction a unique resulting stress vector t^(n).
4.3.1.0.20 Remarks

The scalar product F · n = F_n, with |n| = 1, projects F cos α on the direction of n, and the result is a scalar quantity.

The cross product r × F = M_A establishes a vector of momentum at a point A in the normal direction of the plane (r_A, F), and perpendicular to F, too.

The dyadic product a ⊗ b = T assigns a second order tensor T to a pair of vectors a and b.
Figure 4.5: Resulting stress vector. Stress vectors and surface elements at the infinitesimal tetrahedron, dF^(n) = t^(n) dA^(n), dF^(1) = t^(1) dA^(1), dF^(2) = t^(2) dA^(2), dF^(3) = t^(3) dA^(3), with the normal vectors n^(1), n^(2), n^(3) and the section surfaces dA^(n), dA^(1), dA^(2), dA^(3).
The spaces R^3 and E^3 are homeomorphic, i.e. for all vectors x ∈ R^3 and v ∈ E^3 the same rules and axioms hold. For this reason it is sufficient to have a look at the vector space R^3. Also the spaces R^n, E^n and V are homeomorphic, but with n ≠ 3 the usual cross product will not hold. For this reason the following definitions are made for the general vector space V, but most of the examples are given in the 3-dimensional Euclidean vector space E^3. In this space the cross product holds, and this space is the visual space.
4.3.2 The Definition of a Second Order Tensor
A linear mapping f = T of a (Euclidean) vector space V into itself or into its dual space V* is called a second order tensor. The action of a linear mapping T on a vector v is written like a "dot" product or multiplication, and in most cases the "dot" is not written any more,

T · v = Tv .                                                                  (4.3.1)

The definitions and rules for linear spaces in chapter (2.4), i.e. the axioms of vector space (S1) up to (S8), are rewritten for tensors T ∈ V ⊗ V.
4.3.2.0.21 Linearity for the Vectors.
1. Axiom of Second Order Tensors. The tensor (the linear mapping) T ∈ V ⊗ V maps the vector u ∈ V onto the same space V,

T(u) = T · u = Tu = v ;   u ∈ V ;   v ∈ V .                                   (T1)

This mapping is the same as the mapping of a vector with a quadratic matrix in a space with Cartesian coordinates.

2. Axiom of Second Order Tensors. The action of the zero tensor 0 on any vector u maps the vector on the zero vector,

0 · u = 0u = 0 ;   u, 0 ∈ V .                                                 (T2)

3. Axiom of Second Order Tensors. The unit tensor 1 sends any vector into itself,

1 · u = 1u = u ;   u ∈ V ,   1 ∈ V ⊗ V .                                      (T3)

4. Axiom of Second Order Tensors. The multiplication by a tensor is distributive with respect to vector addition,

T(u + v) = Tu + Tv ;   u, v ∈ V .                                             (T4)

5. Axiom of Second Order Tensors. If the vector u is multiplied by a scalar α, then the linear mapping is denoted by

T(αu) = αTu ;   u ∈ V ,   α ∈ R .                                             (T5)
4.3.2.0.22 Linearity for the Tensors.
6. Axiom of Second Order Tensors. The multiplication with the sum of tensors of the same space is distributive,

(T_1 + T_2) · u = T_1 · u + T_2 · u ;   u ∈ V ,   T_1 , T_2 ∈ V ⊗ V .         (T6)

7. Axiom of Second Order Tensors. The multiplication of a tensor by a scalar is linear, like in equation (T5) the multiplication of a vector by a scalar,

(αT) · u = T · (αu) ;   u ∈ V ,   α ∈ R .                                     (T7)

8. Axiom of Second Order Tensors. The action of tensors on a vector is associative,

T_1 · (T_2 · u) = (T_1 T_2) · u = T · u ,                                     (T8)

but like in matrix calculus not commutative, i.e. T_1 T_2 ≠ T_2 T_1. The "product" T_1 T_2 of the tensors is also called a "composition" of the linear mappings T_1, T_2.
9. Axiom of Second Order Tensors. The inverse of a tensor T^(-1) is defined by

v = T · u   ⇔   u = T^(-1) · v ,                                              (T9)

and it exists, if and only if T is nonsingular, i.e. det T ≠ 0.

10. Axiom of Second Order Tensors. The transpose of the transpose is the tensor itself,

(T^T)^T = T .                                                                 (T10)
4.3.3 The Complete Second Order Tensor
The simple second order tensor T ∈ V ⊗ V is defined as a linear combination of n dyads, and its rank is n,

T = Σ_{i=1}^{n} T_i = Σ_{i=1}^{n} a_i ⊗ b_i ,                                 (4.3.2)

T = a_i ⊗ b_i   (summation convention),                                       (4.3.3)

and det T ≠ 0, if the vectors a_i and b_i are linearly independent.
If the vectors a_i and b_i are represented with the base vectors g_i and g^i ∈ V like this,

a_i = a^ij g_j ;   b_i = b_il g^l ,                                           (4.3.4)

then the second order tensor is given by

T = a^ij b_il g_j ⊗ g^l ,                                                     (4.3.5)
and finally the complete second order tensor is given by

T = T^j_l g_j ⊗ g^l ,   the mixed formulation of a second order tensor.       (4.3.6)

The dyadic product of the base vectors includes one co- and one contravariant base vector. The mixed components T^j_l of the tensor in mixed formulation are written with one co- and one contravariant index, too,

det [T^j_l] ≠ 0 .                                                             (4.3.7)
If the contravariant base vector is transformed with the metric coefficients,

g^l = g^lk g_k ,                                                              (4.3.8)

the tensor T changes, like

T = T^j_l g_j ⊗ (g^lk g_k) ,                                                  (4.3.9)

T = T^j_l g^lk g_j ⊗ g_k ,                                                    (4.3.10)

and the result is

T = T^jk g_j ⊗ g_k ,   the tensor with covariant base vectors and contravariant coordinates.    (4.3.11)
The transformation of a covariant base vector into a contravariant base vector,

g_j = g_jk g^k ,                                                              (4.3.12)
implies

T = T^j_l g_jk g^k ⊗ g^l ,                                                    (4.3.13)

T = T_kl g^k ⊗ g^l ,   the tensor with contravariant base vectors and covariant coordinates.    (4.3.14)
The action of the tensor T ∈ V ⊗ V on the vector

v = v^k g_k ∈ V                                                               (4.3.15)

creates the vector w ∈ V, and this one is computed by

w = T · v = T^ij (g_i ⊗ g_j) · v^k g_k                                        (4.3.16)
  = T^ij v^k g_i g_jk = T^ij v_j g_i ,

w = w^i g_i   and   w^i = T^ij v_j .                                          (4.3.17)
In the same way the other representation of the vector w is given by

w = T · v = T_i^j (g^i ⊗ g_j) · v_k g^k                                       (4.3.18)
  = T_i^j v_k g^i δ^k_j = T_i^j v_j g^i ,

w = w_i g^i ,   and   w_i = T_i^j v_j .                                       (4.3.19)
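The index gymnastics of (4.3.16) and (4.3.17), lowering v^k to v_j with the covariant metric coefficients and then contracting with T^ij, can be made concrete in a small 2-D example. Everything below (the skew basis, the component values) is an assumption chosen for illustration; it is not taken from the script.

```python
# a skew (non-orthogonal) covariant basis g_1, g_2 given in Cartesian components
g = [[1.0, 0.0], [1.0, 2.0]]

# covariant metric coefficients g_jk = g_j . g_k
gco = [[sum(g[j][a] * g[k][a] for a in range(2)) for k in range(2)] for j in range(2)]

v_contra = [3.0, 4.0]                 # components v^k
T_contra = [[2.0, 1.0], [0.0, 5.0]]   # components T^ij

# lower the index: v_j = g_jk v^k
v_co = [sum(gco[j][k] * v_contra[k] for k in range(2)) for j in range(2)]

# w^i = T^ij v_j, equation (4.3.17)
w_contra = [sum(T_contra[i][j] * v_co[j] for j in range(2)) for i in range(2)]

# cross-check in Cartesian components: build T_ab = T^ij (g_i)_a (g_j)_b and apply it to v
v_cart = [sum(v_contra[k] * g[k][a] for k in range(2)) for a in range(2)]
w_cart = [sum(w_contra[i] * g[i][a] for i in range(2)) for a in range(2)]
T_cart = [[sum(T_contra[i][j] * g[i][a] * g[j][b] for i in range(2) for j in range(2))
           for b in range(2)] for a in range(2)]
w_check = [sum(T_cart[a][b] * v_cart[b] for b in range(2)) for a in range(2)]

cart_ok = (w_check == w_cart)
print(w_contra, cart_ok)  # [37.0, 115.0] True
```

The cross-check confirms that contracting components with the metric gives the same vector as applying the tensor as a Cartesian matrix.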
4.4 Transformations and Products of Tensors
4.4.1 The Transformation of Base Vectors
A vector v with v ∈ V is given by the covariant basis g_i, i = 1, ..., n, and afterwards in another covariant basis ḡ_i, i = 1, ..., n. For example this case describes the situation of a solid body with different configurations of deformation and a tangent basis, which moves along the coordinate curves. The same definitions are made with a contravariant basis g^i and a transformed contravariant basis ḡ^i. Then the representations of the vector v are

v = v^i g_i = v_i g^i = v̄^i ḡ_i = v̄_i ḡ^i .                                   (4.4.1)
The relation between the two covariant base vectors g_i and ḡ_i could be written with a second order tensor like

ḡ_i = A · g_i .                                                               (4.4.2)

If this linear mapping exists, then the coefficients of the transformation tensor A are given by

ḡ_i = 1 ḡ_i = (g_k ⊗ g^k) · ḡ_i                                               (4.4.3)
    = (g^k · ḡ_i) g_k = A^k_i g_k ,

ḡ_i = A^k_i g_k ,   and   A^k_i = g^k · ḡ_i .                                 (4.4.4)
The complete tensor A in the mixed formulation is then defined by

A = (g^k · ḡ_i) g_k ⊗ g^i = A^k_i g_k ⊗ g^i .                                 (4.4.5)
Inserting equation (4.4.5) in (4.4.2), in order to get the transformation (4.4.4) again, yields

ḡ_m = (A^k_i g_k ⊗ g^i) · g_m = A^k_i δ^i_m g_k = A^k_m g_k .                 (4.4.6)
If the inverse transformation of equation (4.4.2) exists, then it should be denoted by

g_i = Ā ḡ_i .                                                                 (4.4.7)

This existence results out of the linear independence of the base vectors. The "retransformation" tensor Ā is again defined by the multiplication with the unit tensor 1,

g_i = 1 g_i = (ḡ_k ⊗ ḡ^k) · g_i                                               (4.4.8)
    = (ḡ^k · g_i) ḡ_k = Ā^k_i ḡ_k ,

g_i = Ā^k_i ḡ_k ,   and   Ā^k_i = ḡ^k · g_i .                                 (4.4.9)
The retransformation tensor Ā in the mixed representation is given by

Ā = (ḡ^k · g_i) ḡ_k ⊗ ḡ^i = Ā^k_i ḡ_k ⊗ ḡ^i .                                 (4.4.10)
Inserting equation (4.4.10) in (4.4.7) implies again the transformation relation (4.4.9),

g_m = (Ā^k_i ḡ_k ⊗ ḡ^i) · ḡ_m = Ā^k_i δ^i_m ḡ_k = Ā^k_m ḡ_k .                 (4.4.11)
The tensor Ā is the inverse to A, and vice versa. This is a result of equations (4.4.2) and (4.4.7),

A^(-1) · |   ḡ_i = A · g_i ,                                                  (4.4.12)

A^(-1) ḡ_i = g_i .                                                            (4.4.13)

Comparing this with equation (4.4.7) implies

Ā = A^(-1)   ⇔   Ā A = 1 ,                                                    (4.4.14)

and in the same way

A = Ā^(-1)   ⇔   A Ā = 1 .                                                    (4.4.15)
In index notation with equations (4.4.4) and (4.4.9) the relation between the "normal" and the "overlined" coefficients of the transformation tensor is given by

ḡ_i = A^k_i g_k = A^k_i Ā^m_k ḡ_m   | · ḡ^j ,                                 (4.4.16)

δ^j_i = A^k_i Ā^m_k δ^j_m ,
δ^j_i = A^k_i Ā^j_k .                                                         (4.4.17)
The transformation of the contravariant basis works in the same way. If in equations (4.4.3) or (4.4.8) the metric tensor of covariant coefficients is used instead of the identity tensor, then another representation of the transformation tensor is described by

ḡ_m = 1 ḡ_m = (g_ik g^i ⊗ g^k) · ḡ_m                                          (4.4.18)
    = g_ik A^k_m g^i ,

ḡ_m = A_im g^i ,   and   A_im = g_ik A^k_m .                                  (4.4.19)
If the transformed covariant base vectors ḡ_m should be represented by the contravariant base vectors g^i, then the complete tensor of transformation is given by

A = (g_i · ḡ_k) g^i ⊗ g^k = A_ik g^i ⊗ g^k .                                  (4.4.20)
The inverse transformation tensor Ā is given by an equation developed in the same way as equations (4.4.16) and (4.4.8). This inverse tensor is denoted and defined by

Ā = A^(-1) = (ḡ_i · g_k) ḡ^i ⊗ ḡ^k = Ā_ik ḡ^i ⊗ ḡ^k .                         (4.4.21)
4.4.2 Collection of Transformations of Basis
There is a large number of transformation relations between the co- and contravariant bases of both systems of coordinates. The transformation from the "normal basis" to the "overlined basis", like g_i → ḡ_i and g^k → ḡ^k, is given by the following equations. First the relation between the covariant base vectors in both systems of coordinates is defined by

ḡ_i = 1 ḡ_i = (g_k ⊗ g^k) · ḡ_i = (g^k · ḡ_i) g_k = A^k_i g_k .               (4.4.22)
With this relationship the transformed (overlined) covariant base vectors are represented by the covariant base vectors,

ḡ_i = A g_i ,   and   A = (g^k · ḡ_m) g_k ⊗ g^m = A^k_m g_k ⊗ g^m ,           (4.4.23)

ḡ_i = (g^k · ḡ_i) g_k = A^k_i g_k ,                                           (4.4.24)
and the transformed (overlined) covariant base vectors are represented by the contravariant base vectors,

ḡ_i = A g_i ,   and   A = (g_k · ḡ_m) g^k ⊗ g^m = A_km g^k ⊗ g^m ,            (4.4.25)

ḡ_i = (g_k · ḡ_i) g^k = A_ki g^k .                                            (4.4.26)
The relation between the contravariant base vectors in both systems of coordinates is defined by

ḡ^i = 1 ḡ^i = (g^k ⊗ g_k) · ḡ^i = (g_k · ḡ^i) g^k = B^i_k g^k .               (4.4.27)
With this relationship the transformed (overlined) contravariant base vectors are represented by the contravariant base vectors,

ḡ^i = B g^i ,   and   B = (g_k · ḡ^m) g^k ⊗ g_m = B^m_k g^k ⊗ g_m ,           (4.4.28)

ḡ^i = (g_k · ḡ^i) g^k = B^i_k g^k ,                                           (4.4.29)
and the transformed (overlined) contravariant base vectors are represented by the covariant base vectors,

ḡ^i = B g^i ,   and   B = (g^k · ḡ^m) g_k ⊗ g_m = B^km g_k ⊗ g_m ,            (4.4.30)

ḡ^i = (g^k · ḡ^i) g_k = B^ki g_k .                                            (4.4.31)
The inverse relations ḡ_i → g_i and ḡ^k → g^k, representing the "retransformations" from the transformed (overlined) to the "normal" system of coordinates, are given by the following equations. The inverse transformation between the covariant base vectors of both systems of coordinates is denoted and defined by

g_i = 1 g_i = (ḡ_k ⊗ ḡ^k) · g_i = (ḡ^k · g_i) ḡ_k = Ā^k_i ḡ_k .               (4.4.32)
With this relationship the covariant base vectors are represented by the transformed (overlined) covariant base vectors,

g_i = Ā ḡ_i ,   and   Ā = (ḡ^k · g_m) ḡ_k ⊗ ḡ^m = Ā^k_m ḡ_k ⊗ ḡ^m ,           (4.4.33)

g_i = (ḡ^k · g_i) ḡ_k = Ā^k_i ḡ_k ,                                           (4.4.34)
and the covariant base vectors are represented by the transformed (overlined) contravariant base vectors,

g_i = Ā ḡ_i ,   and   Ā = (ḡ_m · g_k) ḡ^m ⊗ ḡ^k = Ā_mk ḡ^m ⊗ ḡ^k ,            (4.4.35)

g_i = (ḡ_k · g_i) ḡ^k = Ā_ki ḡ^k .                                            (4.4.36)
The inverse relation between the contravariant base vectors in both systems of coordinates is defined by

g^i = 1 g^i = (ḡ^k ⊗ ḡ_k) · g^i = (ḡ_k · g^i) ḡ^k = B̄^i_k ḡ^k .               (4.4.37)
With this relationship the contravariant base vectors are represented by the transformed (overlined) contravariant base vectors,

g^i = B̄ ḡ^i ,   and   B̄ = (ḡ_k · g^m) ḡ^k ⊗ ḡ_m = B̄^m_k ḡ^k ⊗ ḡ_m ,           (4.4.38)

g^i = (ḡ_k · g^i) ḡ^k = B̄^i_k ḡ^k ,                                           (4.4.39)
and the contravariant base vectors are represented by the transformed (overlined) covariant base vectors,

g^i = B̄ ḡ^i ,   and   B̄ = (ḡ^k · g^m) ḡ_k ⊗ ḡ_m = B̄^km ḡ_k ⊗ ḡ_m ,            (4.4.40)

g^i = (ḡ^k · g^i) ḡ_k = B̄^ki ḡ_k .                                            (4.4.41)
There exist the following relations between the transformation tensors A and Ā,

A Ā = 1 ,   or   Ā A = 1 ,                                                    (4.4.42)

A^m_i Ā^k_m = δ^k_i ,   i.e.   Ā^m_i A^k_m = δ^k_i ,                          (4.4.43)

and for the transformation tensors B and B̄,

B B̄ = 1 ,   or   B̄ B = 1 ,                                                    (4.4.44)

B^i_m B̄^m_k = δ^i_k ,   i.e.   B̄^i_m B^m_k = δ^i_k .                          (4.4.45)
Furthermore there exists a relation between the transformation tensor A and the tensor B,

A^m_i B^k_m = δ^k_i ,   i.e.   B^k_m = Ā^k_m ,                                (4.4.46)

and a relation between the tensors B̄ and A,

B̄^k_m = A^k_m .                                                               (4.4.47)
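The coefficient relations (4.4.4), (4.4.9) and (4.4.43) can be verified numerically: choose two bases, build the dual bases, compute A^k_i = g^k · ḡ_i and Ā^k_i = ḡ^k · g_i, and check that the two coefficient matrices are inverse to each other. The 2-D bases below are an assumption chosen for this example; the sketch is not part of the original script.

```python
def inv2(M):
    """Inverse of a 2x2 matrix."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def dual(G):
    """Rows of G are covariant base vectors; returns rows g^i with g^i . g_j = delta^i_j."""
    Ginv = inv2(G)
    return [[Ginv[0][i], Ginv[1][i]] for i in range(2)]  # transpose of G^-1

g    = [[1.0, 0.0], [1.0, 2.0]]   # basis g_i (Cartesian components)
gbar = [[2.0, 1.0], [0.0, 1.0]]   # transformed basis gbar_i
gd, gbard = dual(g), dual(gbar)

dot = lambda x, y: sum(a * b for a, b in zip(x, y))

A    = [[dot(gd[k], gbar[i]) for i in range(2)] for k in range(2)]   # A^k_i,    (4.4.4)
Abar = [[dot(gbard[k], g[i]) for i in range(2)] for k in range(2)]   # Abar^k_i, (4.4.9)

# Abar^k_m A^m_i = delta^k_i, equation (4.4.43)
prod = [[sum(Abar[k][m] * A[m][i] for m in range(2)) for i in range(2)] for k in range(2)]
print(prod)  # [[1.0, 0.0], [0.0, 1.0]]
```

Because the coefficients are formed from the correct dual bases, the product comes out as the exact identity matrix, even in floating point for these component values.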
4.4.3 The Tensor Product of Second Order Tensors
The vector v is defined by the action of the linear mapping given by T on the vector u,

v = T · u = Tu ,   with u, v ∈ V ,   and   T ∈ V ⊗ V .                        (4.4.48)

In index notation with the covariant base vectors g_i ∈ V equation (4.4.48) denotes

v = (T^mk g_m ⊗ g_k) · (u^r g_r)
  = T^mk u^r (g_k · g_r) g_m ,

and with lowering an index, see (4.1.39),

  = T^mk u^r g_kr g_m ,

v = T^mk u_k g_m = v^m g_m .                                                  (4.4.49)
Furthermore with the linear mapping,

w = S v ,   with w ∈ V ,   and   S ∈ V ⊗ V ,                                  (4.4.50)

and the linear mapping (4.4.48) the associative law for the linear mappings holds,

w = S v = S (T · u) = (S T) · u .                                             (4.4.51)

The second linear mapping w = S v in the mixed formulation with the contravariant base vectors g^j ∈ V is given by

S = S^i_j g_i ⊗ g^j .                                                         (4.4.52)
Then the vector w in index notation with the results of the equations (4.4.49), (4.4.51) and (4.4.52) is rewritten as

w = S (T u) = (S^i_j g_i ⊗ g^j) · (T^mk u_k g_m)
  = S^i_j T^mk u_k δ^j_m g_i
  = S^i_j T^jk u_k g_i = w^i g_i ,

and the coefficients of the vector are given by

w^i = S^i_j T^jk u_k .                                                        (4.4.53)
For the second order tensor product S · T there exist in general four representations with all possible combinations of base vectors,

S · T = S^i_m T^mk g_i ⊗ g_k    covariant basis,                              (4.4.54)

S · T = S_im T^m_k g^i ⊗ g^k    contravariant basis,                          (4.4.55)

S · T = S_im T^mk g^i ⊗ g_k     mixed basis,                                  (4.4.56)

and

S · T = S^im T_mk g_i ⊗ g^k     mixed basis.                                  (4.4.57)
Lemma 4.4. The result of the tensor product of two dyads is the scalar product of the inner vectors of the dyads and the dyadic product of the outer vectors of the dyads. The tensor product of two dyads of vectors is denoted by

(a ⊗ b) · (c ⊗ d) = (b · c) a ⊗ d .                                           (4.4.58)

With this rule the index notation of equations (4.4.53) up to (4.4.57) is easily computed, for example equation (4.4.53) or (4.4.54) implies

S T = (S^i_m g_i ⊗ g^m) · (T^nk g_n ⊗ g_k) = S^i_m T^nk δ^m_n g_i ⊗ g_k ,

and finally

S T = S^i_m T^mk g_i ⊗ g_k .                                                  (4.4.59)
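Lemma 4.4 can be checked with explicit component matrices: build the two dyads as outer products, multiply them as matrices, and compare with the scalar (b · c) times the outer dyad a ⊗ d. The vectors below are an illustrative assumption, not data from the script.

```python
def outer(x, y):
    """Dyad components (x (x) y)_ij = x_i y_j."""
    return [[xi * yj for yj in y] for xi in x]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

dot = lambda x, y: sum(p * q for p, q in zip(x, y))

a, b = [1.0, 2.0, 3.0], [0.0, 1.0, 2.0]
c, d = [2.0, 2.0, 1.0], [5.0, 0.0, 1.0]

lhs = matmul(outer(a, b), outer(c, d))               # (a (x) b) . (c (x) d)
s = dot(b, c)                                        # inner scalar product b . c
rhs = [[s * e for e in row] for row in outer(a, d)]  # (b . c) a (x) d

print(lhs == rhs)  # True
```

The matrix product of two rank-one matrices is again rank one, which is exactly what equation (4.4.58) states in component-free form.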
The "multiplication" or composition of two linear mappings S and T is called a tensor product,

P = S · T = S T   (tensor · tensor = tensor).                                 (4.4.60)

The linear mappings w = S v and v = T u with the vectors u, v, and w ∈ V are composed like

w = S v = S (T u) = (S T) u = S T u = P u .                                   (4.4.61)

This "multiplication" is, like in matrix calculus, but not like in "normal" algebra (a · b = b · a), noncommutative, i.e.

S T ≠ T S .                                                                   (4.4.62)
For the three second order tensors R, S, T ∈ V ⊗ V and the scalar quantity α ∈ R the following identities for tensor products hold.

Multiplication by a scalar quantity:

α (S T) = (α S) T = S (α T)                                                   (4.4.63)

Multiplication by the identity tensor:

1 T = T 1 = T                                                                 (4.4.64)

Existence of a zero tensor:

0 T = T 0 = 0                                                                 (4.4.65)

Associative law for the tensor product:

(R S) T = R (S T)                                                             (4.4.66)
Distributive law for the tensor product:

(R + S) T = R T + S T                                                         (4.4.67)

In general cases NO commutative law:

S T ≠ T S                                                                     (4.4.68)

Transpose of a tensor product:

(S T)^T = T^T S^T                                                             (4.4.69)

Inverse of a tensor product:

(S T)^(-1) = T^(-1) S^(-1)   if S and T are nonsingular.                      (4.4.70)

Determinant of a tensor product:

det (S T) = det S det T                                                       (4.4.71)

Trace of a tensor product:

tr (S T) = tr (T S)                                                           (4.4.72)
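Three of the identities above, (4.4.69), (4.4.71) and (4.4.72), are easy to spot-check numerically for component matrices in Cartesian coordinates. The two component matrices are arbitrary illustrative choices, not taken from the script, and a spot check is of course no substitute for the proofs that follow.

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def transpose(A):
    return [[A[j][i] for j in range(3)] for i in range(3)]

def det3(A):
    """Determinant by cofactor expansion along the first row."""
    return (A[0][0] * (A[1][1] * A[2][2] - A[1][2] * A[2][1])
          - A[0][1] * (A[1][0] * A[2][2] - A[1][2] * A[2][0])
          + A[0][2] * (A[1][0] * A[2][1] - A[1][1] * A[2][0]))

def trace(A):
    return A[0][0] + A[1][1] + A[2][2]

S = [[1.0, 2.0, 0.0], [0.0, 1.0, 3.0], [4.0, 0.0, 1.0]]
T = [[2.0, 0.0, 1.0], [1.0, 1.0, 0.0], [0.0, 2.0, 2.0]]

ok_transpose = transpose(matmul(S, T)) == matmul(transpose(T), transpose(S))  # (4.4.69)
ok_det = det3(matmul(S, T)) == det3(S) * det3(T)                              # (4.4.71)
ok_trace = trace(matmul(S, T)) == trace(matmul(T, S))                         # (4.4.72)
print(ok_transpose, ok_det, ok_trace)  # True True True
```

With the small integer-valued entries chosen here every intermediate result is exact in floating point, so plain equality comparisons are safe.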
Proof of equation (4.4.63).

α (S T) = (α S) T = S (α T) ,

with the assumption

(α S) v = α (S v) ,   with v ∈ V ,                                            (4.4.73)

with this

[α (S T)] v = α [S (T v)] = (α S) (T v) = [(α S) T] v ,

and finally

α (S T) = (α S) T .                                                           (4.4.74)
Proof of equation (4.4.64).

1 T = T 1 = T ,   with 1 ∈ V ⊗ V ,

with the assumption of equation (4.4.61),

S (T v) = (S T) v ,                                                           (4.4.75)

and the identity

1 v = v ,                                                                     (4.4.76)

this implies

(1 T) v = 1 (T v) = T v ,
(T 1) v = T (1 v) = T v ,

and finally

1 T = T 1 = T .                                                               (4.4.77)
Proof of equation (4.4.66).

(R S) T = R (S T) ,   with R, S, T ∈ V ⊗ V ,

with the assumption of equation (4.4.61) and inserting it into equation (4.4.66),

[(R S) T] v = (R S) (T v) = (R S) w ,   with v, w ∈ V ,                       (4.4.78)

with equation (4.4.61) again,

(R S) w = R (S w) ,   and   w = T v ,

[(R S) T] v = R [S (T v)] = R [(S T) v] = R (L v) = (R L) v ,   with L = S T ,

and finally

(R S) T = R (S T) .                                                           (4.4.79)
Proof of equation (4.4.67).

(R + S) T = R T + S T ,

with the well known condition for a linear mapping,

(R + S) v = R v + S v ,                                                       (4.4.80)

with this and equation (4.4.61),

[(R + S) T] v = (R + S) (T v) = R (T v) + S (T v) = (R T) v + (S T) v ,

and finally

(R + S) T = R T + S T .                                                       (4.4.81)
Proof of equation (4.4.69).

(S T)^T = T^T S^T ,

with the definition

(S^T)^T = S ,                                                                 (4.4.83)

which implies

((S T)^T)^T = S T ,                                                           (4.4.84)

and this equation only holds, if

((S T)^T)^T = (T^T S^T)^T = S T .                                             (4.4.85)
Proof of equation (4.4.70).

(S T)^(-1) = T^(-1) S^(-1) ,

and if the inverses T^(-1) and S^(-1) exist, then

(S T) (S T)^(-1) = 1 ,                                                        (4.4.86)
with equations (4.4.64) and (4.4.66),

S^(-1) [(S T) (S T)^(-1)] = S^(-1) 1 = S^(-1) ,                               (4.4.87)

and equation (4.4.61) implies

S^(-1) [(S T) (S T)^(-1)] = [S^(-1) (S T)] (S T)^(-1) ,                       (4.4.88)

and

S^(-1) (S T) = (S^(-1) S) T = T ,                                             (4.4.89)

with equation (4.4.88) inserted in (4.4.89) and comparing with equation (4.4.87),

T (S T)^(-1) = S^(-1) ,                                                       (4.4.90)

T^(-1) [T (S T)^(-1)] = T^(-1) S^(-1) ,                                       (4.4.91)

and with equations (4.4.61) and (4.4.90),

T^(-1) [T (S T)^(-1)] = (T^(-1) T) (S T)^(-1) = 1 (S T)^(-1) ,                (4.4.92)

and finally comparing this with equation (4.4.91),

(S T)^(-1) = T^(-1) S^(-1) .                                                  (4.4.93)
4.4.4 The Scalar Product or Inner Product of Tensors
The scalar product of tensors is defined by

T : (v ⊗ w) = v · T · w ,   with v, w ∈ V ,   and   T ∈ V ⊗ V .               (4.4.94)

For the three second order tensors R, S, T ∈ V ⊗ V and the scalar quantity α ∈ R the following identities for scalar products of tensors hold.

Commutative law for the scalar product of tensors:

S : T = T : S                                                                 (4.4.95)

Distributive law for the scalar product of tensors:

T : (R + S) = T : R + T : S                                                   (4.4.96)
Multiplication by a scalar quantity:

(α T) : S = T : (α S) = α (T : S)                                             (4.4.97)

Existence of an additive identity:

T : S = 0 ,   and if T is arbitrary, then S = 0.                              (4.4.98)

Existence of a positive definite tensor:

T : T = tr (T T^T)  { > 0 , if T ≠ 0
                    { = 0 , iff T = 0 ,   i.e. T is positive definite.        (4.4.99)

Absolute value or norm of a tensor:

|T| = √(tr (T T^T))                                                           (4.4.100)

The Schwarz inequality:

|T S| ≤ |T| |S|                                                               (4.4.101)

For the norms of tensors, like for the norms of vectors, the following identities hold,

|α T| = |α| |T| ,                                                             (4.4.102)

|T + S| ≤ |T| + |S| .                                                         (4.4.103)
And as a rule of thumb:

Lemma 4.5. The result of the scalar product of two dyads is the scalar product of the first vectors of each dyad and the scalar product of the second vectors of each dyad. The scalar product of two dyads of vectors is denoted by

(a ⊗ b) : (c ⊗ d) = (a · c) (b · d) .                                         (4.4.104)

With this rule the index notation of equation (4.4.104) implies for example

S : T = (S^im g_i ⊗ g_m) : (T_nk g^n ⊗ g^k) = S^im T_nk δ^n_i δ^k_m ,

and finally

S : T = S^nm T_nm .                                                           (4.4.105)

And for the other combinations of base vectors the results are

S : T = S_nm T^nm ,                                                           (4.4.106)

S : T = S^n_m T_n^m ,                                                         (4.4.107)

S : T = S_n^m T^n_m .                                                         (4.4.108)
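In Cartesian components the scalar product of tensors reduces to S : T = S_ij T_ij, which coincides with tr(S T^T) stated later in equation (4.5.12), and Lemma 4.5 becomes a statement about outer products. The following sketch, with arbitrarily chosen component values, is an illustration added here and not part of the script.

```python
def outer(x, y):
    return [[xi * yj for yj in y] for xi in x]

dot = lambda x, y: sum(p * q for p, q in zip(x, y))

def ddot(A, B):
    """Double contraction A : B = A_ij B_ij in Cartesian components."""
    return sum(A[i][j] * B[i][j] for i in range(3) for j in range(3))

S = [[1.0, 2.0, 0.0], [0.0, 1.0, 3.0], [4.0, 0.0, 1.0]]
T = [[2.0, 0.0, 1.0], [1.0, 1.0, 0.0], [0.0, 2.0, 2.0]]

# S : T equals the trace of S T^T
tr_STt = sum(sum(S[i][k] * T[i][k] for k in range(3)) for i in range(3))
ok1 = (ddot(S, T) == tr_STt)

# Lemma 4.5, equation (4.4.104), for two dyads
a, b = [1.0, 2.0, 0.0], [3.0, 1.0, 1.0]
c, d = [0.0, 2.0, 2.0], [1.0, 1.0, 4.0]
ok2 = (ddot(outer(a, b), outer(c, d)) == dot(a, c) * dot(b, d))

print(ok1, ok2)  # True True
```

Both checks contract the same index pattern, so they agree exactly for these integer-valued components.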
4.5 Special Tensors and Operators
4.5.1 The Determinant of a Tensor in Cartesian Coordinates
It is not absolutely correct to speak about the determinant of a tensor, because it is only the determinant of the coefficients of the tensor in Cartesian coordinates¹ and not of the whole tensor itself. For the different notations of a tensor with covariant, contravariant and mixed coefficients the determinant is given by

det T = det [T^ij] = det [T_ij] = det [T^i_j] = det [T_i^j] .                 (4.5.1)
Expanding the determinant of the coefficient matrix of a tensor T works just the same as for any other matrix. For example the determinant could be described with the permutation symbol e, like in equation (4.2.35),

det T = det [T_mn] = | T_11  T_12  T_13 |
                     | T_21  T_22  T_23 | = T_1i T_2j T_3k e^ijk .            (4.5.2)
                     | T_31  T_32  T_33 |
Some important identities are given without a proof by

det (α T) = α^3 det T ,                                                       (4.5.3)

det (T S) = det T det S ,                                                     (4.5.4)

det T^T = det T ,                                                             (4.5.5)

(det Q)^2 = 1 ,   if Q is an orthogonal tensor,                               (4.5.6)

det T^(-1) = (det T)^(-1) ,   if T^(-1) exists.                               (4.5.7)
4.5.2 The Trace of a Tensor
The inner product of a tensor T with the identity tensor 1 is called the trace of a tensor,

tr T = 1 : T = T : 1 .                                                        (4.5.8)

The same statement written in index notation,

(g_k ⊗ g^k) : (T^ij g_i ⊗ g_j) = T^ij g_ki δ^k_j = T^k_k ,                    (4.5.9)
and in this way it is easy to see that the result is a scalar. For the dyadic product of two vectors the trace is given by the scalar product of the two involved vectors,

tr (a ⊗ b) = 1 : (a ⊗ b) = a · (1 · b) = a · b ,                              (4.5.10)

¹ To compute the determinant of a second order tensor in general coordinates is much more complicated, and this is not part of this lecture/script; for any details see for example DE BOER [3].
and in index notation,

(g_k ⊗ g^k) : (a^i g_i ⊗ b^j g_j) = a^i b^j g_ki δ^k_j = a^k b_k .            (4.5.11)
The trace of a product of two tensors S and T is defined by

tr (S T^T) = S : T ,                                                          (4.5.12)

and it is easy to prove just by writing it in index notation. Starting with this, some more important identities could be found,

tr T = tr T^T ,                                                               (4.5.13)

tr (S T) = tr (T S) ,                                                         (4.5.14)

tr (R S T) = tr (T R S) = tr (S T R) ,                                        (4.5.15)

tr [T (R + S)] = tr (T R) + tr (T S) ,                                        (4.5.16)

tr [(α S) T] = α tr (S T) ,                                                   (4.5.17)

T : T = tr (T T^T)  { > 0 , if T ≠ 0,
                    { = 0 , iff T = 0,   i.e. the tensor T is positive definite,    (4.5.18)

|T| = √(tr (T T^T))   the absolute value of a tensor T,                       (4.5.19)

and finally the inequality

|S : T| ≤ |S| |T| .                                                           (4.5.20)
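The cyclic property (4.5.15) and the norm (4.5.19) can also be checked with component matrices. The three matrices are illustrative assumptions; the sketch is not part of the original script.

```python
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def trace(A):
    return A[0][0] + A[1][1] + A[2][2]

R = [[0.0, 1.0, 0.0], [2.0, 0.0, 1.0], [1.0, 1.0, 0.0]]
S = [[1.0, 0.0, 2.0], [0.0, 3.0, 0.0], [1.0, 0.0, 1.0]]
T = [[2.0, 1.0, 0.0], [0.0, 1.0, 1.0], [1.0, 0.0, 2.0]]

# (4.5.15): cyclic permutations of the factors leave the trace unchanged
t1 = trace(matmul(matmul(R, S), T))
t2 = trace(matmul(matmul(T, R), S))
t3 = trace(matmul(matmul(S, T), R))
print(t1 == t2 == t3)  # True

# (4.5.19): |T| = sqrt(tr(T T^T)) is the square root of the sum of squared components
normT = math.sqrt(sum(T[i][j] ** 2 for i in range(3) for j in range(3)))
Tt = [[T[j][i] for j in range(3)] for i in range(3)]
print(math.isclose(normT, math.sqrt(trace(matmul(T, Tt)))))  # True
```

Note that a transposition of the factors, as opposed to a cyclic permutation, would in general change the trace, in line with (4.4.68).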
4.5.3 The Volumetric and Deviator Tensor
Like for the symmetric and skew parts of a tensor there are also a lot of notations for the volumetric and deviator parts of a tensor. The volumetric part of a tensor in the 3-dimensional Euclidean vector space E^3 is defined by

T_V = T_vol = (1/n) (tr T) 1 ,   and   T ∈ E^3 ⊗ E^3 .                        (4.5.21)

It is important to notice that all diagonal components V_(i)(i) are equal, and all the other components equal zero,

V_ij = 0   if i ≠ j .                                                         (4.5.22)

The deviator part of a tensor is given by

T_D = T_dev = dev T = T − T_vol = T − T_V = T − (1/n) (tr T) 1 .              (4.5.23)
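The split (4.5.21) and (4.5.23) is straightforward to carry out in components: the volumetric part is a multiple of the identity, and the deviator is what remains and is trace-free. The component values below are an arbitrary illustrative assumption, not data from the script.

```python
n = 3
T = [[4.0, 1.0, 0.0], [2.0, 7.0, 3.0], [0.0, 1.0, 1.0]]

trT = sum(T[i][i] for i in range(n))

# volumetric part (4.5.21): (tr T / n) on the diagonal, zero elsewhere
T_vol = [[(trT / n if i == j else 0.0) for j in range(n)] for i in range(n)]

# deviator part (4.5.23): the remainder
T_dev = [[T[i][j] - T_vol[i][j] for j in range(n)] for i in range(n)]

# the deviator is trace-free and the split recovers T
print(sum(T_dev[i][i] for i in range(n)))  # 0.0
print(all(T_vol[i][j] + T_dev[i][j] == T[i][j] for i in range(n) for j in range(n)))  # True
```

The factor 1/n makes tr(T_dev) = tr T − n · (tr T / n) = 0 by construction, which is the defining property of the deviator.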
4.5.4 The Transpose of a Tensor
The transpose T^T of a second order tensor T is defined by

w · (T · v) = v · (T^T · w) ,   and   v, w ∈ V ,   T ∈ V ⊗ V .                (4.5.24)

For a dyadic product of two vectors the transpose is assumed as

w · [(a ⊗ b) · v] = v · [(b ⊗ a) · w] ,                                       (4.5.25)

and

(a ⊗ b)^T = (b ⊗ a) .                                                         (4.5.26)

The left-hand side of equation (4.5.25),

w · [(a ⊗ b) · v] = (w · a) (b · v) ,                                         (4.5.27)

and the right-hand side of equation (4.5.25),

v · [(b ⊗ a) · w] = (v · b) (a · w) = (a · w) (v · b) ,                       (4.5.28)

are equal, q.e.d. For the transpose of a tensor the following identities hold,

(a ⊗ b)^T = (b ⊗ a) ,                                                         (4.5.29)

(T^T)^T = T ,                                                                 (4.5.30)

1^T = 1 ,                                                                     (4.5.31)

(S + T)^T = S^T + T^T ,                                                       (4.5.32)

(α T)^T = α T^T ,                                                             (4.5.33)

(S · T)^T = T^T · S^T .                                                       (4.5.34)
The index notations w.r.t. the different bases are given by

T = T^ij g_i ⊗ g_j   ⇒   T^T = T^ij g_j ⊗ g_i = T^ji g_i ⊗ g_j ,              (4.5.35)

T = T_ij g^i ⊗ g^j   ⇒   T^T = T_ij g^j ⊗ g^i = T_ji g^i ⊗ g^j ,              (4.5.36)

T = T^i_j g_i ⊗ g^j   ⇒   T^T = T^i_j g^j ⊗ g_i = T^j_i g^i ⊗ g_j ,           (4.5.37)

T = T_i^j g^i ⊗ g_j   ⇒   T^T = T_i^j g_j ⊗ g^i = T_j^i g_i ⊗ g^j ,           (4.5.38)

and the relations between the tensor components,

(T^ij)^T = T^ji ,   or   (T_ij)^T = T_ji ,                                    (4.5.39)

and

(T^i_j)^T = T_j^i ,   or   (T_i^j)^T = T^j_i .                                (4.5.40)
4.5.5 The Symmetric and Antisymmetric (Skew) Tensor
There are a lot of different notations for the symmetric part of a tensor T, for example

T_S = T_sym = sym T ,                                                         (4.5.41)

and for the antisymmetric or skew part of a tensor T,

T_A = T_asym = skew T .                                                       (4.5.42)

A second rank tensor is said to be symmetric, if and only if

T = T^T .                                                                     (4.5.43)

And a second rank tensor is said to be antisymmetric or skew, if and only if

T = −T^T .                                                                    (4.5.44)

The same statement in index notation,

T_ij = T_ji ,   if T is symmetric,                                            (4.5.45)

T_ij = −T_ji ,   if T is antisymmetric.                                       (4.5.46)

Any second rank tensor can be written as a sum of a symmetric tensor and an antisymmetric tensor,

T = T_S + T_A = T_sym + T_asym = sym T + skew T = (1/2)(T + T^T) + (1/2)(T − T^T) .    (4.5.47)

The symmetric part of a tensor is defined by

T_S = T_sym = sym T = (1/2)(T + T^T) ,                                        (4.5.48)

and the antisymmetric (skew) part of a tensor is defined by

T_A = T_asym = skew T = (1/2)(T − T^T) .                                      (4.5.49)
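The decomposition (4.5.47) up to (4.5.49) is a direct component-wise computation. The matrix below is an illustrative assumption, not taken from the script.

```python
T = [[1.0, 4.0, 2.0], [0.0, 3.0, 5.0], [6.0, 1.0, 2.0]]
n = 3

# symmetric part (4.5.48) and antisymmetric part (4.5.49)
T_sym  = [[0.5 * (T[i][j] + T[j][i]) for j in range(n)] for i in range(n)]
T_skew = [[0.5 * (T[i][j] - T[j][i]) for j in range(n)] for i in range(n)]

sym_ok  = all(T_sym[i][j] == T_sym[j][i] for i in range(n) for j in range(n))
skew_ok = all(T_skew[i][j] == -T_skew[j][i] for i in range(n) for j in range(n))
sum_ok  = all(T_sym[i][j] + T_skew[i][j] == T[i][j] for i in range(n) for j in range(n))
print(sym_ok, skew_ok, sum_ok)  # True True True
```

Note that the diagonal of the skew part is always zero, since T_ii = −T_ii forces T_ii = 0, matching (4.5.46).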
4.5.6 The Inverse of a Tensor
The inverse of a tensor T exists, if for any two vectors v and w the expression
w = T · v (4.5.50)
can be transformed into
v = T^{−1} · w . (4.5.51)
Comparing these two equations gives
T · T^{−1} = T^{−1} · T = 1 , and ( T^{−1} )^{−1} = T . (4.5.52)
The inverse of a tensor,
T^{−1} , exists, if and only if det T ≠ 0 . (4.5.53)
The inverse of the product of a scalar and a tensor is defined by
(α T)^{−1} = (1/α) T^{−1} , (4.5.54)
and the inverse of a product of two tensors is defined by
(S · T)^{−1} = T^{−1} · S^{−1} . (4.5.55)
4.5.7 The Orthogonal Tensor
An orthogonal tensor Q satisfies
Q · Q^T = Q^T · Q = 1 , i.e. Q^{−1} = Q^T . (4.5.56)
From this it follows that the mapping w = Q · v with w · w = v · v implies
w · (Q · v) = v · (Q^T · w) = v · (Q^{−1} · w) . (4.5.57)
The orthogonal mapping of two arbitrary vectors v and w is rewritten with the definition of the transpose (4.5.24),
(Q · w) · (Q · v) = w · Q^T · Q · v , (4.5.58)
and with the definition of the orthogonal tensor (4.5.56),
(Q · w) · (Q · v) = w · v . (4.5.59)
The scalar product of two vectors equals the scalar product of their orthogonal mappings. For the square of a vector and its orthogonal mapping, equation (4.5.59) yields
(Q · v)² = v² . (4.5.60)
Sometimes equation (4.5.59), and not (4.5.56), is used as the definition of an orthogonal tensor. The orthogonal tensor Q describes a rotation. For the special case of the Cartesian basis the components of the orthogonal tensor Q are given by the cosines of the rotation angles,
Q = q_{ik} e_i ⊗ e_k ; q_{ik} = cos (∠(e_i ; e_k)) , and det Q = ±1 . (4.5.61)
If det Q = +1, then the tensor is called a proper orthogonal tensor or a rotator.
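The defining properties (4.5.56), (4.5.59) and det Q = +1 can be checked with a concrete rotator. The sketch below uses a rotation about e_3 by an arbitrary example angle; the vectors v and w are illustrative values:

```python
import numpy as np

phi = 0.3  # rotation angle about e_3 (example value)
# Proper orthogonal tensor (rotator) in the Cartesian basis, cf. (4.5.61).
Q = np.array([[np.cos(phi), -np.sin(phi), 0.0],
              [np.sin(phi),  np.cos(phi), 0.0],
              [0.0,          0.0,         1.0]])

v = np.array([1.0, 2.0, 3.0])
w = np.array([-2.0, 0.5, 1.0])

# Q Q^T = Q^T Q = 1 and Q^{-1} = Q^T, equation (4.5.56).
assert np.allclose(Q @ Q.T, np.eye(3))
assert np.allclose(np.linalg.inv(Q), Q.T)
# Orthogonal mappings preserve the scalar product, equation (4.5.59).
assert np.isclose((Q @ w) @ (Q @ v), w @ v)
# det Q = +1: Q is a proper orthogonal tensor (a rotator).
assert np.isclose(np.linalg.det(Q), 1.0)
```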
4.5.8 The Polar Decomposition of a Tensor
The polar decomposition of a nonsingular second order tensor T is given by
T = R · U , or T = V · R , with T ∈ V ⊗ V , and det T ≠ 0 . (4.5.62)
In the polar decomposition the tensor R = Q is chosen as an orthogonal tensor, i.e. R^T = R^{−1} and det R = 1. In this case the tensors U and V are positive definite and symmetric tensors. The tensor U is named the right-hand Cauchy strain tensor and V is named the left-hand Cauchy strain tensor. Both describe the strains, e.g. if a ball (a circle) is deformed into an ellipsoid (an ellipse), whereas R represents a rotation. Figure (4.6) implies the definition of the vectors

Figure 4.6: The polar decomposition.

dz = R · dX , (4.5.63)
and
dx = V · dz . (4.5.64)
The composition of these two linear mappings is given by
dx = V · R · dX = F · dX . (4.5.65)
The other possible way to describe the composition is with the vector dz̃,
dx = R · dz̃ , (4.5.66)
and
dz̃ = U · dX , (4.5.67)
and finally
dx = R · U · dX = F · dX . (4.5.68)
The composed tensor F is called the deformation gradient and its polar decomposition is given by
T ≡ F = R · U = V · R . (4.5.69)
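The text does not compute a polar decomposition explicitly; a common numerical route, sketched here, goes through the singular value decomposition F = W S V^T, from which R = W V^T, U = V S V^T and the left stretch V_left = W S W^T follow. The deformation gradient F below is an illustrative example, not from the text:

```python
import numpy as np

# Hypothetical nonsingular deformation gradient F (example values, det F > 0).
F = np.array([[1.1, 0.3, 0.0],
              [0.2, 0.9, 0.1],
              [0.0, 0.1, 1.2]])

# Polar decomposition via the SVD F = W S V^T.
W, S, Vt = np.linalg.svd(F)
R = W @ Vt                       # orthogonal rotation tensor
U = Vt.T @ np.diag(S) @ Vt       # right-hand stretch, symmetric pos. definite
V_left = W @ np.diag(S) @ W.T    # left-hand stretch,  symmetric pos. definite

assert np.allclose(R @ U, F)          # F = R U, cf. (4.5.69)
assert np.allclose(V_left @ R, F)     # F = V R, cf. (4.5.69)
assert np.allclose(R @ R.T, np.eye(3))
assert np.allclose(U, U.T)
```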
4.5.9 The Physical Components of a Tensor
In general a tensor w.r.t. the covariant basis g_i is given by
T = T^{ik} g_i ⊗ g_k . (4.5.70)
The physical components T̄^{ik} are defined by
T = T̄^{ik} ( g_i / |g_i| ) ⊗ ( g_k / |g_k| ) = ( T̄^{ik} / ( √g_{(i)(i)} √g_{(k)(k)} ) ) g_i ⊗ g_k , (4.5.71)
i.e.
T̄^{ik} = T^{ik} √g_{(i)(i)} √g_{(k)(k)} . (4.5.72)
The stress tensor T = T^{ik} g_i ⊗ g_k is given w.r.t. the basis g_i. Then the associated stress vector t^i w.r.t. a point in the sectional area da_i is defined by
t^i = df^i / da_{(i)} ; df^i = t^i da_{(i)} , (4.5.73)
t^i = σ^{ik} g_k ; df^i = σ^{ik} g_k da_{(i)} , (4.5.74)
with the differential force df^i. Furthermore the sectional area and its absolute value are given by
da^i = da_{(i)} g^{(i)} , (4.5.75)
and
|da^i| = da_{(i)} |g^{(i)}| , i.e. |da^i| = da_{(i)} √g^{(i)(i)} . (4.5.76)
Figure 4.7: An example of the physical components of a second order tensor.

The definition of the physical stresses σ̄^{ik} is given by
df^i = σ̄^{ik} ( g_k / |g_k| ) |da^i| , i.e. df^i = σ̄^{ik} ( g_k / √g_{(k)(k)} ) da_{(i)} √g^{(i)(i)} . (4.5.77)
Comparing equations (4.5.74) and (4.5.77) implies
σ^{ik} = σ̄^{ik} √g^{(i)(i)} / √g_{(k)(k)} ,
and finally the definition of the physical components of the stress tensor σ̄^{ik} is given by
σ̄^{ik} = σ^{ik} √g_{(k)(k)} / √g^{(i)(i)} . (4.5.78)
4.5.10 The Isotropic Tensor
An isotropic tensor is a tensor whose components are the same in every rotated Cartesian coordinate system. Every tensor of order zero, i.e. every scalar quantity, is an isotropic tensor, but no first order tensor, i.e. no vector, can be isotropic. The unique isotropic second order tensor (up to a scalar factor) is the Kronecker delta, see section (4.1).
4.6 The Principal Axes of a Tensor
4.6.1 Introduction to the Problem
The computation of the
- invariants,
- eigenvalues and eigenvectors,
- vectors of associated directions, and
- principal axes
is described for the problem of the principal axes of a stress tensor, also called the directions of principal stress. The Cauchy stress tensor in Cartesian coordinates is given by
T = T_{ik} e_i ⊗ e_k . (4.6.1)
This stress tensor T is symmetric because of the equilibrium condition of moments. With this
Figure 4.8: Principal axis problem with Cartesian coordinates.
condition the shear stresses in orthogonal sections are equal,
T_{ik} = T_{ki} , and (4.6.2)
T = T^T = T_{ki} e_i ⊗ e_k . (4.6.3)
The stress vector in the section surface, given by e^1 = e_1, is defined by the linear mapping of the normal unit vector e_1 with the tensor T,
t^1 = T · e_1 (4.6.4)
= T_{ik} (e_i ⊗ e_k) · e_1 = T_{ik} δ_{k1} e_i ,
t^1 = T_{i1} e_i . (4.6.5)
The stress tensor T assigns the resulting stress vector t_{(n)} to the direction of the normal vector n perpendicular to the section surface. This linear mapping fulfills the equilibrium conditions,
t_{(n)} = T · n , (4.6.6)
with the normal vector
n = n_l e_l , (4.6.7)
n · e_k = n_l δ_{lk} = n_k = cos ( ∠(n, e_k) ) ,
and the absolute value
|n| = 1 .
The stress vector in direction of n is computed by
t_{(n)} = ( T_{ik} e_i ⊗ e_k ) · n_l e_l (4.6.8)
= T_{ik} n_l (e_k · e_l) e_i = T_{ik} n_l δ_{kl} e_i ,
t_{(n)} = T_{ik} n_k e_i . (4.6.9)
The action of the tensor T on the normal vector n reduces the order of the second order tensor T (the stress tensor) to a first order tensor t_{(n)} (i.e. the stress vector in direction of n).
Lemma 4.6 (Principal axes problem). Does a direction n_0 exist in space, such that the resulting stress vector t_{(n_0)} is oriented in this direction, i.e. such that the vector n_0 fulfills the following equations?
t_{(n_0)} = λ n_0 (4.6.10)
= λ 1 · n_0 . (4.6.11)
Comparing equations (4.6.6) and (4.6.11) leads to T · n_0 = λ 1 · n_0 and therefore to
(T − λ 1) · n_0 = 0 . (4.6.12)
For this special case of an eigenvalue problem . . .
- the directions n_{0j} are called the principal stress directions, and they are given by the eigenvectors,
- and the λ_j = σ_j are called the eigenvalues, resp. in this case the principal stresses.
4.6.2 Components in a Cartesian Basis
Equation (4.6.12) is rewritten in index notation,
( T_{ik} e_i ⊗ e_k − λ δ_{ik} e_i ⊗ e_k ) · n_0^l e_l = 0 , (4.6.13)
( T_{ik} − λ δ_{ik} ) n_0^l (e_k · e_l) e_i = 0 ,
( T_{ik} − λ δ_{ik} ) n_0^l δ_{kl} e_i = 0 ,
( T_{ik} − λ δ_{ik} ) n_{0k} e_i = 0 ,
and finally
( T_{ik} − λ δ_{ik} ) n_{0k} = 0 . (4.6.14)
This equation can be represented in matrix notation, because it is given in a Cartesian basis,
[ T_{ik} − λ δ_{ik} ] [ n_{0k} ] = [0] , (4.6.15)
( T − λ 1 ) n_0 = 0 ,

[ T_11 − λ   T_12       T_13     ] [ n_01 ]   [ 0 ]
[ T_21       T_22 − λ   T_23     ] [ n_02 ] = [ 0 ] . (4.6.16)
[ T_31       T_32       T_33 − λ ] [ n_03 ]   [ 0 ]

This is a linear homogeneous system of equations for n_01, n_02 and n_03; non-trivial solutions exist, if and only if
det ( T − λ 1 ) = 0 , (4.6.17)
or in index notation,
det ( T_{ik} − λ δ_{ik} ) = 0 . (4.6.18)
4.6.3 Components in a General Basis
In a general basis with a stress tensor given w.r.t. the covariant base vectors, T = T^{ik} g_i ⊗ g_k, a normal vector n_0 = n_0^l g_l, and a unit tensor also w.r.t. the covariant base vectors, 1 = G = g^{ik} g_i ⊗ g_k, equation (4.6.12) is denoted like this,
( T^{ik} g_i ⊗ g_k − λ g^{ik} g_i ⊗ g_k ) · n_0^l g_l = 0 , (4.6.19)
( T^{ik} − λ g^{ik} ) n_0^l (g_k · g_l) g_i = 0 ,
( T^{ik} − λ g^{ik} ) n_0^l g_{kl} g_i = 0 ,
( T^{ik} − λ g^{ik} ) n_{0k} g_i = 0 ,
and finally in index notation,
( T^{ik} − λ g^{ik} ) n_{0k} = 0 , det ( T^{ik} − λ g^{ik} ) = 0 . (4.6.20)
And the same in mixed formulation is given by
( T^i_{.k} − λ δ^i_{.k} ) n_0^k = 0 , det ( T^i_{.k} − λ δ^i_{.k} ) = 0 . (4.6.21)
4.6.4 Characteristic Polynomial and Invariants
The characteristic polynomial of an eigenvalue problem with the invariants I_1, I_2, and I_3 in a 3-dimensional space E³ is defined by
f (λ) = I_3 − λ I_2 + λ² I_1 − λ³ = 0 . (4.6.22)
For a Cartesian basis equation (4.6.22) becomes a cubic equation, because it is an eigenvalue problem in E³, with the invariants given by
I_1 = tr T = g_{ik} T^{ik} = δ_{ik} T_{ik} = T^k_{.k} = T_{kk} , (4.6.23)
I_2 = ½ [ (tr T)² − tr (T²) ] = ½ [ T_{ii} T_{kk} − T_{ik} T_{ki} ] , (4.6.24)
I_3 = det T = det [ T_{ik} ] . (4.6.25)
The fundamental theorem of algebra implies that there exist three roots λ_1, λ_2, and λ_3, such that the following equations hold,
I_1 = λ_1 + λ_2 + λ_3 , (4.6.26)
I_2 = λ_1 λ_2 + λ_2 λ_3 + λ_3 λ_1 , (4.6.27)
I_3 = λ_1 λ_2 λ_3 . (4.6.28)
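The invariants (4.6.23)–(4.6.25) and their relations (4.6.26)–(4.6.28) to the eigenvalues can be checked numerically. The symmetric matrix below is an illustrative example, not from the text:

```python
import numpy as np

# Hypothetical symmetric stress tensor in a Cartesian basis (example values).
T = np.array([[3.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 4.0]])

I1 = np.trace(T)                                   # (4.6.23)
I2 = 0.5 * (np.trace(T)**2 - np.trace(T @ T))      # (4.6.24)
I3 = np.linalg.det(T)                              # (4.6.25)

lam = np.linalg.eigvalsh(T)  # the three real eigenvalues (principal stresses)

# Relations (4.6.26)-(4.6.28) between invariants and eigenvalues:
assert np.isclose(I1, lam.sum())
assert np.isclose(I2, lam[0]*lam[1] + lam[1]*lam[2] + lam[2]*lam[0])
assert np.isclose(I3, lam.prod())

# Every eigenvalue is a root of the characteristic polynomial (4.6.22).
f = I3 - lam*I2 + lam**2 * I1 - lam**3
assert np.allclose(f, 0.0)
```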
4.6.5 Principal Axes and Eigenvalues of Symmetric Tensors
The assumption that the tensor is symmetric is denoted in tensor and index notation by T^T = T and T_{ik} = T_{ki}, resp. in matrix notation by T^T = T. The eigenvalue problem with λ_j ≠ λ_l is given in matrix notation by
( T − λ 1 ) n_0 = 0 , (4.6.29)
premultiplied by n^T_{0l} for an arbitrary λ_j,
n^T_{0l} ( T − λ_j 1 ) n_{0j} = 0 , (4.6.30)
and premultiplied by n^T_{0j} for an arbitrary λ_l,
n^T_{0j} ( T − λ_l 1 ) n_{0l} = 0 . (4.6.31)
Subtracting the second of these two equations from the first leads to the relation
n^T_{0l} T n_{0j} − λ_j n^T_{0l} n_{0j} − n^T_{0j} T n_{0l} + λ_l n^T_{0j} n_{0l} = 0 . (4.6.32)
For the quadratic form the symmetry of T implies
n^T_{0j} T n_{0l} = ( n^T_{0j} T n_{0l} )^T = n^T_{0l} T^T n_{0j} = n^T_{0l} T n_{0j} , (4.6.33)
i.e.
( λ_l − λ_j ) n^T_{0j} n_{0l} = 0 . (4.6.34)
With λ_l − λ_j ≠ 0 this equation holds, if and only if n^T_{0j} n_{0l} = 0. The conclusion from this relation between the eigenvalues λ_j, λ_l, and the normal unit vectors n_{0j}, n_{0l} is that the normal vectors are orthogonal to each other.
4.6.6 Real Eigenvalues of a Symmetric Tensors
Two complex conjugate eigenvalues are denoted by
λ_j = α + iβ , (4.6.35)
λ_l = α − iβ . (4.6.36)
The coordinates of the associated eigenvectors n_{0j} and n_{0l} can be written as column matrices n_{0j} and n_{0l}. Furthermore the relations n_{0j} = b + ic and n_{0l} = b − ic hold. Inserting this into equation (4.6.34) implies
( λ_l − λ_j ) n^T_{0j} n_{0l} = 0 ,
−2iβ ( b + ic )^T ( b − ic ) = 0 ,
−2iβ ( b^T b + c^T c ) = 0 , (4.6.37)
b^T b + c^T c ≠ 0 , (4.6.38)
i.e. β = 0 and the eigenvalues are real numbers. The result is that a symmetric stress tensor has three real principal stresses, with the associated directions being orthogonal to each other.
4.6.7 Example
Compute the characteristic polynomial, the eigenvalues and the eigenvectors of the matrix

    [  1   4  −8 ]
A = [  4   7   4 ] .
    [ −8   4   1 ]

By expanding the determinant det (A − λ 1) the characteristic polynomial becomes
p (λ) = −λ³ + 9λ² + 81λ − 729 ,
with the (real) eigenvalues
λ_1 = −9 , λ_{2,3} = 9 .
For these eigenvalues the orthogonal eigenvectors are established as
x_1 = [ 2, −1, 2 ]^T , x_2 = [ −1, 2, 2 ]^T , and x_3 = [ 2, 2, −1 ]^T .
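The example can be verified numerically; the following NumPy sketch confirms the stated eigenvalues, checks the eigenvector equations (A − λ1)x = 0, and the mutual orthogonality of the eigenvectors:

```python
import numpy as np

A = np.array([[ 1.0, 4.0, -8.0],
              [ 4.0, 7.0,  4.0],
              [-8.0, 4.0,  1.0]])

lam, X = np.linalg.eigh(A)   # eigenvalues in ascending order
assert np.allclose(lam, [-9.0, 9.0, 9.0])

# The eigenvectors given above solve (A - lam*1) x = 0 ...
x1 = np.array([ 2.0, -1.0,  2.0])
x2 = np.array([-1.0,  2.0,  2.0])
x3 = np.array([ 2.0,  2.0, -1.0])
assert np.allclose(A @ x1, -9.0 * x1)
assert np.allclose(A @ x2,  9.0 * x2)
assert np.allclose(A @ x3,  9.0 * x3)
# ... and are orthogonal to each other.
assert np.isclose(x1 @ x2, 0) and np.isclose(x2 @ x3, 0) and np.isclose(x1 @ x3, 0)
```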
4.6.8 The Eigenvalue Problem in a General Basis
Let T be an arbitrary tensor, given w.r.t. the basis g_i with i = 1, . . . , n, and defined by
T = T^{ik} g_i ⊗ g_k . (4.6.39)
The identity tensor in Cartesian coordinates, 1 = δ_{ik} e_i ⊗ e_k, is substituted by the identity tensor
1 = g^{ik} g_i ⊗ g_k . (4.6.40)
Then the eigenvalue problem
( T − λ 1 ) · n_0 = 0 (4.6.41)
is substituted by the eigenvalue problem in general coordinates,
( T^{ik} g_i ⊗ g_k − λ g^{ik} g_i ⊗ g_k ) · n_0^l g_l = 0 , (4.6.42)
with the vector n_0 = n_0^l g_l in the direction of a principal axis. Beginning with the eigenvalue problem
( T^{ik} − λ g^{ik} ) n_0^l (g_k · g_l) g_i = 0 ,
( T^{ik} − λ g^{ik} ) n_0^l g_{kl} g_i = 0 ,
( T^{ik} − λ g^{ik} ) n_{0k} g_i = 0 , (4.6.43)
and with the linear independence of the base vectors
g_i , (4.6.44)
results finally the condition
( T^{ik} − λ g^{ik} ) n_{0k} = 0 . (4.6.45)
The tensor T can also be written in mixed notation,
T = T^i_{.k} g_i ⊗ g^k , (4.6.46)
with the identity tensor also in mixed notation,
1 = δ^i_{.k} g_i ⊗ g^k . (4.6.47)
The eigenvalue problem
( T^i_{.k} − λ δ^i_{.k} ) n_0^l ( g^k · g_l ) g_i = 0 ,
( T^i_{.k} − λ δ^i_{.k} ) n_0^l δ^k_l g_i = 0 , (4.6.48)
implies the condition
( T^i_{.k} − λ δ^i_{.k} ) n_0^k = 0 . (4.6.49)
But the matrix [ T^i_{.k} ] = T̂ is nonsymmetric. For this reason it is necessary to control the orthogonality of the eigenvectors by a decomposition.
4.7 Higher Order Tensors
4.7.1 Review on Second Order Tensor
A complete second order tensor T maps a vector u, for example in the vector space V, like this,
v = T · u , with u, v ∈ V , and T ∈ V ⊗ V . (4.7.1)
For example in index notation with a vector basis g_i ∈ V, a vector is given by
u = u^i g_i = u_i g^i , (4.7.2)
and a second order tensor by
T = T^{jk} g_j ⊗ g_k , with g_j , g_k ∈ V . (4.7.3)
Then a linear mapping with a second order tensor is given by
v = T^{jk} ( g_j ⊗ g_k ) · u^i g_i (4.7.4)
= T^{jk} u^i ( g_k · g_i ) g_j = T^{jk} u^i g_{ki} g_j ,
v = T^j_{.i} u^i g_j = v^j g_j . (4.7.5)
4.7.2 Introduction of a Third Order Tensor
After having a close look at a second order tensor, and realizing that a vector is nothing else but a first order tensor, it is easy to understand that there might also be higher order tensors. In the same way as a second order tensor maps a vector onto another vector, a complete third order tensor maps a vector onto a second order tensor. For example in index notation with a vector basis g_i ∈ V, a vector is given by
u = u^i g_i = u_i g^i , (4.7.6)
and a complete third order tensor by
³A = A^{jkl} g_j ⊗ g_k ⊗ g_l , with g_j , g_k , g_l ∈ V . (4.7.7)
Then a linear mapping with a third order tensor is given by
T = A^{jkl} ( g_j ⊗ g_k ⊗ g_l ) · u^i g_i (4.7.8)
= A^{jkl} u^i ( g_l · g_i ) ( g_j ⊗ g_k )
= A^{jkl} u^i g_{li} ( g_j ⊗ g_k ) ,
T = A^{jkl} u_l ( g_j ⊗ g_k ) = T^{jk} ( g_j ⊗ g_k ) . (4.7.9)
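The linear mapping (4.7.9) can be sketched with `np.einsum`; in a Cartesian basis (so g_{li} = δ_{li} and u_l = u^l) the contraction T^{jk} = A^{jkl} u_l reads directly. The components below are random illustrative values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Cartesian components of a hypothetical complete third order tensor A
# and a vector u (illustrative values).
A = rng.standard_normal((3, 3, 3))   # A^{jkl}
u = rng.standard_normal(3)           # u_l

# Linear mapping (4.7.9): a third order tensor maps a vector onto a
# second order tensor, T^{jk} = A^{jkl} u_l.
T = np.einsum('jkl,l->jk', A, u)

assert T.shape == (3, 3)
# Spot check of one component: T^{01} is the contraction of A[0,1,:] with u.
assert np.allclose(T[0, 1], A[0, 1, :] @ u)
```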
4.7.3 The Complete Permutation Tensor
The most important application of a third order tensor is the third order permutation tensor, which is the correct description of the permutation symbol in section (4.2). The complete permutation tensor is antisymmetric and in the space E³ is represented by just one scalar quantity (positive or negative), see also the permutation symbols e and ε in section (4.2). The complete permutation tensor in Cartesian coordinates, also called the third order fundamental tensor, is defined by
³E = e^{ijk} e_i ⊗ e_j ⊗ e_k , or ³E = e_{ijk} e^i ⊗ e^j ⊗ e^k . (4.7.10)
For the orthonormal basis e_i the components of the covariant, e_{ijk}, and contravariant, e^{ijk}, permutation tensor (symbol) are equal,
e_i × e_j = e_{ijk} e^k . (4.7.11)
Equation (4.2.26) in section (4.2) is the short form of a product in Cartesian coordinates between the third order tensor ³E and a dyadic product of base vectors. This scalar product of the permutation tensor and e_i ⊗ e_j from the right-hand side yields
e_i × e_j = ( e_{rsk} e^r ⊗ e^s ⊗ e^k ) : ( e_i ⊗ e_j )
= e_{rsk} δ^s_i δ^k_j e^r = e_{rij} e^r ,
e_i × e_j = e_{ijr} e^r , (4.7.12)
or with e_i ⊗ e_j from the left-hand side,
e_i × e_j = ( e_i ⊗ e_j ) : ( e_{rsk} e^r ⊗ e^s ⊗ e^k ) = e_{rsk} δ^r_i δ^s_j e^k ,
e_i × e_j = e_{ijk} e^k . (4.7.13)
4.7.4 Introduction of a Fourth Order Tensor
The action of a fourth order tensor C, given by
C = C^{ijkl} ( g_i ⊗ g_j ⊗ g_k ⊗ g_l ) , (4.7.14)
on a second order tensor T, given by
T = T^{mn} ( g_m ⊗ g_n ) , (4.7.15)
is given in index notation, see also equation (4.4.104), by
S = C : T = ( C^{ijkl} g_i ⊗ g_j ⊗ g_k ⊗ g_l ) : ( T^{mn} g_m ⊗ g_n ) (4.7.16)
= C^{ijkl} T^{mn} ( g_k · g_m ) ( g_l · g_n ) ( g_i ⊗ g_j )
= C^{ijkl} T^{mn} g_{km} g_{ln} ( g_i ⊗ g_j ) = C^{ijkl} T_{kl} ( g_i ⊗ g_j ) ,
S = S^{ij} g_i ⊗ g_j . (4.7.17)
Really important is the so-called elasticity tensor C used in elasticity theory. This is a fourth order tensor, which maps the strain tensor ε onto the stress tensor σ,
σ = C : ε . (4.7.18)
Comparing this with the well-known one-dimensional Hooke's law, it is easy to see that this mapping is the generalized 3-dimensional linear case of Hooke's law. The elasticity tensor C has in general in the space E³ a total number of 3⁴ = 81 components. Because of the symmetry of the strain tensor ε and the stress tensor σ this number reduces to 36. With the potential character of the elastic stored deformation energy the number of components reduces to 21. For an elastic and isotropic material there is a further reduction to 2 independent constants, e.g. the Young's modulus E and the Poisson's ratio ν.
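The isotropic case can be made concrete: with the two constants E and ν (or equivalently the Lamé constants derived from them), the elasticity tensor takes the well-known closed form C_{ijkl} = λ δ_{ij} δ_{kl} + μ (δ_{ik} δ_{jl} + δ_{il} δ_{jk}). The sketch below builds this tensor and applies (4.7.18) to an illustrative strain tensor; the material values are steel-like example numbers:

```python
import numpy as np

E, nu = 210e9, 0.3                    # Young's modulus, Poisson's ratio
lam = E*nu / ((1 + nu)*(1 - 2*nu))    # Lame constants derived from E, nu
mu  = E / (2*(1 + nu))

d = np.eye(3)
# Isotropic elasticity tensor C_ijkl = lam d_ij d_kl + mu (d_ik d_jl + d_il d_jk).
C = (lam * np.einsum('ij,kl->ijkl', d, d)
     + mu * (np.einsum('ik,jl->ijkl', d, d) + np.einsum('il,jk->ijkl', d, d)))

# A symmetric (small) strain tensor, example values.
eps = np.array([[1.0, 0.2, 0.0],
                [0.2, 0.5, 0.1],
                [0.0, 0.1, 0.3]]) * 1e-3

sigma = np.einsum('ijkl,kl->ij', C, eps)   # sigma = C : eps, cf. (4.7.18)

# Hooke's law in the equivalent closed form for an isotropic material:
assert np.allclose(sigma, lam*np.trace(eps)*d + 2*mu*eps)
assert np.allclose(sigma, sigma.T)
```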
4.7.5 Tensors of Various Orders
Higher order tensors are represented with dyadic products of vectors, e.g. a simple third order tensor and a complete third order tensor,
³B = Σ_{i=1}^{n} a_i ⊗ b_i ⊗ c_i = Σ_{i=1}^{n} T_i ⊗ c_i = B^{ijk} g_i ⊗ g_j ⊗ g_k , (4.7.19)
and a simple fourth order tensor and a complete fourth order tensor,
C = Σ_{i=1}^{n} a_i ⊗ b_i ⊗ c_i ⊗ d_i = Σ_{i=1}^{n} S_i ⊗ T_i = C^{ijkl} g_i ⊗ g_j ⊗ g_k ⊗ g_l . (4.7.20)
For example the tensors from order zero to order four are summarized in index notation with a basis g_i:
a scalar quantity, or a tensor of order zero: α = ⁽⁰⁾α , (4.7.21)
a vector, or a first order tensor: v = ⁽¹⁾v = v^i g_i , (4.7.22)
a second order tensor: T = ⁽²⁾T = T^{jk} g_j ⊗ g_k , (4.7.23)
a third order tensor: ³B = B^{ij}_{..k} g_i ⊗ g_j ⊗ g^k , (4.7.24)
a fourth order tensor: ⁴C = C = C^{ijkl} g_i ⊗ g_j ⊗ g_k ⊗ g_l . (4.7.25)
Chapter 5
Vector and Tensor Analysis
SIMMONDS [12], HALMOS [6], ABRAHAM, MARSDEN, and RATIU [1], and MATTHEWS [11].
And in German DE BOER [3], STEIN ET AL. [13], and IBEN [7].
Chapter Table of Contents
5.1 Vector and Tensor Derivatives . . . . . . . . . . . . . . . . . . . . . . . . . 133
5.1.1 Functions of a Scalar Variable . . . . . . . . . . . . . . . . . . . . . . 133
5.1.2 Functions of more than one Scalar Variable . . . . . . . . . . . . . . . 134
5.1.3 The Moving Trihedron of a Space Curve in Euclidean Space . . . . . . 135
5.1.4 Covariant Base Vectors of a Curved Surface in Euclidean Space . . . . 138
5.1.5 Curvilinear Coordinate Systems in the 3-dim. Euclidean Space . . . . . 139
5.1.6 The Natural Basis in the 3-dim. Euclidean Space . . . . . . . . . . . . 140
5.1.7 Derivatives of Base Vectors, Christoffel Symbols . . . . . . . . . . . . 141
5.2 Derivatives and Operators of Fields . . . . . . . . . . . . . . . . . . . . . . 143
5.2.1 Definitions and Examples . . . . . . . . . . . . . . . . . . . . . . 143
5.2.2 The Gradient or Fréchet Derivative of Fields . . . . . . . . . . . . . 143
5.2.3 Index Notation of Base Vectors . . . . . . . . . . . . . . . . . . . . . 144
5.2.4 The Derivatives of Base Vectors . . . . . . . . . . . . . . . . . . . . . 145
5.2.5 The Covariant Derivative . . . . . . . . . . . . . . . . . . . . . . . . . 145
5.2.6 The Gradient in a 3-dim. Cartesian Basis of Euclidean Space . . . . . . 146
5.2.7 Divergence of Vector and Tensor Fields . . . . . . . . . . . . . . . . . 147
5.2.8 Index Notation of the Divergence of Vector Fields . . . . . . . . . . . 148
5.2.9 Index Notation of the Divergence of Tensor Fields . . . . . . . . . . . 148
5.2.10 The Divergence in a 3-dim. Cartesian Basis in Euclidean Space . . . . 149
5.2.11 Rotation or Curl of Vector and Tensor Fields . . . . . . . . . . . . . . 150
5.2.12 Laplacian of a Field . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
5.3 Integral Theorems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
5.3.1 Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
5.3.2 Gauss's Theorem for a Vector Field . . . . . . . . . . . . . . . . . . . 155
5.3.3 Divergence Theorem for a Tensor Field . . . . . . . . . . . . . . . . . 156
5.3.4 Integral Theorem for a Scalar Field . . . . . . . . . . . . . . . . . . . 156
5.3.5 Integral of a Cross Product or Stokes's Theorem . . . . . . . . . . . . 157
5.3.6 Another Interpretation of Gauss's Theorem . . . . . . . . . . . . . . . 158
5.1 Vector and Tensor Derivatives
5.1.1 Functions of a Scalar Variable
A function of one scalar variable may take as its value another scalar quantity, a vector, or even a tensor. These different types of functions of a scalar variable are denoted by
φ = φ̂ (α) a scalar-valued scalar function, (5.1.1)
v = v̂ (α) a vector-valued scalar function, (5.1.2)
and
T = T̂ (α) a tensor-valued scalar function. (5.1.3)
The usual derivative w.r.t. the scalar variable α of equation (5.1.1) is established with the Taylor series of the scalar-valued scalar function φ̂ (α) at a value α,
φ̂ (α + Δα) = φ̂ (α) + χ (α) Δα + O (Δα²) . (5.1.4)
The term χ (α), given by
χ (α) = lim_{Δα→0} [ φ̂ (α + Δα) − φ̂ (α) ] / Δα , (5.1.5)
is the derivative of a scalar w.r.t. a scalar quantity. The usual representations of the derivative dφ/dα are given by
χ (α) = dφ/dα = φ′ = lim_{Δα→0} [ φ̂ (α + Δα) − φ̂ (α) ] / Δα . (5.1.6)
The Taylor series of the vector-valued scalar function, see equation (5.1.2), at a value α is given by
v̂ (α + Δα) = v̂ (α) + y (α) Δα + O (Δα²) . (5.1.7)
The derivative of a vector w.r.t. a scalar quantity is defined by
y (α) = dv/dα = v′ = lim_{Δα→0} [ v̂ (α + Δα) − v̂ (α) ] / Δα . (5.1.8)
The total differential, also called the exact differential, of the vector function v (α) is given by
dv = v′ dα . (5.1.9)
The second derivative of the vector-valued scalar function v (α) at a value α is given by
dy (α)/dα = d/dα ( dv/dα ) = v″ . (5.1.10)
The Taylor series of the tensor-valued scalar function, see equation (5.1.3), at a value α is given by
T̂ (α + Δα) = T̂ (α) + Y (α) Δα + O (Δα²) . (5.1.11)
This implies the derivative of a tensor w.r.t. a scalar quantity,
Y (α) = dT/dα = T′ = lim_{Δα→0} [ T̂ (α + Δα) − T̂ (α) ] / Δα . (5.1.12)
In the following some important identities are listed,
(φ v)′ = φ′ v + φ v′ , (5.1.13)
(v ⊗ w)′ = v′ ⊗ w + v ⊗ w′ , (5.1.14)
(v · w)′ = v′ · w + v · w′ , (5.1.15)
(v × w)′ = v′ × w + v × w′ , (5.1.16)
(T · v)′ = T′ · v + T · v′ , (5.1.17)
(S · T)′ = S′ · T + S · T′ , (5.1.18)
( T^{−1} )′ = −T^{−1} · T′ · T^{−1} . (5.1.19)
As a short example for a proof of the above identities, the proof of equation (5.1.19) is given by
T · T^{−1} = 1 ,
( T · T^{−1} )′ = T′ · T^{−1} + T · ( T^{−1} )′ = 0 ,
T · ( T^{−1} )′ = −T′ · T^{−1} ⇒ ( T^{−1} )′ = −T^{−1} · T′ · T^{−1} .
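Identity (5.1.19) can be checked numerically by comparing it with a central finite-difference derivative of T^{-1}. The tensor-valued function T(α) below is a hypothetical example, not from the text:

```python
import numpy as np

# T(alpha): a hypothetical tensor-valued scalar function and its exact derivative.
def T(a):
    return np.array([[2.0 + a, a**2, 0.0],
                     [0.0,     3.0,  a  ],
                     [a,       0.0,  4.0]])

def dT(a):
    return np.array([[1.0, 2*a, 0.0],
                     [0.0, 0.0, 1.0],
                     [1.0, 0.0, 0.0]])

a, h = 0.7, 1e-6
Tinv = np.linalg.inv(T(a))

# Central finite-difference derivative of T^{-1} ...
dTinv_fd = (np.linalg.inv(T(a + h)) - np.linalg.inv(T(a - h))) / (2*h)

# ... matches identity (5.1.19): (T^{-1})' = -T^{-1} T' T^{-1}.
assert np.allclose(dTinv_fd, -Tinv @ dT(a) @ Tinv, atol=1e-6)
```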
5.1.2 Functions of more than one Scalar Variable
Like for functions of one scalar variable it is also possible to define various functions of more than one scalar variable, e.g.
φ = φ̂ (α¹, α², . . . , αⁱ, . . . , αⁿ) a scalar-valued function of multiple variables, (5.1.20)
v = v̂ (α¹, α², . . . , αⁱ, . . . , αⁿ) a vector-valued function of multiple variables, (5.1.21)
and finally
T = T̂ (α¹, α², . . . , αⁱ, . . . , αⁿ) a tensor-valued function of multiple variables. (5.1.22)
Instead of establishing the total differentials like in the section before, it is now necessary to establish the partial derivatives of the functions w.r.t. the various variables. Starting with the scalar-valued function (5.1.20), the partial derivative w.r.t. the i-th scalar variable αⁱ is defined by
∂φ/∂αⁱ = φ,_i = lim_{Δα→0} [ φ̂ (α¹, . . . , αⁱ + Δα, . . . , αⁿ) − φ̂ (α¹, . . . , αⁱ, . . . , αⁿ) ] / Δα . (5.1.23)
With these partial derivatives φ,_i the exact differential of the function is given by
dφ = φ,_i dαⁱ . (5.1.24)
The partial derivatives of the vector-valued function (5.1.21) w.r.t. the scalar variable αⁱ are defined by
∂v/∂αⁱ = v,_i = lim_{Δα→0} [ v̂ (α¹, . . . , αⁱ + Δα, . . . , αⁿ) − v̂ (α¹, . . . , αⁱ, . . . , αⁿ) ] / Δα , (5.1.25)
and its exact differential is given by
dv = v,_i dαⁱ . (5.1.26)
The partial derivatives of the tensor-valued function (5.1.22) w.r.t. the scalar variable αⁱ are defined by
∂T/∂αⁱ = T,_i = lim_{Δα→0} [ T̂ (α¹, . . . , αⁱ + Δα, . . . , αⁿ) − T̂ (α¹, . . . , αⁱ, . . . , αⁿ) ] / Δα , (5.1.27)
and the exact differential is given by
dT = T,_i dαⁱ . (5.1.28)
5.1.3 The Moving Trihedron of a Space Curve in Euclidean Space
A vector function x = x (θ¹) of one variable θ¹ in the Euclidean vector space E³ can be represented by a space curve. The vector x (θ¹) is the position vector from the origin O to the point P on the space curve. The tangent vector t (θ¹) at a point P is then defined by
t (θ¹) = x′ (θ¹) = dx (θ¹) / dθ¹ . (5.1.29)
The tangent unit vector, or just the tangent unit, of the space curve at a point P with the position vector x is defined by
t (s) = ( ∂x/∂θ¹ ) ( ∂θ¹/∂s ) = dx/ds , and |t (s)| = 1 . (5.1.30)
The normal vector at a point P of a space curve is defined with the derivative of the tangent vector w.r.t. the curve parameter s by
ñ = dt/ds = d²x/ds² . (5.1.31)
The term 1/ρ is a measure of the curvature, or just the curvature, of the space curve at the point P. The normal vector ñ at a point P is perpendicular to the tangent vector t at this point, ñ ⊥ t, i.e. ñ · t = 0,
Figure 5.1: The tangent vector in a point P on a space curve.
and the curvature is given by
1/ρ² = ( d²x/ds² ) · ( d²x/ds² ) . (5.1.32)
The proof of this assumption starts with the scalar product of two tangent vectors,
t · t = 1 ,
d/ds ( t · t ) = 0 ,
2 ( dt/ds ) · t = 0 ,
and finally results in the statement that the scalar product of the derivative of the tangent vector w.r.t. the curve parameter and the tangent vector itself equals zero, i.e. these two vectors are perpendicular to each other,
dt/ds ⊥ t .
This implies the definition of the curvature 1/ρ of a curve at a point P. With the curvature 1/ρ the normal unit vector n, or just the normal unit, is defined by
n = ρ ñ . (5.1.33)
Figure 5.2: The moving trihedron.
The so-called binormal unit vector b, or just the binormal unit, is the vector perpendicular to the tangent vector t and the normal vector n at a point P, and is defined by
b = t × n . (5.1.34)
The derivative of the binormal vector w.r.t. the curve parameter s is a measure for the torsion of the curve in space at a point P, and is computed by
db/ds = ( dt/ds ) × n + t × ( dn/ds ) = (1/ρ) n × n + t × ( dn/ds ) = 0 + t × ( dn/ds ) ,
db/ds = −(1/τ) n . (5.1.35)
This yields the definition of the torsion 1/τ of a curve at a point P, and with equation (5.1.35) the torsion is given by
1/τ = ρ² ( dx/ds × d²x/ds² ) · d³x/ds³ . (5.1.36)
The three unit vectors t, n and b form the moving trihedron of a space curve in every point P. The derivatives w.r.t. the curve parameter s are the so-called Serret-Frenet equations, given
below,
dt/ds = (1/ρ) n = d²x/ds² , and b = t × n , (5.1.37)
dn/ds = ( db/ds ) × t + b × ( dt/ds ) = −(1/τ) n × t + (1/ρ) b × n = −(1/ρ) t + (1/τ) b , (5.1.38)
db/ds = −(1/τ) n = d/ds ( t × n ) . (5.1.39)
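The formulas above can be made concrete for a circular helix, for which curvature and torsion are constant and known in closed form. The following NumPy sketch parametrizes the helix x(θ) = (a cos θ, a sin θ, b θ) by arc length, using ds = √(a² + b²) dθ; the values of a, b and the curve parameter are illustrative:

```python
import numpy as np

a, b = 2.0, 1.0          # helix radius and pitch parameter (example values)
th = 0.4                 # curve parameter theta^1

# Derivatives of the helix w.r.t. arc length s, using ds = sqrt(a^2+b^2) dtheta.
c = np.sqrt(a**2 + b**2)
x1 = np.array([-a*np.sin(th),  a*np.cos(th), b]) / c          # dx/ds = t
x2 = np.array([-a*np.cos(th), -a*np.sin(th), 0.0]) / c**2     # d2x/ds2
x3 = np.array([ a*np.sin(th), -a*np.cos(th), 0.0]) / c**3     # d3x/ds3

# Curvature (5.1.32): 1/rho = |d2x/ds2| = a/(a^2+b^2) for the helix.
kappa = np.sqrt(x2 @ x2)
assert np.isclose(kappa, a / (a**2 + b**2))

# Torsion (5.1.36): 1/tau = rho^2 (x' x x'') . x''' = b/(a^2+b^2).
tau_inv = (1.0 / kappa**2) * (np.cross(x1, x2) @ x3)
assert np.isclose(tau_inv, b / (a**2 + b**2))

# The moving trihedron t, n, b is orthonormal.
t_vec = x1
n_vec = x2 / kappa
b_vec = np.cross(t_vec, n_vec)
assert np.allclose([t_vec @ t_vec, n_vec @ n_vec, b_vec @ b_vec], 1.0)
assert np.isclose(t_vec @ n_vec, 0.0)
```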
5.1.4 Covariant Base Vectors of a Curved Surface in Euclidean Space
The vector-valued function x = x (θ¹, θ²) with the two scalar variables θ¹ and θ² represents a curved surface in the Euclidean vector space E³.

Figure 5.3: The covariant base vectors of a curved surface.

The covariant base vectors of the curved surface are given by
a_1 = ∂x/∂θ¹ , and a_2 = ∂x/∂θ² . (5.1.40)
The metric coefficients of the curved surface are computed with the base vectors, with the following convention for the small Greek indices,
a_{αβ} = a_α · a_β , and α, β = 1, 2 . (5.1.41)
The normal unit vector of the curved surface, perpendicular to the vectors a_1 and a_2, is defined by
n = a_3 = ( a_1 × a_2 ) / |a_1 × a_2| . (5.1.42)
With this relation the remaining metric coefficients are given by
a_{α3} = 0 , and a_{33} = 1 , (5.1.43)
and finally the determinant of the metric coefficients is given by
a = det [ a_{αβ} ] . (5.1.44)
The absolute value of a line element dx is computed by
ds² = dx · dx = x,_α dθ^α · x,_β dθ^β = a_{αβ} dθ^α dθ^β ,
and finally
ds = √( a_{αβ} dθ^α dθ^β ) . (5.1.45)
The differential element of area dA is given by
dA = √a dθ¹ dθ² . (5.1.46)
The contravariant base vectors of the curved surface are computed with the metric coefficients and the covariant base vectors,
a^α = a^{αβ} a_β , (5.1.47)
and the Kronecker delta is given by
a^α · a_β = δ^α_β . (5.1.48)
5.1.5 Curvilinear Coordinate Systems in the 3-dim. Euclidean Space
The position vector x in an orthonormal Cartesian coordinate system is given by
x = x^i e_i . (5.1.49)
The curvilinear coordinates, resp. the curvilinear coordinate system, are introduced by the following relations between the curvilinear coordinates θ^i and the Cartesian coordinates x^j and base vectors e_j,
θ^i = θ^i ( x¹, x², x³ ) . (5.1.50)
The inverses of these relations are uniquely defined in the domain by
x^i = x^i ( θ¹, θ², θ³ ) , (5.1.51)
if the following conditions hold, . . .
Figure 5.4: Curvilinear coordinates in a Cartesian coordinate system.
- the function is at least once continuously differentiable,
- and the Jacobian, or more precisely the determinant of the Jacobian matrix, is not equal to zero,
J = det [ ∂x^i / ∂θ^k ] ≠ 0 . (5.1.52)
The vector x w.r.t. the curvilinear coordinates is represented by
x = x^i ( θ¹, θ², θ³ ) e_i . (5.1.53)
5.1.6 The Natural Basis in the 3-dim. Euclidean Space
A basis at the point P, represented by the position vector x and tangential to the curvilinear coordinates θ^i, is introduced by

Figure 5.5: The natural basis of a curvilinear coordinate system.

g_k = ( ∂x^i ( θ¹, θ², θ³ ) / ∂θ^k ) e_i . (5.1.54)
These base vectors g_k are the covariant base vectors of the natural basis and form the so-called natural basis. In general these base vectors are not perpendicular to each other. Furthermore this basis g_k changes along the curvilinear coordinates in every point, because it depends on the position vectors of the points along the curvilinear coordinates. The vector x w.r.t. the covariant basis is given by
x = x̄^i g_i . (5.1.55)
For each covariant natural basis g_k an associated contravariant basis with the contravariant base vectors of the natural basis g^i is defined by
g_k · g^i = δ^i_k . (5.1.56)
The vector x w.r.t. the contravariant basis is represented by
x = x_i g^i . (5.1.57)
The covariant coordinates x_i and the contravariant coordinates x̄^i of the position vector x are connected by the metric coefficients like this,
x̄^i = g^{ik} x_k , (5.1.58)
x_i = g_{ik} x̄^k . (5.1.59)
5.1.7 Derivatives of Base Vectors, Christoffel Symbols
The derivative of a covariant base vector g
i
E
3
w.r.t. a coordinate
k
is again a vector, which
could be described by a linear combination of the base vectors g
1
, g
2
, and g
3
,
g
i

k
= g
i,k
!
=
s
ik
g
s
. (5.1.60)
TU Braunschweig, CSE Vector and Tensor Calculus 22nd October 2003
142 Chapter 5. Vector and Tensor Analysis
The \Gamma^s_{ik} are the components of the Christoffel symbol \Gamma_{(i)}. The Christoffel symbol can be described by a second order tensor w.r.t. the basis g_i,

\Gamma_{(i)} = \Gamma^s_{ij}\, g_s \otimes g^j . (5.1.61)

With this definition of the Christoffel symbol as a second order tensor, a linear mapping of the base vector g_k is given by

g_{i,k} = \Gamma_{(i)}\, g_k = \Gamma^s_{ij} \left(g_s \otimes g^j\right) g_k = \Gamma^s_{ij} \left(g^j \cdot g_k\right) g_s = \Gamma^s_{ij}\, \delta^j_k\, g_s , (5.1.62)

and finally

g_{i,k} = \Gamma^s_{ik}\, g_s . (5.1.63)

Equation (5.1.63) is again the definition of the Christoffel symbol, like in equation (5.1.60). With this relation the components of the Christoffel symbol can be computed like this,

g_{i,k} \cdot g^s = \Gamma^r_{ik}\, g_r \cdot g^s = \Gamma^r_{ik}\, \delta^s_r = \Gamma^s_{ik} . (5.1.64)

Like for any other second order tensor, the raising and lowering of the indices of the Christoffel symbol is possible with the contravariant metric coefficients g^{ls} and the covariant metric coefficients g_{ls}, e.g.

\Gamma_{ikl} = g_{ls}\, \Gamma^s_{ik} . (5.1.65)

Also important are the relations between the derivatives of the metric coefficients w.r.t. the coordinates \Theta^i and the components of the Christoffel symbol,

\Gamma_{ikl} = \frac{1}{2} \left(g_{kl,i} + g_{il,k} - g_{ik,l}\right) . (5.1.66)
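Formulae (5.1.66) and (5.1.65) can be turned into a short symbolic computation. The cylindrical coordinates used here are an assumed example; the values \Gamma^{\rho}_{\phi\phi} = -\rho and \Gamma^{\phi}_{\rho\phi} = 1/\rho are the well-known Christoffel symbols of that system:

```python
import sympy as sp

rho, phi, z = sp.symbols('rho phi z', positive=True)
Theta = [rho, phi, z]
x = sp.Matrix([rho*sp.cos(phi), rho*sp.sin(phi), z])
g = [x.diff(t) for t in Theta]
G = sp.Matrix(3, 3, lambda i, k: sp.simplify(g[i].dot(g[k])))
Ginv = G.inv()

# Gamma_ikl = (g_kl,i + g_il,k - g_ik,l) / 2, cf. (5.1.66)
def Gamma_low(i, k, l):
    return sp.simplify((G[k, l].diff(Theta[i])
                        + G[i, l].diff(Theta[k])
                        - G[i, k].diff(Theta[l])) / 2)

# raising the last index with g^sl, cf. (5.1.65): Gamma^s_ik = g^sl Gamma_ikl
def Gamma(s, i, k):
    return sp.simplify(sum(Ginv[s, l]*Gamma_low(i, k, l) for l in range(3)))

print(Gamma(0, 1, 1))   # Gamma^rho_phiphi = -rho
print(Gamma(1, 0, 1))   # Gamma^phi_rhophi = 1/rho
```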
5.2 Derivatives and Operators of Fields
5.2.1 Definitions and Examples
A function of an Euclidean vector, for example of a position vector x \in E^3, is called a field. The fields are separated into three classes by their values,

\phi = \phi(x) — the scalar-valued vector function or scalar field, (5.2.1)
v = v(x) — the vector-valued vector function or vector field, (5.2.2)

and

T = \hat{T}(x) — the tensor-valued vector function or tensor field. (5.2.3)

For example, some frequently used fields in the Euclidean vector space are:

scalar fields — temperature field, pressure field, density field,
vector fields — velocity field, acceleration field,
tensor fields — stress state in a volume element.
5.2.2 The Gradient or Fréchet Derivative of Fields
A vector-valued vector function or vector field v = v(x) is differentiable at a point P, represented by a position vector x, if the following linear mapping exists,

v(x + y) = v(x) + L(x)\, y + O\left(|y|^2\right) , \quad\text{as}\quad |y| \to 0 , (5.2.4)
L(x)\, \frac{y}{|y|} = \lim_{|y| \to 0} \frac{v(x + y) - v(x)}{|y|} . (5.2.5)

The linear mapping L(x) is called the gradient or the Fréchet derivative,

L(x) = grad v(x) . (5.2.6)

The gradient grad v(x) of a vector-valued vector function (vector field) is a tensor-valued function (a second order tensor depending on the position vector x). For a scalar-valued vector function or scalar field \phi(x) there exists an analogue to equation (5.2.4), resp. (5.2.5),

\phi(x + y) = \phi(x) + l(x) \cdot y + O\left(|y|^2\right) , \quad\text{as}\quad |y| \to 0 , (5.2.7)
l(x) \cdot \frac{y}{|y|} = \lim_{|y| \to 0} \frac{\phi(x + y) - \phi(x)}{|y|} . (5.2.8)

With this relation the gradient of the scalar field is a vector-valued vector function (vector field) given by

l(x) = grad \phi(x) . (5.2.9)
Finally, for a tensor-valued vector function or tensor field \hat{T}(x) the relations analogue to equation (5.2.4), resp. (5.2.5), are given by

\hat{T}(x + y) = \hat{T}(x) + {}^3 L(x)\, y + {}^3 O\left(|y|^2\right) , \quad\text{as}\quad |y| \to 0 , (5.2.10)
{}^3 L(x)\, \frac{y}{|y|} = \lim_{|y| \to 0} \frac{\hat{T}(x + y) - \hat{T}(x)}{|y|} . (5.2.11)

The gradient of the second order tensor field is a third order tensor-valued vector function (third order tensor field) given by

{}^3 L(x) = grad \hat{T}(x) . (5.2.12)

The gradient of a second order tensor, grad(v \otimes w) or grad T, is a third order tensor, because (grad v) \otimes w, being a dyadic product of a second order tensor and a vector ("first order tensor"), is a third order tensor. For arbitrary scalar fields \phi, \psi \in \mathbb{R}, vector fields v, w \in V, and a tensor field T \in V \otimes V the following identities hold,

grad(\phi \psi) = \phi\, grad \psi + \psi\, grad \phi , (5.2.13)
grad(\phi v) = v \otimes grad \phi + \phi\, grad v , (5.2.14)
grad(\phi T) = T \otimes grad \phi + \phi\, grad T , (5.2.15)
grad(v \cdot w) = (grad v)^T\, w + (grad w)^T\, v , (5.2.16)
grad(v \otimes w) = v \otimes grad w + grad v \otimes w , (5.2.17)
grad(v \times w) = (grad v) \times w - (grad w) \times v . (5.2.18)

It is important to notice that the gradient of a position vector is the identity tensor,

grad x = 1 . (5.2.19)
5.2.3 Index Notation of Base Vectors
Most of the relations discussed up to now hold for any n-dimensional Euclidean vector space E^n, but the most important applications are in continuum mechanics and in the 3-dimensional Euclidean vector space E^3. The scalar-valued, vector-valued or tensor-valued functions depend on a vector x \in V, e.g. a position vector at a point P. In the sections below the following bases are used: the curvilinear coordinates \Theta^i with the covariant base vectors,

g_i = \frac{\partial x}{\partial \Theta^i} = x_{,i} , (5.2.20)

and the Cartesian coordinates x^i = x_i with the orthonormal basis

e^i = e_i . (5.2.21)
5.2.4 The Derivatives of Base Vectors
In section (5.1) the partial derivatives of the base vectors g_i w.r.t. the coordinates \Theta^k were introduced by

\frac{\partial g_i}{\partial \Theta^k} = g_{i,k} = \Gamma^s_{ik}\, g_s . (5.2.22)

With the Christoffel symbols defined by equations (5.1.60) and (5.1.61),

\Gamma_{(i)} = \Gamma^s_{ij}\, g_s \otimes g^j , (5.2.23)

the derivatives of the base vectors are rewritten as

g_{i,k} = \Gamma_{(i)}\, g_k . (5.2.24)

The definition of the gradient, equations (5.2.4)-(5.2.6), compared with equation (5.2.23), shows that the Christoffel symbols are computed by

\Gamma_{(i)} = grad g_i . (5.2.25)

Proof. The proof of this relation between the gradient of the base vectors and the Christoffel symbols,

grad g_i = \Gamma_{(i)} = g_{i,j} \otimes g^j , (5.2.26)

is given by

g_{i,k} = \Gamma_{(i)}\, g_k = \left(g_{i,j} \otimes g^j\right) g_k = \left(g^j \cdot g_k\right) g_{i,j} = \delta^j_k\, g_{i,j} = g_{i,k} .

Finally the gradient of a base vector is represented in index notation by

grad g_i = g_{i,j} \otimes g^j = \Gamma^s_{ij}\, g_s \otimes g^j . (5.2.27)
5.2.5 The Covariant Derivative
Let v = v(x) = v^i g_i be a vector field; then, with the product rule (5.2.14), the gradient of the vector field is given by

grad v = grad v(x) = grad\left(v^i g_i\right) = g_i \otimes grad v^i + v^i\, grad g_i . (5.2.28)

The gradient of a scalar-valued vector function \phi(x) is defined by

grad \phi = \frac{\partial \phi}{\partial \Theta^i}\, g^i = \phi_{,i}\, g^i , (5.2.29)
then the gradient of the contravariant coefficients v^i(x) in the first term of equation (5.2.28) is given by

grad v^i = v^i_{,k}\, g^k . (5.2.30)

Inserting equation (5.2.30) in (5.2.28), together with equation (5.2.25), the complete gradient of a vector field can be given by

grad v = g_i \otimes v^i_{,k}\, g^k + v^i\, \Gamma_{(i)} , (5.2.31)

and finally with equation (5.2.23),

grad v = v^i_{,k}\, g_i \otimes g^k + v^i\, \Gamma^s_{ik}\, g_s \otimes g^k . (5.2.32)

The dummy indices i and s in the second term are interchanged, i.e. i \to s and s \to i. Rewriting equation (5.2.32) and factoring out the dyadic product implies

grad v = \left(v^i_{,k} + v^s\, \Gamma^i_{sk}\right) \left(g_i \otimes g^k\right) . (5.2.33)

The term

v^i|_k = v^i_{,k} + v^s\, \Gamma^i_{sk} (5.2.34)

is called the covariant derivative of the coefficient v^i w.r.t. the coordinate \Theta^k and the basis g_i. Then the gradient of a vector field is given by

grad v = v^i|_k \left(g_i \otimes g^k\right) , \quad\text{with}\quad v^i|_k = v^i_{,k} + v^s\, \Gamma^i_{sk} . (5.2.35)
5.2.6 The Gradient in a 3-dim. Cartesian Basis of Euclidean Space
Let \phi be a scalar field in a space with the Cartesian basis e_i,

\phi = \phi(x) , \quad\text{with}\quad x = x^i e_i = x_i e^i = x_i e_i , (5.2.36)

then the gradient of the scalar field in a Cartesian basis is given by

grad \phi = \phi_{,i}\, e_i , \quad\text{and}\quad \phi_{,i} = \frac{\partial \phi}{\partial x_i} . (5.2.37)

The nabla operator \nabla is introduced by

\nabla = (\ldots)_{,i}\, e_i = \frac{\partial (\ldots)}{\partial x_1}\, e_1 + \frac{\partial (\ldots)}{\partial x_2}\, e_2 + \frac{\partial (\ldots)}{\partial x_3}\, e_3 ,

and finally defined by

\nabla = \frac{\partial (\;\cdot\;)}{\partial x_i}\, e_i . (5.2.38)
This definition implies another notation for the gradient in a 3-dimensional Cartesian basis of the Euclidean vector space,

grad \phi = \nabla \phi . (5.2.39)

Let v = v(x) be a vector field in a space with the Cartesian basis e_i,

v = v(x) = v^i e_i = v_i e^i = v_i e_i , \quad\text{with}\quad v_i = v_i(x_1, x_2, x_3) , (5.2.40)

then the gradient of the vector field in a Cartesian basis is given with the relation of equation (5.2.14) by

grad v = grad v(x) = grad(v_i e_i) = e_i \otimes grad v_i + v_i\, grad e_i . (5.2.41)

Computing the second term of equation (5.2.41) shows that all derivatives of the base vectors w.r.t. the position vector x, see the definition (5.2.38), are equal to zero,

grad e_i = (e_i)_{,j} \otimes e_j = \frac{\partial (e_i)}{\partial x_1} \otimes e_1 + \frac{\partial (e_i)}{\partial x_2} \otimes e_2 + \frac{\partial (e_i)}{\partial x_3} \otimes e_3 = 0 . (5.2.42)

Then equation (5.2.41) simplifies to

grad v = e_i \otimes grad v_i + 0 = e_i \otimes v_{i,k}\, e_k , (5.2.43)

and finally the gradient of a vector field in a 3-dimensional Cartesian basis of the Euclidean vector space is given by

grad v = v_{i,k} \left(e_i \otimes e_k\right) . (5.2.44)
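Relation (5.2.44) can be sketched numerically with central finite differences; the sample vector field below is an arbitrary illustrative choice:

```python
import numpy as np

def v(x):
    # a sample vector field v(x) = (x1**2, x1*x2, x3), chosen only for illustration
    return np.array([x[0]**2, x[0]*x[1], x[2]])

def grad_fd(f, x, h=1e-6):
    # (grad v)_ik = dv_i/dx_k, approximated by central differences, cf. (5.2.44)
    n = len(x)
    G = np.zeros((n, n))
    for k in range(n):
        e = np.zeros(n); e[k] = h
        G[:, k] = (f(x + e) - f(x - e)) / (2*h)
    return G

x0 = np.array([1.0, 2.0, 3.0])
G = grad_fd(v, x0)
# analytic gradient of the sample field: rows are v_i, columns are x_k
G_exact = np.array([[2*x0[0], 0.0, 0.0],
                    [x0[1], x0[0], 0.0],
                    [0.0, 0.0, 1.0]])
print(np.allclose(G, G_exact, atol=1e-6))   # True
```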
5.2.7 Divergence of Vector and Tensor Fields
The divergence of a vector field is defined by

div v = tr(grad v) = grad v : 1 , (5.2.45)

and must be a scalar quantity, because the gradient of a vector is a second order tensor, and the trace of a second order tensor defines a scalar quantity. The divergence of a tensor field is defined by

div T = grad(T) : 1 , \quad\text{with}\quad T \in V \otimes V , (5.2.46)

and must be a vector-valued quantity, because the scalar product of the second order unit tensor 1 and the third order tensor grad(T) is a vector-valued quantity. Another possible definition is given by

a \cdot div T = div\left(T^T a\right) = grad\left(T^T a\right) : 1 . (5.2.47)
For an arbitrary scalar field \phi \in \mathbb{R}, arbitrary vector fields v, w \in V, and arbitrary tensor fields S, T \in V \otimes V the following identities hold,

div(\phi v) = v \cdot grad \phi + \phi\, div v , (5.2.48)
div(\phi T) = T\, grad \phi + \phi\, div T , (5.2.49)
div(grad v)^T = grad(div v) , (5.2.50)
div(v \times w) = (grad v \times w) : 1 - (grad w \times v) : 1 = w \cdot rot v - v \cdot rot w , (5.2.51)
div(v \otimes w) = (grad v)\, w + (div w)\, v , (5.2.52)
div(T v) = \left(div T^T\right) \cdot v + T^T : grad v , (5.2.53)
div(v \times T) = v \times div T + grad v \times T , (5.2.54)
div(T S) = (grad T)\, S + T\, div S . (5.2.55)
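Identity (5.2.48), for instance, can be verified symbolically in a Cartesian frame; the fields below are arbitrary illustrative choices:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
X = [x1, x2, x3]
phi = x1*x2 + sp.sin(x3)                        # sample scalar field (illustrative)
v = sp.Matrix([x1*x2, x2*x3, x3*x1])            # sample vector field (illustrative)

div = lambda w: sum(w[i].diff(X[i]) for i in range(3))
grad = lambda f: sp.Matrix([f.diff(xi) for xi in X])

# div(phi v) = v . grad phi + phi div v, cf. (5.2.48)
lhs = div(phi*v)
rhs = v.dot(grad(phi)) + phi*div(v)
print(sp.simplify(lhs - rhs))   # 0
```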
5.2.8 Index Notation of the Divergence of Vector Fields
Let v = v(x) = v^i g_i \in V be a vector field, with v^i = v^i\left(\Theta^1, \Theta^2, \Theta^3, \ldots, \Theta^n\right); then a basis is given by

g_i = x_{,i} = \frac{\partial x}{\partial \Theta^i} . (5.2.56)

The definition of the divergence (5.2.45) of a vector field, using the index notation of the gradient (5.2.35), implies

div v = grad v : 1 = \left[v^i|_k \left(g_i \otimes g^k\right)\right] : \left[\delta^r_s \left(g_r \otimes g^s\right)\right] = v^i|_k\, \delta^r_s\, g_{ir}\, g^{ks} = v^i|_k\, g_{is}\, g^{ks} = v^i|_k\, \delta^k_i ,

div v = v^i|_i . (5.2.57)

The divergence of a vector field is a scalar quantity and an invariant.
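The covariant derivative (5.2.34) and the contraction (5.2.57) can be checked on a concrete field. The cylindrical coordinates and their Christoffel symbols below are assumed example data (they follow from (5.1.66)); the radial field with contravariant components v^\rho = \rho corresponds to (x, y, 0) in Cartesian components, whose divergence is 2:

```python
import sympy as sp

rho, phi, z = sp.symbols('rho phi z', positive=True)
Theta = [rho, phi, z]
# nonzero Christoffel symbols of cylindrical coordinates (assumed, from (5.1.66)):
#   Gamma^rho_phiphi = -rho,  Gamma^phi_rhophi = Gamma^phi_phirho = 1/rho
Gam = {(0, 1, 1): -rho, (1, 0, 1): 1/rho, (1, 1, 0): 1/rho}

def cov_deriv(v, i, k):
    # v^i|_k = v^i_,k + v^s Gamma^i_sk, cf. (5.2.34)
    return sp.simplify(v[i].diff(Theta[k])
                       + sum(v[s]*Gam.get((i, s, k), 0) for s in range(3)))

# radial field: contravariant components v^rho = rho, v^phi = v^z = 0
v = [rho, sp.Integer(0), sp.Integer(0)]
div_v = sp.simplify(sum(cov_deriv(v, i, i) for i in range(3)))
print(div_v)   # 2, cf. (5.2.57)
```

Note that the naive sum of partial derivatives v^i_{,i} would give 1; the Christoffel term contributes the missing 1.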
5.2.9 Index Notation of the Divergence of Tensor Fields
Let T be a tensor, given by

T = \hat{T}(x) = T^{ik} \left(g_i \otimes g_k\right) \in V \otimes V , (5.2.58)

with

T^{ik} = \hat{T}^{ik}\left(\Theta^1, \Theta^2, \Theta^3, \ldots, \Theta^n\right) , (5.2.59)

and with equation (5.2.46) the divergence of this tensor is given by

div T = grad T : 1 , \quad\text{with}\quad 1 = \delta^r_s\, g_r \otimes g^s . (5.2.60)
The divergence of a second order tensor is a vector, see also equation (5.2.52),

div T = div\left(T^{ik}\, g_i \otimes g_k\right) = \left[grad\left(T^{ik}\, g_i\right)\right] g_k + \left[div\, g_k\right] T^{ik}\, g_i , (5.2.61)

and with equation (5.2.14),

div T = \left[g_i \otimes grad T^{ik} + T^{ik}\, grad g_i\right] g_k + \left[grad g_k : 1\right] T^{ik}\, g_i
= \left[g_i \otimes T^{ik}_{,j}\, g^j + T^{ik}\, \Gamma^s_{ij}\, g_s \otimes g^j\right] g_k + \left[\left(\Gamma^s_{kj}\, g_s \otimes g^j\right) : \delta^r_l \left(g^l \otimes g_r\right)\right] T^{ik}\, g_i
= g_i\, T^{ik}_{,j}\, \delta^j_k + T^{ik}\, \Gamma^s_{ij}\, \delta^j_k\, g_s + \Gamma^s_{kj}\, \delta^r_l\, \delta^l_s\, \delta^j_r\, T^{ik}\, g_i
= T^{ik}_{,k}\, g_i + T^{ik}\, \Gamma^s_{ik}\, g_s + \Gamma^j_{kj}\, T^{ik}\, g_i ,

and finally, after renaming the dummy indices,

div T = \left(T^{kl}_{,l} + T^{lm}\, \Gamma^k_{lm} + T^{km}\, \Gamma^l_{ml}\right) g_k . (5.2.62)

The term T^{kl}|_l, defined by

T^{kl}|_l = T^{kl}_{,l} + T^{lm}\, \Gamma^k_{lm} + T^{km}\, \Gamma^l_{ml} , (5.2.63)

is the so-called covariant derivative of the tensor coefficients w.r.t. the coordinates \Theta^l; then the divergence is given by

div T = T^{kl}|_l\; g_k . (5.2.64)

Other representations are possible, e.g. a mixed formulation is given by

div T = T^k_{\;l}|_k\; g^l , (5.2.65)

with the covariant derivative T^k_{\;l}|_k,

T^k_{\;l}|_k = T^k_{\;l,k} - T^k_{\;n}\, \Gamma^n_{lk} + T^n_{\;l}\, \Gamma^k_{kn} . (5.2.66)
5.2.10 The Divergence in a 3-dim. Cartesian Basis in Euclidean Space
A vector field v in a Cartesian basis e_i \in E^3 is represented by equation (5.2.40) and its gradient by equation (5.2.41). The divergence defined by (5.2.45), rewritten using the gradient of a vector field in a Cartesian basis (5.2.44), is given by

div v = grad v : 1 = \left(v_{i,k}\, e_i \otimes e_k\right) : \left(\delta_{rs}\, e_r \otimes e_s\right) = v_{i,k}\, \delta_{rs}\, \delta_{ir}\, \delta_{ks} = v_{i,k}\, \delta_{ik} .

The divergence of a vector field in the 3-dimensional Euclidean space with a Cartesian basis E^3 is a scalar invariant, and is given by

div v = v_{i,i} , (5.2.67)

or in its complete description by

div v = \frac{\partial v_1}{\partial x_1} + \frac{\partial v_2}{\partial x_2} + \frac{\partial v_3}{\partial x_3} . (5.2.68)
5.2.11 Rotation or Curl of Vector and Tensor Fields
The rotation of a vector field v(x) is defined with the third order fundamental tensor {}^3 E by

rot v = {}^3 E : (grad v)^T . (5.2.69)

In English textbooks, in most cases the curl operator curl is used instead of the rotation operator rot,

rot v = curl v . (5.2.70)

The rotation, resp. curl, of a vector field, rot v(x) or curl v(x), is a unique vector field. Sometimes another definition of the rotation of a vector field is given by

rot v = div(1 \times v) . (5.2.71)

For an arbitrary scalar field \phi \in \mathbb{R}, arbitrary vector fields v, w \in V, and an arbitrary tensor field T \in V \otimes V the following identities hold,

rot grad \phi = 0 , (5.2.72)
div rot v = 0 , (5.2.73)
rot grad v = 0 , (5.2.74)
rot(grad v)^T = grad rot v , (5.2.75)
rot(\phi v) = \phi\, rot v + grad \phi \times v , (5.2.76)
rot(v \times w) = v\, div w - (grad w)\, v - w\, div v + (grad v)\, w = div(v \otimes w - w \otimes v) , (5.2.77)
div rot T = rot div T^T , (5.2.78)
div(rot T)^T = 0 , (5.2.79)
(rot rot T)^T = rot rot T^T , (5.2.80)
rot(\phi 1) = \left[rot(\phi 1)\right]^T , (5.2.81)
rot(T v) = rot(T^T)\, v + (grad v)^T \times T . (5.2.82)

It is also important to notice that, if the tensor field T is symmetric, then the following identity holds,

rot T : 1 = 0 . (5.2.83)
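The identities (5.2.72) and (5.2.73) can be verified for sample fields in a Cartesian basis (the fields are arbitrary illustrative choices):

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
X = [x1, x2, x3]
grad = lambda f: sp.Matrix([f.diff(xi) for xi in X])
div = lambda w: sum(w[i].diff(X[i]) for i in range(3))
def rot(w):
    # Cartesian components of the rotation (curl) of a vector field
    return sp.Matrix([w[2].diff(x2) - w[1].diff(x3),
                      w[0].diff(x3) - w[2].diff(x1),
                      w[1].diff(x1) - w[0].diff(x2)])

phi = x1**2*x2 + sp.sin(x3)                       # sample scalar field
v = sp.Matrix([x2*x3, x1*sp.exp(x3), x1**2*x2])   # sample vector field
print(rot(grad(phi)).T)          # zero vector: rot grad phi = 0, cf. (5.2.72)
print(sp.simplify(div(rot(v))))  # 0: div rot v = 0, cf. (5.2.73)
```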
5.2.12 Laplacian of a Field
The laplacian of a scalar field \phi, i.e. the Laplace operator applied to a scalar field, is defined by

\Delta \phi = grad grad \phi : 1 . (5.2.84)

The laplacian \Delta\phi of a scalar field is a scalar quantity. The laplacian of a vector field v is defined by

\Delta v = (grad grad v) : 1 , (5.2.85)
and \Delta v is a vector-valued quantity. The definition of the laplacian of a tensor field T is given by

\Delta T = (grad grad T) : 1 , (5.2.86)

and \Delta T is a tensor-valued quantity. For an arbitrary vector field v \in V and an arbitrary tensor field T \in V \otimes V the following identities hold,

div\left[grad v - (grad v)^T\right] = \Delta v - grad div v , (5.2.87)
rot rot v = grad div v - \Delta v , (5.2.88)
\Delta tr T = tr \Delta T , (5.2.89)
rot rot T = -\Delta T + grad div T + (grad div T)^T - grad grad tr T + 1\left[\Delta(tr T) - div div T\right] . (5.2.90)

Finally, if the tensor field T is symmetric and defined by T = S - 1\, tr S, with the symmetric part given by S, then the following identity holds,

rot rot T = -\Delta S + grad div S + (grad div S)^T - 1\, div div S . (5.2.91)
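Identity (5.2.88) can likewise be checked symbolically on an arbitrary sample field:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
X = [x1, x2, x3]
grad = lambda f: sp.Matrix([f.diff(xi) for xi in X])
div = lambda w: sum(w[i].diff(X[i]) for i in range(3))
lap = lambda f: sum(f.diff(xi, 2) for xi in X)
def rot(w):
    return sp.Matrix([w[2].diff(x2) - w[1].diff(x3),
                      w[0].diff(x3) - w[2].diff(x1),
                      w[1].diff(x1) - w[0].diff(x2)])

# an arbitrary smooth sample field (illustrative choice)
v = sp.Matrix([x1*x2*x3, sp.sin(x1*x3), x2**2*sp.exp(x1)])
lhs = rot(rot(v))                                       # rot rot v
rhs = grad(div(v)) - sp.Matrix([lap(v[i]) for i in range(3)])
print(sp.simplify(lhs - rhs).T)   # zero vector, confirming (5.2.88)
```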
5.3 Integral Theorems
5.3.1 Definitions
The surface integral of a tensor product of a vector field u(x) and an area vector da is to be transformed into a volume integral. The volume element dV with the surface dA is given at a point P by the position vector x in the Euclidean space E^3. The surface dA of the volume element dV is described by the six surface elements represented by the area vectors da_1, \ldots, da_6.
Figure 5.6: The volume element dV with the surface dA.
da_1 = d\Theta^2\, d\Theta^3 \left(g_3 \times g_2\right) = -d\Theta^2\, d\Theta^3\, \sqrt{g}\; g^1 = -da_4 , (5.3.1)
da_2 = d\Theta^1\, d\Theta^3 \left(g_1 \times g_3\right) = -d\Theta^1\, d\Theta^3\, \sqrt{g}\; g^2 = -da_5 , (5.3.2)

and

da_3 = d\Theta^1\, d\Theta^2 \left(g_2 \times g_1\right) = -d\Theta^1\, d\Theta^2\, \sqrt{g}\; g^3 = -da_6 . (5.3.3)

The volume of the volume element dV is given by the scalar triple product of the tangential vectors g_i associated to the curvilinear coordinates \Theta^i at the point P,

dV = \left[\left(g_1 \times g_2\right) \cdot g_3\right] d\Theta^1\, d\Theta^2\, d\Theta^3 ,
resp.

dV = \sqrt{g}\; g^3 \cdot g_3\; d\Theta^1\, d\Theta^2\, d\Theta^3 = \sqrt{g}\; d\Theta^1\, d\Theta^2\, d\Theta^3 . (5.3.4)
Let \bar{u}_i be defined as the mean value of the vector field u in the area element i; for example for i = 1 a Taylor series is derived like this,

\bar{u}_1 = u + \frac{\partial u}{\partial \Theta^2}\, \frac{d\Theta^2}{2} + \frac{\partial u}{\partial \Theta^3}\, \frac{d\Theta^3}{2} . (5.3.5)

The Taylor series for the area element i = 4 is given by

\bar{u}_4 = \bar{u}_1 + \frac{\partial \bar{u}_1}{\partial \Theta^1}\, d\Theta^1 = \bar{u}_1 + \frac{\partial}{\partial \Theta^1} \left(u + \frac{\partial u}{\partial \Theta^2}\, \frac{d\Theta^2}{2} + \frac{\partial u}{\partial \Theta^3}\, \frac{d\Theta^3}{2}\right) d\Theta^1 ,

and finally

\bar{u}_4 = \bar{u}_1 + \frac{\partial u}{\partial \Theta^1}\, d\Theta^1 + \frac{\partial^2 u}{\partial \Theta^1 \partial \Theta^2}\, \frac{d\Theta^2}{2}\, d\Theta^1 + \frac{\partial^2 u}{\partial \Theta^1 \partial \Theta^3}\, \frac{d\Theta^3}{2}\, d\Theta^1 . (5.3.6)
Considering only the linear terms for i = 4, 5, 6 implies

\bar{u}_4 = \bar{u}_1 + \frac{\partial u}{\partial \Theta^1}\, d\Theta^1 , (5.3.7)
\bar{u}_5 = \bar{u}_2 + \frac{\partial u}{\partial \Theta^2}\, d\Theta^2 , (5.3.8)

and

\bar{u}_6 = \bar{u}_3 + \frac{\partial u}{\partial \Theta^3}\, d\Theta^3 . (5.3.9)
The surface integral is approximated by the sum over the six area elements,

\int_{dA} u \otimes da = \sum_{i=1}^{6} \bar{u}_i \otimes da_i . (5.3.10)

This equation is rewritten with all six terms,

\sum_{i=1}^{6} \bar{u}_i \otimes da_i = \bar{u}_1 \otimes da_1 + \bar{u}_2 \otimes da_2 + \bar{u}_3 \otimes da_3 + \bar{u}_4 \otimes da_4 + \bar{u}_5 \otimes da_5 + \bar{u}_6 \otimes da_6 ,

and inserting equations (5.3.1)-(5.3.3),

= \left(\bar{u}_1 - \bar{u}_4\right) \otimes da_1 + \left(\bar{u}_2 - \bar{u}_5\right) \otimes da_2 + \left(\bar{u}_3 - \bar{u}_6\right) \otimes da_3 ,
and finally with equations (5.3.7)-(5.3.9),

\sum_{i=1}^{6} \bar{u}_i \otimes da_i = -\frac{\partial u}{\partial \Theta^1}\, d\Theta^1 \otimes da_1 - \frac{\partial u}{\partial \Theta^2}\, d\Theta^2 \otimes da_2 - \frac{\partial u}{\partial \Theta^3}\, d\Theta^3 \otimes da_3 . (5.3.11)

Equations (5.3.1)-(5.3.3) inserted in (5.3.11) implies

\sum_{i=1}^{6} \bar{u}_i \otimes da_i = \left(\frac{\partial u}{\partial \Theta^1} \otimes g^1 + \frac{\partial u}{\partial \Theta^2} \otimes g^2 + \frac{\partial u}{\partial \Theta^3} \otimes g^3\right) \sqrt{g}\; d\Theta^1\, d\Theta^2\, d\Theta^3 ,

with the summation convention, i = 1, \ldots, 3,

= \left(\frac{\partial u}{\partial \Theta^i} \otimes g^i\right) dV ,

and finally with the definition of the gradient,

\sum_{i=1}^{6} \bar{u}_i \otimes da_i = grad u\; dV . (5.3.12)

Comparing this result with equation (5.3.10) yields

\int_{dA} u \otimes da = \sum_{i=1}^{6} \bar{u}_i \otimes da_i = \left(\frac{\partial u}{\partial \Theta^i} \otimes g^i\right) dV = grad u\; dV . (5.3.13)
Equation (5.3.13) holds for every subvolume dV with the surface dA.

Figure 5.7: The volume, the surface and the subvolumes of a body.

If the terms of the subvolumes are summed over, then the dyadic products of the inner surfaces vanish, because every
dyadic product appears twice. Every area vector da appears once with the normal direction n and once with the opposite direction -n. The vector field u is by definition continuous, i.e. on both sides of an inner surface the value of the vector field is equal. In order to treat the whole problem it is therefore only necessary to sum (to integrate) over the whole outer surface dA with the normal unit vector n. If the summation over all subvolumes dV with the surfaces da is rewritten as an integral, like in equation (5.3.13), then for the whole volume and surface the following relation holds,

\sum_{i=1}^{n_V} \int_{dA} u \otimes da = \int_{A} u \otimes da = \int_{V} grad u\; dV , (5.3.14)

and with da = da\; n,

\sum_{i=1}^{n_V} \int_{dA} u \otimes n\; da = \int_{A} u \otimes n\; da = \int_{V} grad u\; dV . (5.3.15)

With these integral theorems it is easy to develop integral theorems for scalar fields, vector fields and tensor fields.
5.3.2 Gauss's Theorem for a Vector Field
Gauss's theorem is defined by

\int_{A} u \cdot n\; da = \int_{V} div u\; dV . (5.3.16)

Proof. Equation (5.3.15) is multiplied with the unit tensor 1 from the left-hand side,

\int_{A} 1 : u \otimes n\; da = \int_{V} 1 : grad u\; dV , (5.3.17)

with the mixed formulation of the unit tensor, 1 = g^j \otimes g_j,

1 : u \otimes n = \left(g^j \otimes g_j\right) : \left(u^k g_k \otimes n^i g_i\right) = u^k n^i g_{jk}\, \delta^j_i = u^j n_j ,
1 : u \otimes n = u \cdot n , (5.3.18)

and the scalar product of the unit tensor and the gradient of the vector field,

1 : grad u = tr(grad u) = div u . (5.3.19)

Finally, inserting equations (5.3.18) and (5.3.19) in (5.3.17) implies

\int_{A} 1 : u \otimes n\; da = \int_{A} u \cdot n\; da = \int_{V} 1 : grad u\; dV = \int_{V} div u\; dV . (5.3.20)
5.3.3 Divergence Theorem for a Tensor Field
The divergence theorem is defined by

\int_{A} T\, n\; da = \int_{V} div T\; dV . (5.3.21)

Proof. If the vector a is constant, then this implies

a \cdot \int_{A} T\, n\; da = \int_{A} a \cdot T\, n\; da = \int_{A} n \cdot T^T a\; da = \int_{A} \left(T^T a\right) \cdot n\; da . (5.3.22)

With T^T a = u being a vector it is possible to use Gauss's theorem (5.3.16); this implies

a \cdot \int_{A} T\, n\; da = \int_{A} \left(T^T a\right) \cdot n\; da = \int_{V} div\left(T^T a\right) dV .

With equation (5.2.47) and the vector a being constant,

div\left(T^T a\right) = (div T) \cdot a + T : grad a = (div T) \cdot a + 0 = a \cdot div T ,

hence

a \cdot \left[\int_{A} T\, n\; da - \int_{V} div T\; dV\right] = 0 ,

and finally, since a is arbitrary,

\int_{A} T\, n\; da = \int_{V} div T\; dV .
5.3.4 Integral Theorem for a Scalar Field
The integral theorem for a scalar field \phi is defined by

\int_{A} \phi\; da = \int_{A} \phi\, n\; da = \int_{V} grad \phi\; dV . (5.3.23)

Proof. If the vector a is constant, then the following condition holds,

a \cdot \int_{A} \phi\, n\; da = \int_{A} \phi\, a \cdot n\; da .
It is possible to use Gauss's theorem (5.3.16), because \phi\, a is a vector; this implies

a \cdot \int_{A} \phi\, n\; da = \int_{V} div(\phi\, a)\; dV . (5.3.24)

Using the identity (5.2.48), with the vector a being constant, yields

div(\phi\, a) = a \cdot grad \phi + \phi\, div a = a \cdot grad \phi + 0 = a \cdot grad \phi . (5.3.25)

Inserting relation (5.3.25) in equation (5.3.24) implies

a \cdot \int_{A} \phi\, n\; da = \int_{V} a \cdot grad \phi\; dV ,
a \cdot \left[\int_{A} \phi\, n\; da - \int_{V} grad \phi\; dV\right] = 0 ,

and finally the identity

\int_{A} \phi\, n\; da = \int_{V} grad \phi\; dV .
5.3.5 Integral of a Cross Product or Stokes's Theorem
The Stokes theorem for the cross product of a vector field u and the normal vector n is defined by

\int_{A} n \times u\; da = \int_{V} rot u\; dV . (5.3.26)

Proof. Let the vector a be constant,

a \cdot \int_{A} n \times u\; da = \int_{A} a \cdot (n \times u)\; da = \int_{A} (u \times a) \cdot n\; da ;

with the cross product u \times a being a vector it is possible to use Gauss's theorem (5.3.16),

a \cdot \int_{A} n \times u\; da = \int_{V} div(u \times a)\; dV . (5.3.27)

The identity (5.2.51) with the vector a being constant implies

div(u \times a) = a \cdot rot u - u \cdot rot a = a \cdot rot u - 0 = a \cdot rot u , (5.3.28)
inserting relation (5.3.28) in equation (5.3.27) yields

a \cdot \int_{A} n \times u\; da = \int_{V} a \cdot rot u\; dV ,

and finally

\int_{A} n \times u\; da = \int_{V} rot u\; dV .
5.3.6 Another Interpretation of Gauss's Theorem
Gauss's theorem, see equation (5.3.16), can also be established by inverting the definition of the divergence. Let u(x) be a continuous and differentiable vector field. The volume integral is approximated by a limit, see also (5.3.10), where the whole volume is approximated by subvolumes \Delta V_i,

\int_{V} u(x)\; dV = \lim_{\Delta V_i \to 0} \sum_{i} \bar{u}_i\, \Delta V_i . (5.3.29)

Let \bar{u}_i be the mean value in a subvolume \Delta V_i. The volume integral of the divergence of the vector field u, inserting the relation of equation (5.3.29), is given by

\int_{V} div u\; dV = \lim_{\Delta V_i \to 0} \sum_{i} \left(\overline{div\, u}\right)_i\, \Delta V_i . (5.3.30)

The divergence (source density) is defined by

div u = \lim_{\Delta V \to 0} \frac{\int_{a} u \cdot da}{\Delta V} . (5.3.31)

The mean value \left(\overline{div\, u}\right)_i in equation (5.3.30) is replaced by the identity of equation (5.3.31),

\left(\overline{div\, u}\right)_i = \frac{\int_{a_i} u \cdot da}{\Delta V_i} ,
\int_{V} div u\; dV = \lim_{\Delta V_i \to 0} \sum_{i} \int_{a_i} u \cdot da ,

and finally, with the summation over all subvolumes, like at the beginning of this section,

\int_{V} div u\; dV = \int_{A} u \cdot da = \int_{A} u \cdot n\; da . (5.3.32)
Chapter 6
Exercises
Chapter Table of Contents
6.1 Application of Matrix Calculus on Bars and Plane Trusses . . . . . . . . . 162
6.1.1 A Simple Statically Determinate Plane Truss . . . . . . . . . . . . . . 162
6.1.2 A Simple Statically Indeterminate Plane Truss . . . . . . . . . . . . . 164
6.1.3 Basic Relations for bars in a Local Coordinate System . . . . . . . . . 165
6.1.4 Basic Relations for bars in a Global Coordinate System . . . . . . . . . 167
6.1.5 Assembling the Global Stiffness Matrix . . . . . . . . . . . . . . . . . 168
6.1.6 Computing the Displacements . . . . . . . . . . . . . . . . . . . . . . 170
6.1.7 Computing the Forces in the bars . . . . . . . . . . . . . . . . . . . . 171
6.1.8 The Principle of Virtual Work . . . . . . . . . . . . . . . . . . . . . . 172
6.2 Calculating a Structure with the Eigenvalue Problem . . . . . . . . . . . . 174
6.2.1 The Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
6.2.2 The Equilibrium Conditions after the Excursion . . . . . . . . . . . . . 175
6.2.3 Transformation into a Special Eigenvalue Problem . . . . . . . . . . . 177
6.2.4 Solving the Special Eigenvalue Problem . . . . . . . . . . . . . . . . . 178
6.2.5 Orthogonal Vectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
6.2.6 Transformation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
6.3 Fundamentals of Tensors in Index Notation . . . . . . . . . . . . . . . . . . 182
6.3.1 The Coefficient Matrices of Tensors . . . . . . . . . . . . . . . . . . . 182
6.3.2 The Kronecker Delta and the Trace of a Matrix . . . . . . . . . . . . . 183
6.3.3 Raising and Lowering of an Index . . . . . . . . . . . . . . . . . . . . 184
6.3.4 Permutation Symbols . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
6.3.5 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
6.3.6 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
6.4 Various Products of Second Order Tensors . . . . . . . . . . . . . . . . . . 190
6.4.1 The Product of a Second Order Tensor and a Vector . . . . . . . . . . . 190
6.4.2 The Tensor Product of Two Second Order Tensors . . . . . . . . . . . 190
6.4.3 The Scalar Product of Two Second Order Tensors . . . . . . . . . . . . 190
6.4.4 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
6.4.5 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
6.5 Deformation Mappings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
6.5.1 Tensors of the Tangent Mappings . . . . . . . . . . . . . . . . . . . . 194
6.5.2 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
6.5.3 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
6.6 The Moving Trihedron, Derivatives and Space Curves . . . . . . . . . . . . 198
6.6.1 The Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
6.6.2 The Base vectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
6.6.3 The Curvature and the Torsion . . . . . . . . . . . . . . . . . . . . . . 201
6.6.4 The Christoffel Symbols . . . . . . . . . . . . . . . . . . . . . . . . . 202
6.6.5 Forces and Moments at an Arbitrary Sectional Area . . . . . . . . . . . 203
6.6.6 Forces and Moments for the Given Load . . . . . . . . . . . . . . . . . 207
6.7 Tensors, Stresses and Cylindrical Coordinates . . . . . . . . . . . . . . . . 210
6.7.1 The Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
6.7.2 Co- and Contravariant Base Vectors . . . . . . . . . . . . . . . . . . . 212
6.7.3 Coefficients of the Various Stress Tensors . . . . . . . . . . . . . . . . 213
6.7.4 Physical Components of the Contravariant Stress Tensor . . . . . . . . 215
6.7.5 Invariants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
6.7.6 Principal Stress and Principal Directions . . . . . . . . . . . . . . . . . 221
6.7.7 Deformation Energy . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
6.7.8 Normal and Shear Stress . . . . . . . . . . . . . . . . . . . . . . . . . 224
6.1 Application of Matrix Calculus on Bars and Plane Trusses
6.1.1 A Simple Statically Determinate Plane Truss
A very simple truss formed by two bars is given as in figure (6.1), loaded by an arbitrary force F in negative y-direction.

Figure 6.1: A simple statically determinate plane truss (bar I and bar II, both of length l, with sectional areas A_1, A_2, Young's moduli E_1, E_2, and inclination angle \alpha = 45^\circ).

The discrete values of the various quantities, like the Young's modulus E_i or the sectional area A_i, are of no further interest at the moment. Only the forces in the directions of the bars are to be computed. The equilibrium conditions of forces at the node 2 are
Figure 6.2: Free-body diagram for the node 2.
given in horizontal direction by

\sum F_H = 0 = -S_2 \sin\alpha + S_1 \sin\alpha , (6.1.1)

and in vertical direction by

\sum F_V = 0 = -F + S_2 \cos\alpha + S_1 \cos\alpha , (6.1.2)

see also the free-body diagram (6.2). These relations imply

S_1 = S_2 , \quad\text{and}\quad S_1 + S_2 = \frac{F}{\cos\alpha} , (6.1.3)

and finally

S_1 = S_2 = \frac{F}{2\cos\alpha} . (6.1.4)
The equilibrium conditions of forces at the node 1 are given in horizontal direction by
Figure 6.3: Free-body diagrams for the nodes 1 and 3.
\sum F_H = 0 = F_{1x} - S_1 \cos\alpha , (6.1.5)

and in vertical direction by

\sum F_V = 0 = F_{1y} - S_1 \sin\alpha , (6.1.6)

see also the right-hand side of the free-body diagram (6.3). The first of these two relations yields with equation (6.1.4)

F_{1x} = S_1 \cos\alpha = \frac{F \cos\alpha}{2 \cos\alpha} , \quad\text{and finally}\quad F_{1x} = \frac{1}{2}\, F , (6.1.7)

and the second one implies

F_{1y} = S_1 \sin\alpha = \frac{F \sin\alpha}{2 \cos\alpha} , \quad\text{and finally, with } \alpha = 45^\circ ,\quad F_{1y} = \frac{1}{2}\, F . (6.1.8)
The equilibrium conditions of forces at the node 3 are given in horizontal direction by

\sum F_H = 0 = F_{3x} - S_2 \cos\alpha , (6.1.9)

and in vertical direction by

\sum F_V = 0 = F_{3y} - S_2 \sin\alpha , (6.1.10)

see also the left-hand side of the free-body diagram (6.3). The first of these two relations yields with equation (6.1.4)

F_{3x} = S_2 \cos\alpha = \frac{F \cos\alpha}{2 \cos\alpha} , \quad\text{and finally}\quad F_{3x} = \frac{1}{2}\, F , (6.1.11)

and the second one implies

F_{3y} = S_2 \sin\alpha = \frac{F \sin\alpha}{2 \cos\alpha} , \quad\text{and finally, with } \alpha = 45^\circ ,\quad F_{3y} = \frac{1}{2}\, F . (6.1.12)
The stresses in normal direction of the bars and the nodal displacements can be computed with the relations described in one of the following sections, see also equations (6.1.17)-(6.1.19). The important thing to notice here is that there are overall 6 equations to solve, in order to compute the 6 unknown quantities F_{1x}, F_{1y}, F_{3x}, F_{3y}, S_1, and S_2. This is characteristic of a statically determinate truss, resp. system; the other case is discussed in the following section.
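The equilibrium equations (6.1.1) and (6.1.2) at node 2 can be solved as a small linear system; F = 1 and \alpha = 45^\circ are sample values:

```python
import numpy as np

F, alpha = 1.0, np.radians(45.0)   # sample load and the given angle
# equilibrium at node 2, cf. (6.1.1) and (6.1.2), as A [S1, S2]^T = b:
#    S1*sin(a) - S2*sin(a) = 0
#    S1*cos(a) + S2*cos(a) = F
A = np.array([[np.sin(alpha), -np.sin(alpha)],
              [np.cos(alpha),  np.cos(alpha)]])
b = np.array([0.0, F])
S = np.linalg.solve(A, b)
print(S)   # S1 = S2 = F / (2 cos(alpha)), cf. (6.1.4)
```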
6.1.2 A Simple Statically Indeterminate Plane Truss
Figure 6.4: A simple statically indeterminate plane truss. Given data: bars I, II, III connect the supports (nodes 1, 2, 3) to node 4; the loads F_x = 10 kN and F_y = 2 kN act at node 4; axial rigidity (EA)_i = const; angles \alpha_1 = 30^\circ, \alpha_2 = 60^\circ, \alpha_3 = 90^\circ; height h = 5.0 m.
The truss given in the sketch in figure (6.4) is statically indeterminate, i.e. it is impossible to compute all the reactive forces just with the equilibrium conditions for the forces at the different
Figure 6.5: Free-body diagrams for the nodes 2 and 4.
nodes! In order to get enough equations for computing all unknown quantities, it is necessary to use additional equations, like the ones given in equations (6.1.17)-(6.1.19). For example, the equilibrium condition of horizontal forces at node 4 is given by

\sum F_H = 0 = F_x - S_{II} \cos\alpha_2 - S_I \cos\alpha_1 , (6.1.13)

and in vertical direction by

\sum F_V = 0 = -F_y - S_{III} - S_{II} \sin\alpha_2 - S_I \sin\alpha_1 . (6.1.14)

The moment equilibrium condition at this point is of no use, because all lines of action cross the node 4 itself! The equilibrium conditions for every support contain for every node 1-3 two unknown reactive forces, one horizontal and one vertical, and one also unknown force in direction of the bar, e.g. for the node 2,

\sum F_H = 0 = F_{2x} + S_{II} \cos\alpha_2 , (6.1.15)

and

\sum F_V = 0 = F_{2y} + S_{II} \sin\alpha_2 . (6.1.16)

Finally, summarizing all possible and useful equations and all unknown quantities shows that there are overall 9 unknown quantities but only 8 equations! This result implies that it is necessary to take another way to solve this problem than using the equilibrium conditions of forces!
6.1.3 Basic Relations for bars in a Local Coordinate System
An arbitrary bar with its local coordinate system \bar{x}, \bar{y} is given in figure (6.6), with the nodal forces \bar{f}_{i\bar{x}} and \bar{f}_{i\bar{y}} at a point i with i = 1, 2 in the local coordinate system. The following relations hold
Figure 6.6: An arbitrary bar and its local coordinate system x, y.
in the local coordinate system \bar{x}, \bar{y} of an arbitrary single bar, see figure (6.6):

stresses: \sigma_{\bar{x}} = \frac{S}{A} , (6.1.17)
kinematics: \varepsilon_{\bar{x}} = \frac{d\bar{u}_{\bar{x}}}{d\bar{x}} = \frac{\Delta\bar{u}_{\bar{x}}}{l} , (6.1.18)
material law: \sigma_{\bar{x}} = E\, \varepsilon_{\bar{x}} . (6.1.19)

In order to consider the additional relations given above, it is useful to combine the nodal displacements and the nodal forces for the two nodes of the bar in the local coordinate system. The elongation in the local coordinate direction \bar{x} is given by

\Delta\bar{u}_{\bar{x}} = q^T \bar{u} = -\bar{u}_{1\bar{x}} + \bar{u}_{2\bar{x}} , (6.1.20)

and the nodal forces are given by the equilibrium conditions of forces at the nodes of the bar,

\bar{f} = q\, S , (6.1.21)

with

q = \begin{bmatrix} -1 \\ 0 \\ 1 \\ 0 \end{bmatrix} , \quad \bar{u} = \begin{bmatrix} \bar{u}_{1\bar{x}} \\ \bar{u}_{1\bar{y}} \\ \bar{u}_{2\bar{x}} \\ \bar{u}_{2\bar{y}} \end{bmatrix} , \quad\text{and}\quad \bar{f} = \begin{bmatrix} \bar{f}_{1\bar{x}} \\ \bar{f}_{1\bar{y}} \\ \bar{f}_{2\bar{x}} \\ \bar{f}_{2\bar{y}} \end{bmatrix} . (6.1.22)
The relations between stresses and displacements, see equation (6.1.17), are given by
\[ S = A\, \sigma_{\bar{x}} = \frac{EA}{l}\, \Delta\bar{u}_{\bar{x}} , \qquad (6.1.23) \]
inserting equation (6.1.23) in (6.1.21) implies
\[ \bar{f} = \frac{EA}{l}\, \bar{q}\, \bar{q}^T\, \bar{u} , \qquad (6.1.24) \]
and finally resumed as the symmetric local stiffness matrix,
\[ \bar{f} = \bar{K}\, \bar{u} , \quad\text{with}\quad
\bar{K} = \frac{EA}{l} \begin{bmatrix} 1 & 0 & -1 & 0 \\ 0 & 0 & 0 & 0 \\ -1 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} . \qquad (6.1.25) \]
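The dyadic structure of eq. (6.1.24) makes the local stiffness matrix easy to generate: it is just the scaled outer product $\bar{q}\bar{q}^T$. The following short numerical sketch (not part of the original notes) reproduces eq. (6.1.25):

```python
# Local stiffness matrix of a single bar, eq. (6.1.25): K = (EA/l) * q q^T,
# with the coefficient vector q = [-1, 0, 1, 0] from eq. (6.1.22).
def local_stiffness(EA, l):
    q = [-1.0, 0.0, 1.0, 0.0]
    return [[EA / l * qi * qj for qj in q] for qi in q]

K = local_stiffness(EA=2.0, l=4.0)
# First row is [0.5, 0, -0.5, 0]; the matrix is symmetric and singular,
# since a rigid-body translation of the bar produces no nodal forces.
```

The zero rows and columns correspond to the transverse degrees of freedom, which a single bar cannot resist in its local frame.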
6.1.4 Basic Relations for Bars in a Global Coordinate System

An arbitrary bar with its local coordinate system $\bar{x}, \bar{y}$ and a global coordinate system $x, y$
is given in figure (6.7). At a point $i$ with $i = 1, 2$ the nodal forces $\bar{f}_{i\bar{x}}, \bar{f}_{i\bar{y}}$
and the nodal displacements $\bar{u}_{i\bar{x}}, \bar{u}_{i\bar{y}}$ are defined in the local coordinate
system. It is also possible to define the nodal forces $f_{ix}, f_{iy}$ and the nodal displacements
$v_{ix}, v_{iy}$ in the global coordinate system.

Figure 6.7: An arbitrary bar in a global coordinate system.

In order to combine more than one bar, it is necessary to transform the quantities given in each
local coordinate system into one global coordinate system. This transformation of the local vector
quantities, like the nodal load vector $\bar{f}$ or the nodal displacement vector $\bar{u}$, is given by a
multiplication with the so-called transformation matrix $\bar{Q}$, which is an orthogonal matrix, given by
\[ \bar{a} = \bar{Q}\, a , \quad\text{with}\quad
\bar{Q} = \begin{bmatrix} \cos\alpha & \sin\alpha \\ -\sin\alpha & \cos\alpha \end{bmatrix} . \qquad (6.1.26) \]
Because it is necessary to transform the local quantities for more than one node, the transforma-
tion matrix $Q$ is composed of one submatrix $\bar{Q}$ for every node,
\[ Q = \begin{bmatrix} \bar{Q} & 0 \\ 0 & \bar{Q} \end{bmatrix}
= \begin{bmatrix} \cos\alpha & \sin\alpha & 0 & 0 \\ -\sin\alpha & \cos\alpha & 0 & 0 \\
0 & 0 & \cos\alpha & \sin\alpha \\ 0 & 0 & -\sin\alpha & \cos\alpha \end{bmatrix} . \qquad (6.1.27) \]
The local quantities $\bar{f}$ and $\bar{u}$ in equation (6.1.25) are replaced by the following expressions,
\[ \bar{u} = Q\, v , \quad\text{with}\quad
v = \begin{bmatrix} v_{1x} \\ v_{1y} \\ v_{2x} \\ v_{2y} \end{bmatrix} , \qquad (6.1.28) \]
\[ \bar{f} = Q\, f , \quad\text{with}\quad
f = \begin{bmatrix} f_{1x} \\ f_{1y} \\ f_{2x} \\ f_{2y} \end{bmatrix} . \qquad (6.1.29) \]
Inserting these relations in equation (6.1.25) and multiplying with the inverse of the transformation
matrix from the left-hand side yields, in the global coordinate system,
\[ f = Q^{-1}\, \bar{K}\, Q\, v = K\, v , \qquad (6.1.30) \]
with the symmetric local stiffness matrix given in the global coordinate system by
\[ K = \frac{EA}{l} \begin{bmatrix}
\cos^2\alpha & \sin\alpha\cos\alpha & -\cos^2\alpha & -\sin\alpha\cos\alpha \\
\sin\alpha\cos\alpha & \sin^2\alpha & -\sin\alpha\cos\alpha & -\sin^2\alpha \\
-\cos^2\alpha & -\sin\alpha\cos\alpha & \cos^2\alpha & \sin\alpha\cos\alpha \\
-\sin\alpha\cos\alpha & -\sin^2\alpha & \sin\alpha\cos\alpha & \sin^2\alpha
\end{bmatrix} . \qquad (6.1.31) \]
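Equation (6.1.31) can be reproduced by carrying out the congruence transformation $K = Q^T \bar{K} Q$ of eq. (6.1.30) explicitly, using $Q^{-1} = Q^T$ for the orthogonal $Q$. A small check in Python (not part of the original notes, assuming the matrices as reconstructed above):

```python
import math

def matmul(A, B):
    # plain dense matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def global_stiffness(EA, l, alpha):
    c, s = math.cos(alpha), math.sin(alpha)
    Qb = [[c, s], [-s, c]]                       # eq. (6.1.26)
    Q = [[Qb[i % 2][j % 2] if i // 2 == j // 2 else 0.0
          for j in range(4)] for i in range(4)]  # block diagonal, eq. (6.1.27)
    q = [-1.0, 0.0, 1.0, 0.0]
    Kloc = [[EA / l * qi * qj for qj in q] for qi in q]  # eq. (6.1.25)
    return matmul(transpose(Q), matmul(Kloc, Q))         # K = Q^T Kbar Q

K = global_stiffness(EA=1.0, l=1.0, alpha=math.radians(30))
# K[0][0] should equal cos^2(30 deg) = 3/4, cf. eq. (6.1.31)
```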
6.1.5 Assembling the Global Stiffness Matrix

There are two different ways of assembling the global stiffness matrix. The first way considers
the boundary conditions at the beginning; the second one considers the boundary conditions only
after the complete global stiffness matrix for all nodes is assembled. In the first way the global
stiffness matrix $K$ is assembled by summarizing relations like in equations (6.1.28)-(6.1.31) for
all used elements I-III, and all given boundary conditions, like the ones given by the supports
at the nodes 1-3,
\[ v_{1x} = v_{2x} = v_{3x} = 0 , \qquad (6.1.32) \]
\[ v_{1y} = v_{2y} = v_{3y} = 0 . \qquad (6.1.33) \]
With these conditions the rigid body movement is eliminated from the system of equations, resp.
the assembled global stiffness matrix; then only the unknown displacements at node 4 remain,
\[ v_{4x} , \quad\text{and}\quad v_{4y} . \qquad (6.1.34) \]
This implies that it is sufficient to determine the equilibrium conditions only at node 4. The
result is that it is sufficient to consider only one submatrix $\bar{K}_i$ for every element I-III. For
example the complete equilibrium conditions for bar III are given by
\[ \begin{bmatrix} f_{3x} \\ f_{3y} \\ f_{4x} \\ f_{4y} \end{bmatrix}_{III}
= \begin{bmatrix} \cdots & \cdots \\ \cdots & \bar{K}_3 \end{bmatrix}
\begin{bmatrix} v_{3x} \\ v_{3y} \\ v_{4x} \\ v_{4y} \end{bmatrix}_{III} ,
\quad\text{with}\quad
\begin{bmatrix} v_{3x} \\ v_{3y} \\ v_{4x} \\ v_{4y} \end{bmatrix}_{III}
= \begin{bmatrix} 0 \\ 0 \\ v_{4x} \\ v_{4y} \end{bmatrix}_{III} . \qquad (6.1.35) \]
Finally the equilibrium conditions at node 4, summing over the bars $i = $ I-III, are given by
\[ \sum_{i=1}^{3} \begin{bmatrix} f^i_{4x} \\ f^i_{4y} \end{bmatrix}
= \begin{bmatrix} F_x \\ F_y \end{bmatrix}
= \sum_{i=1}^{3} \left[ \bar{K}_i \right] \begin{bmatrix} v_{4x} \\ v_{4y} \end{bmatrix} , \qquad (6.1.36) \]
and in matrix notation given by
\[ P = K\, v , \qquad (6.1.37) \]
with the so-called compatibility conditions at node 4 given by
\[ v^{I}_{4x} = v^{II}_{4x} = v^{III}_{4x} , \qquad (6.1.38) \]
\[ v^{I}_{4y} = v^{II}_{4y} = v^{III}_{4y} . \qquad (6.1.39) \]
By using this way of assembling the reduced global stiffness matrix, the boundary conditions are
already implemented in every element stiffness matrix. The second way to assemble the reduced
global stiffness matrix starts with the unreduced global stiffness matrix given by
\[ \sum_{i=1}^{3}
\begin{bmatrix} f^i_{1x} \\ f^i_{1y} \\ f^i_{2x} \\ f^i_{2y} \\ f^i_{3x} \\ f^i_{3y} \\ f^i_{4x} \\ f^i_{4y} \end{bmatrix}
= \begin{bmatrix}
K_{11} & 0 & 0 & K_{14} \\
0 & K_{22} & 0 & K_{24} \\
0 & 0 & K_{33} & K_{34} \\
K_{41} & K_{42} & K_{43} & K_{44}
\end{bmatrix}
\begin{bmatrix} v_{1x} \\ v_{1y} \\ v_{2x} \\ v_{2y} \\ v_{3x} \\ v_{3y} \\ v_{4x} \\ v_{4y} \end{bmatrix}
= \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ F_x \\ F_y \end{bmatrix} , \qquad (6.1.40) \]
where the $K_{ij}$ denote $2 \times 2$ submatrices,
and in matrix notation given by
\[ K\, v = P . \qquad (6.1.41) \]
Each element, resp. each bar, could be described by a combination $i, j$ of the numbers of the nodes
used in this special element. For example for the bar (element) III the submatrices $K_{33}$, $K_{34}$,
$K_{43}$, and a part of $K_{44}$, like in equation (6.1.36), are implemented in the unreduced global
stiffness matrix. After inserting the submatrices $K_{ij}$ for the various elements and considering the
boundary conditions given by equations (6.1.32)-(6.1.33), the reduced global stiffness matrix is given by
\[ P = K\, v , \qquad (6.1.42) \]
resp.
\[ \begin{bmatrix} F_x \\ F_y \end{bmatrix}
= \frac{EA \left( 3 + \sqrt{3} \right)}{8h}
\begin{bmatrix} 1 & 1 \\ 1 & 3 \end{bmatrix}
\begin{bmatrix} v_{4x} \\ v_{4y} \end{bmatrix} , \qquad (6.1.43) \]
see also equations (6.1.36) and (6.1.37) of the first way. But this computed reduced global stiff-
ness matrix is not yet the desired result, because the nodal displacements and forces are the desired
quantities.
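The reduced $2 \times 2$ system (6.1.43) can be verified numerically. The bar lengths $l_I = 2h$, $l_{II} = 2h/\sqrt{3}$, $l_{III} = h$ and the angles $\alpha_I = 30°$, $\alpha_{II} = 60°$, $\alpha_{III} = 90°$ used below are an assumption, inferred from the coefficients of (6.1.43); the actual geometry is defined in the notes' figure 6.4. A sketch in Python:

```python
import math

def bar_contribution(EA, l, alpha):
    # 2x2 stiffness contribution of one bar at the free node, cf. eq. (6.1.36)
    c, s = math.cos(alpha), math.sin(alpha)
    f = EA / l
    return [[f * c * c, f * c * s], [f * c * s, f * s * s]]

EA, h = 1.0, 1.0
bars = [(2 * h, math.radians(30)),                 # bar I   (assumed geometry)
        (2 * h / math.sqrt(3), math.radians(60)),  # bar II  (assumed geometry)
        (h, math.radians(90))]                     # bar III (vertical)
K = [[0.0, 0.0], [0.0, 0.0]]
for l, a in bars:
    kb = bar_contribution(EA, l, a)
    for i in range(2):
        for j in range(2):
            K[i][j] += kb[i][j]

# Solve K v = P for P = [10, 2] kN by direct 2x2 inversion, cf. eq. (6.1.44)
det = K[0][0] * K[1][1] - K[0][1] * K[1][0]
P = [10.0, 2.0]
v = [(K[1][1] * P[0] - K[0][1] * P[1]) / det,
     (-K[1][0] * P[0] + K[0][0] * P[1]) / det]
S_III = EA / h * v[1]   # force in the vertical bar III, cf. eq. (6.1.53)
```

With $EA = h = 1$ the assembled $K$ equals $(3+\sqrt{3})/8 \cdot [[1,1],[1,3]]$ and $S_{III} \approx -6.762$, matching (6.1.43) and (6.1.53).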
6.1.6 Computing the Displacements

The nodal displacements are computed by inverting the relation (6.1.43),
\[ v = K^{-1} P . \qquad (6.1.44) \]
The inversion of a $2 \times 2$-matrix is trivial and given by
\[ K^{-1} = \frac{1}{\det K}\, \operatorname{adj} K
= \frac{(8h)^2}{2 \left( 3 + \sqrt{3} \right)^2 (EA)^2}\,
\frac{EA \left( 3 + \sqrt{3} \right)}{8h}
\begin{bmatrix} 3 & -1 \\ -1 & 1 \end{bmatrix} , \qquad (6.1.45) \]
and finally the inverse of the stiffness matrix is given by
\[ K^{-1} = \frac{4h}{EA \left( 3 + \sqrt{3} \right)}
\begin{bmatrix} 3 & -1 \\ -1 & 1 \end{bmatrix} . \qquad (6.1.46) \]
The load vector $P$ at node 4, see figure (6.4), is given by
\[ P = \begin{bmatrix} 10 \\ 2 \end{bmatrix} , \qquad (6.1.47) \]
and by inserting equations (6.1.46) and (6.1.47) in relation (6.1.44) the nodal displacements at
node 4 are given by
\[ v = \begin{bmatrix} v_{4x} \\ v_{4y} \end{bmatrix}
= \frac{4h}{EA \left( 3 + \sqrt{3} \right)}
\begin{bmatrix} 28 \\ -8 \end{bmatrix} . \qquad (6.1.48) \]
6.1.7 Computing the Forces in the Bars

The forces in the various bars are computed by solving the relations given by the equations
(6.1.28)-(6.1.31) for each element $i$, resp. for each bar,
\[ f^i = K^i\, v^i . \qquad (6.1.49) \]
For example for the bar III, with $\alpha = 90^\circ$, the symmetric local stiffness matrix and the nodal
displacements are given by
\[ K^3 = \frac{EA}{h}
\begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & -1 \\ 0 & 0 & 0 & 0 \\ 0 & -1 & 0 & 1 \end{bmatrix} , \quad
v^3 = \begin{bmatrix} 0 \\ 0 \\ v_{4x} \\ v_{4y} \end{bmatrix} , \qquad (6.1.50) \]
with $\sin 90^\circ = 1$ and $\cos 90^\circ = 0$ in equation (6.1.31). The forces in the bars are given in the
global coordinate system, see equation (6.1.30), by
\[ f^3 = K^3\, v^3
= \frac{4}{3 + \sqrt{3}} \begin{bmatrix} 0 \\ 8 \\ 0 \\ -8 \end{bmatrix}_{III}
= \begin{bmatrix} 0 \\ 6.762 \\ 0 \\ -6.762 \end{bmatrix}_{III}
= \begin{bmatrix} f_{3x} \\ f_{3y} \\ f_{4x} \\ f_{4y} \end{bmatrix}_{III} , \qquad (6.1.51) \]
and in the local coordinate system associated to the bar III, see equation (6.1.29), by
\[ \bar{f}^3 = Q^3\, f^3
= \begin{bmatrix} \bar{f}_{3\bar{x}} \\ \bar{f}_{3\bar{y}} \\ \bar{f}_{4\bar{x}} \\ \bar{f}_{4\bar{y}} \end{bmatrix}
= \begin{bmatrix} f_{3y} \\ -f_{3x} \\ f_{4y} \\ -f_{4x} \end{bmatrix}
= \begin{bmatrix} 6.762 \\ 0 \\ -6.762 \\ 0 \end{bmatrix} . \qquad (6.1.52) \]
Comparing this result with the relation (6.1.21) implies finally the force $S_{III}$ in the direction of the
bar,
\[ S_{III} = -\bar{f}_{3\bar{x}} = \bar{f}_{4\bar{x}} = -6.762\,\text{kN} , \qquad (6.1.53) \]
and for the bars I and II,
\[ S_I = 8.56\,\text{kN} , \quad\text{and}\quad S_{II} = 5.17\,\text{kN} . \qquad (6.1.54) \]
Comparing these results as a check with the equilibrium conditions given by the equations (6.1.13)-
(6.1.14), in horizontal direction,
\[ \sum F_H = 0 = F_x - S_{II}\cos\alpha_2 - S_I\cos\alpha_1
= 10.0 - 5.17 \cdot \tfrac{1}{2} - 8.56 \cdot \tfrac{\sqrt{3}}{2}
= 1.8 \cdot 10^{-3} \approx 0 , \qquad (6.1.55) \]
and in vertical direction,
\[ \sum F_V = 0 = F_y - S_{III} - S_{II}\sin\alpha_2 - S_I\sin\alpha_1
= 2.0 + 6.762 - 5.17 \cdot \tfrac{\sqrt{3}}{2} - 8.56 \cdot \tfrac{1}{2}
= 4.7 \cdot 10^{-3} \approx 0 . \qquad (6.1.56) \]
6.1.8 The Principle of Virtual Work

This final section will give a small outlook on the use of the matrix calculus described above. The
virtual work for a bar under a constant line load (see also the refresher course on strength of
materials), also called the weak form of equilibrium, is given by
\[ \delta W = -\int N\, \delta\varepsilon_{\bar{x}}\, d\bar{x} + \int p\, \delta\bar{u}_{\bar{x}}\, d\bar{x} = 0 . \qquad (6.1.57) \]
The force in normal direction of a bar, see also equations (6.1.17)-(6.1.19), is given by
\[ N = EA\, \varepsilon_{\bar{x}} . \qquad (6.1.58) \]
With this relation the equation (6.1.57) is rewritten,
\[ \delta W = -\int \delta\varepsilon_{\bar{x}}\, EA\, \varepsilon_{\bar{x}}\, d\bar{x}
+ \int p\, \delta\bar{u}_{\bar{x}}\, d\bar{x} = 0 . \qquad (6.1.59) \]
Like the vectors given by the equations (6.1.22), the various quantities w.r.t. the local
variable $\bar{x}$ in equation (6.1.57) could be described by

displacement \( \quad \bar{u}_{\bar{x}} = \bar{q}^T \bar{u} , \qquad (6.1.60) \)

strain \( \quad \bar{u}_{\bar{x},\bar{x}} = \varepsilon_{\bar{x}} = \bar{q}^T_{,\bar{x}}\, \bar{u} , \qquad (6.1.61) \)

virtual displacement \( \quad \delta\bar{u}_{\bar{x}} = \bar{q}^T \delta\bar{u} , \qquad (6.1.62) \)

virtual strain \( \quad \delta\bar{u}_{\bar{x},\bar{x}} = \delta\varepsilon_{\bar{x}} = \bar{q}^T_{,\bar{x}}\, \delta\bar{u} , \qquad (6.1.63) \)

and the constant nodal values given by the vector $\bar{u}$. The vectors $\bar{q}$ are the so-called shape
functions, but in this very simple case they include only constant values, too. In general these
shape functions depend on the position vector, for example in this case on the local variable
$\bar{x}$. These are some of the basic assumptions for finite elements. Inserting the relations given by
(6.1.60)-(6.1.63) in equation (6.1.59), the virtual work in one element, resp. one bar, is given by
\[ \delta W = -\int \left( \bar{q}^T_{,\bar{x}}\, \delta\bar{u} \right)^T EA\, \bar{q}^T_{,\bar{x}}\, \bar{u}\, d\bar{x}
+ \int p\, \bar{q}^T \delta\bar{u}\, d\bar{x} = 0 \qquad (6.1.64) \]
\[ = -\,\delta\bar{u}^T \left[ \int \bar{q}_{,\bar{x}}\, EA\, \bar{q}^T_{,\bar{x}}\, d\bar{x} \right] \bar{u}
+ \left[ \int p\, \bar{q}^T\, d\bar{x} \right] \delta\bar{u} = 0 . \qquad (6.1.65) \]
These integrals just describe one element, resp. one bar, but if a summation over more elements
is introduced like this,
\[ \sum_{i=1}^{3} \delta W_i
= \sum_{i=1}^{3} \left\{ -\left( \delta\bar{u}^i \right)^T
\left[ \int \bar{q}_{,\bar{x}}\, EA\, \bar{q}^T_{,\bar{x}}\, d\bar{x} \right] \bar{u}^i
+ \left[ \int p^i\, \bar{q}^T\, d\bar{x} \right] \delta\bar{u}^i \right\} = 0 , \qquad (6.1.66) \]
and finally like this,
\[ \sum_{i=1}^{3} \left\{ \left( \delta\bar{u}^i \right)^T
\left[ \int \bar{q}_{,\bar{x}}\, EA\, \bar{q}^T_{,\bar{x}}\, d\bar{x} \right] \bar{u}^i \right\}
= \sum_{i=1}^{3} \left\{ \left[ \int p^i\, \bar{q}^T\, d\bar{x} \right] \delta\bar{u}^i \right\} . \qquad (6.1.67) \]
It is easy to see that the left-hand side is very similar to the stiffness matrices, and that the right-
hand side is very similar to the load vectors in equations (6.1.36) and (6.1.37), or (6.1.42) and
(6.1.43). This simple example shows the close relations between the principle of virtual work,
the finite element methods, and the matrix calculus described above.
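The element stiffness integral on the left-hand side of (6.1.67) can be evaluated for one concrete choice of shape functions. The linear interpolation below is an assumption for illustration only (the notes keep $\bar{q}$ abstract); its constant derivative reproduces the pattern of eq. (6.1.25):

```python
# Evaluate K = integral of q_,x * EA * q_,x^T dx for linear axial shape
# functions N1 = 1 - x/l, N2 = x/l (an assumed, standard choice),
# interleaved with the transverse dofs as in eq. (6.1.22).
EA, l = 1.0, 2.0
dq = [-1.0 / l, 0.0, 1.0 / l, 0.0]   # q_,x is constant for linear shapes

n = 100                              # midpoint rule; exact for a constant integrand
dx = l / n
K = [[0.0] * 4 for _ in range(4)]
for _ in range(n):
    for i in range(4):
        for j in range(4):
            K[i][j] += dq[i] * EA * dq[j] * dx
# K equals (EA/l) * [[1,0,-1,0],[0,0,0,0],[-1,0,1,0],[0,0,0,0]]
```

For higher-order shape functions the integrand is no longer constant and a proper quadrature rule would be needed; the assembly structure, however, stays the same.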
6.2 Calculating a Structure with the Eigenvalue Problem

6.2.1 The Problem

Establish the homogeneous system of equations for the structure of rigid bars, see sketch (6.8),
in order to determine the critical load $F_c = F_{critical}$! Assume that the structure is geometrically
linear, i.e. the angles of excursion are so small that $\cos\varphi \approx 1$ and $\sin\varphi \approx \varphi$
are good approximations. The given values are
\[ k_1 = k , \quad k_2 = \tfrac{1}{2} k , \quad k = 10\, \tfrac{\text{kN}}{\text{cm}} ,
\quad\text{and}\quad l = 200\,\text{cm} . \]
Figure 6.8: The given structure of rigid bars.
- Rewrite the system of equations so that the general eigenvalue problem for the critical load
$F_c$ is given by
\[ A\, x = F_c\, B\, x . \]
- Transform the general eigenvalue problem into a special eigenvalue problem.
- Calculate the eigenvalues, i.e. the critical loads $F_c$, and the associated eigenvectors.
- Check if the eigenvectors are orthogonal to each other.
- Transform the equation system in such a way that it is possible to compute the Rayleigh
quotient. What quantity could be estimated with the Rayleigh quotient?
6.2.2 The Equilibrium Conditions after the Excursion

In a first step the relations between the variables $x_1, x_2$ and the reactive forces $F_{Ay}$ and $F_{By}$ in
the supports are solved. For this purpose the equilibrium conditions of moments are established
for two subsystems, see sketch (6.9). After that, in a second step, the equilibrium conditions for
the whole system are established.

Figure 6.9: The free-body diagrams of the subsystems left of node C, and right of node D after
the excursion.

The moment equation w.r.t. the node D of the subsystem on
the right-hand side of node D after the excursion implies
\[ \sum M_D = 0 = -F_{By}\, \tfrac{3}{2} l + F_c\, x_2
\quad\Rightarrow\quad F_{By} = \frac{2 F_c}{3l}\, x_2 , \qquad (6.2.1) \]
and the moment equation w.r.t. the node C of the subsystem on the left-hand side of node C after
the excursion implies, with the following relation (6.2.3),
\[ \sum M_C = 0 = -F_{Ay}\, \tfrac{3}{2} l + F_{Ax}\, x_1
\quad\Rightarrow\quad F_{Ay} = \frac{2 F_{Ax}}{3l}\, x_1 = \frac{2 F_c}{3l}\, x_1 . \qquad (6.2.2) \]
At any time, for any possible excursion, and for any possible load $F_c$ the complete system, cf.
figure (6.10), must satisfy the equilibrium conditions. The equilibrium condition of forces in horizontal
direction after the excursion is given by
\[ \sum F_H = 0 = F_{Ax} - F_c \quad\Rightarrow\quad F_{Ax} = F_c . \qquad (6.2.3) \]

Figure 6.10: The free-body diagram of the complete structure after the excursion.

The moment equation w.r.t. the node A for the complete system implies
\[ \sum M_A = 0 = F_{By}\, 5l - k_2 x_2\, \tfrac{7}{2} l - k_1 x_1\, \tfrac{3}{2} l , \qquad (6.2.4) \]
with (6.2.1) and $k_2 = \tfrac{1}{2} k$,
\[ 0 = \frac{2 F_c}{3l}\, x_2\, 5l - k x_2\, \tfrac{7}{4} l - k x_1\, \tfrac{3}{2} l , \]
and finally
\[ \tfrac{3}{2} k x_1 + \tfrac{7}{4} k x_2 = \frac{10}{3}\, \frac{F_c}{l}\, x_2 . \qquad (6.2.5) \]
The equilibrium of forces in vertical direction implies
\[ \sum F_V = 0 = F_{Ay} + F_{By} - k_2 x_2 - k_1 x_1 , \qquad (6.2.6) \]
with (6.2.1), (6.2.2), $k_1 = k$, and $k_2 = \tfrac{1}{2} k$,
\[ 0 = \frac{2 F_c}{3l}\, x_1 + \frac{2 F_c}{3l}\, x_2 - \tfrac{1}{2} k x_2 - k x_1 , \]
and finally
\[ k x_1 + \tfrac{1}{2} k x_2 = \frac{2}{3}\, \frac{F_c}{l} \left( x_1 + x_2 \right) . \qquad (6.2.7) \]
The relations (6.2.5) and (6.2.7) are combined in a system of equations, given by
\[ \begin{bmatrix} \tfrac{3}{2} k & \tfrac{7}{4} k \\ k & \tfrac{1}{2} k \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}
= F_c \begin{bmatrix} 0 & \tfrac{10}{3l} \\ \tfrac{2}{3l} & \tfrac{2}{3l} \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} , \qquad (6.2.8) \]
or in matrix notation
\[ A\, x = F_c\, B\, x . \qquad (6.2.9) \]
This equation system is a general eigenvalue problem, with the $F_{ci}$ being the eigenvalues and
the $x^i_0$ being the eigenvectors.
6.2.3 Transformation into a Special Eigenvalue Problem

In order to solve this general eigenvalue problem with the aid of the characteristic equation it is
necessary to transform the general eigenvalue problem into a special eigenvalue problem. Thus
the equation (6.2.9) is multiplied with the inverse of matrix $B$ from the left-hand side,
\[ B^{-1}\, \big|\quad A\, x = F_c\, B\, x , \qquad (6.2.10) \]
\[ B^{-1} A\, x = F_c\, B^{-1} B\, x , \]
after this multiplication both terms are rewritten on one side and the vector $x$ is factored out,
\[ 0 = B^{-1} A\, x - F_c\, 1\, x , \]
and finally the special eigenvalue problem is given by
\[ 0 = \left( C - F_c\, 1 \right) x , \quad\text{with}\quad C = B^{-1} A . \qquad (6.2.11) \]
The inverse of matrix $B$ is assumed as
\[ B^{-1} = \begin{bmatrix} a & b \\ c & d \end{bmatrix} , \qquad (6.2.12) \]
and the following relations must hold,
\[ B^{-1} B = 1 , \quad\text{resp.}\quad
\begin{bmatrix} a & b \\ c & d \end{bmatrix}
\begin{bmatrix} 0 & \tfrac{10}{3l} \\ \tfrac{2}{3l} & \tfrac{2}{3l} \end{bmatrix}
= \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} . \qquad (6.2.13) \]
This simple inversion implies
\[ b = \tfrac{3}{2} l , \qquad (6.2.14) \]
\[ d = 0 , \qquad (6.2.15) \]
\[ \tfrac{10}{3l}\, a + \tfrac{2}{3l}\, b = 0
\;\Rightarrow\; a = -\tfrac{1}{5} b \;\Rightarrow\; a = -\tfrac{3}{10} l , \qquad (6.2.16) \]
\[ \tfrac{10}{3l}\, c + \tfrac{2}{3l}\, d = 1
\;\Rightarrow\; c = \tfrac{3}{10} l , \qquad (6.2.17) \]
and finally the inverse $B^{-1}$ is given by
\[ B^{-1} = \begin{bmatrix} -\tfrac{3}{10} l & \tfrac{3}{2} l \\ \tfrac{3}{10} l & 0 \end{bmatrix} . \qquad (6.2.18) \]
The matrix $C$ for the special eigenvalue problem is computed by the multiplication of the two
$2 \times 2$-matrices $B^{-1}$ and $A$ like this,
\[ C = B^{-1} A
= \begin{bmatrix} -\tfrac{3}{10} l & \tfrac{3}{2} l \\ \tfrac{3}{10} l & 0 \end{bmatrix}
\begin{bmatrix} \tfrac{3}{2} k & \tfrac{7}{4} k \\ k & \tfrac{1}{2} k \end{bmatrix}
= \begin{bmatrix} \tfrac{21}{20} kl & \tfrac{9}{40} kl \\ \tfrac{9}{20} kl & \tfrac{21}{40} kl \end{bmatrix} , \]
and finally the matrix $C$ for the special eigenvalue problem is given by
\[ C = \frac{3}{40}\, kl \begin{bmatrix} 14 & 3 \\ 6 & 7 \end{bmatrix} . \qquad (6.2.19) \]
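The inversion of $B$, the product $B^{-1}A$, and the eigenvalues obtained below can all be checked numerically; for a $2 \times 2$ matrix the eigenvalues follow from the characteristic polynomial. A quick check (not part of the original notes):

```python
import math

k, l = 10.0, 200.0                         # kN/cm and cm, as given
A = [[1.5 * k, 1.75 * k], [k, 0.5 * k]]
B = [[0.0, 10 / (3 * l)], [2 / (3 * l), 2 / (3 * l)]]

detB = B[0][0] * B[1][1] - B[0][1] * B[1][0]
Binv = [[B[1][1] / detB, -B[0][1] / detB],
        [-B[1][0] / detB, B[0][0] / detB]]
C = [[sum(Binv[i][m] * A[m][j] for m in range(2)) for j in range(2)]
     for i in range(2)]

# Eigenvalues from F^2 - tr(C) F + det(C) = 0, cf. eq. (6.2.22)
tr = C[0][0] + C[1][1]
detC = C[0][0] * C[1][1] - C[0][1] * C[1][0]
disc = math.sqrt(tr * tr / 4 - detC)
F_c1, F_c2 = tr / 2 + disc, tr / 2 - disc   # 2400 kN and 750 kN
```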
6.2.4 Solving the Special Eigenvalue Problem

In order to solve the special eigenvalue problem, the characteristic equation is set up by comput-
ing the determinant of equation (6.2.11),
\[ \det \left( C - F_c\, 1 \right) = 0 , \qquad (6.2.20) \]
resp. in complete notation,
\[ \det \begin{bmatrix} \tfrac{21}{20} kl - F_c & \tfrac{9}{40} kl \\
\tfrac{9}{20} kl & \tfrac{21}{40} kl - F_c \end{bmatrix} = 0 . \qquad (6.2.21) \]
Computing the determinant yields
\[ \left( \tfrac{21}{20} kl - F_c \right) \left( \tfrac{21}{40} kl - F_c \right)
- \tfrac{81}{800} k^2 l^2 = 0 , \]
\[ \tfrac{441}{800} k^2 l^2 - \tfrac{21}{20} kl F_c - \tfrac{21}{40} kl F_c + F_c^2
- \tfrac{81}{800} k^2 l^2 = 0 , \]
and finally implies the quadratic equation,
\[ \tfrac{360}{800} k^2 l^2 - \tfrac{63}{40} kl F_c + F_c^2 = 0 . \qquad (6.2.22) \]
Solving this simple quadratic equation is no problem,
\[ F_{c1/2} = \tfrac{63}{80} kl \pm \tfrac{1}{2}
\sqrt{ \left( \tfrac{63}{40} \right)^2 k^2 l^2 - 4 \cdot \tfrac{360}{800}\, k^2 l^2 } , \]
\[ F_{c1/2} = \tfrac{63}{80} kl \pm \sqrt{ \tfrac{1089}{6400} }\, kl , \]
\[ F_{c1/2} = \tfrac{63}{80} kl \pm \tfrac{33}{80} kl , \qquad (6.2.23) \]
and finally implies the two real eigenvalues,
\[ F_{c1} = \tfrac{6}{5} kl = 2400\,\text{kN} , \quad\text{and}\quad
F_{c2} = \tfrac{3}{8} kl = 750\,\text{kN} . \qquad (6.2.24) \]
The eigenvectors $x^1_0$, $x^2_0$ are computed by inserting the eigenvalues $F_{c1}$ and $F_{c2}$ in equation
(6.2.11), given by
\[ \left( C - F_{ci}\, 1 \right) x^i_0 = 0 , \qquad (6.2.25) \]
and in complete notation by
\[ \begin{bmatrix} C_{11} - F_{ci} & C_{12} \\ C_{21} & C_{22} - F_{ci} \end{bmatrix}
\begin{bmatrix} x^i_{01} \\ x^i_{02} \end{bmatrix}
= \begin{bmatrix} 0 \\ 0 \end{bmatrix} . \qquad (6.2.26) \]
It is possible to choose for each eigenvalue $F_{ci}$ the first component of the associated eigenvec-
tor like this,
\[ x^i_{01} = 1 , \qquad (6.2.27) \]
and after this to compute the second components of the eigenvectors,
\[ x^i_{02} = -\frac{C_{11} - F_{ci}}{C_{12}}\, x^i_{01} , \quad\text{resp.}\quad
x^i_{02} = -\frac{C_{11} - F_{ci}}{C_{12}} , \qquad (6.2.28) \]
or
\[ x^i_{02} = -\frac{C_{21}}{C_{22} - F_{ci}}\, x^i_{01} , \quad\text{resp.}\quad
x^i_{02} = -\frac{C_{21}}{C_{22} - F_{ci}} . \qquad (6.2.29) \]
Inserting the first eigenvalue $F_{c1}$ in equation (6.2.28) implies the second component $x^1_{02}$ of the
first eigenvector,
\[ x^1_{02} = -\frac{\tfrac{21}{20} kl - \tfrac{6}{5} kl}{\tfrac{9}{40} kl}
\quad\Rightarrow\quad x^1_{02} = \tfrac{2}{3} , \qquad (6.2.30) \]
and for the second eigenvalue $F_{c2}$ the second component $x^2_{02}$ of the second eigenvector is given
by
\[ x^2_{02} = -\frac{\tfrac{21}{20} kl - \tfrac{3}{8} kl}{\tfrac{9}{40} kl}
\quad\Rightarrow\quad x^2_{02} = -3 . \qquad (6.2.31) \]
With these results the eigenvectors are finally given by
\[ x^1_0 = \begin{bmatrix} 1 \\ \tfrac{2}{3} \end{bmatrix} , \quad\text{and}\quad
x^2_0 = \begin{bmatrix} 1 \\ -3 \end{bmatrix} . \qquad (6.2.32) \]
6.2.5 Orthogonal Vectors

It is sufficient to compute the scalar product of two arbitrary vectors in order to check if these
two vectors are orthogonal to each other, i.e.
\[ x_1 \perp x_2 , \quad\text{resp.}\quad x_1 \cdot x_2 = 0 . \qquad (6.2.33) \]
In this special case the scalar product of the two eigenvectors is given by
\[ x^1_0 \cdot x^2_0
= \begin{bmatrix} 1 \\ \tfrac{2}{3} \end{bmatrix} \cdot \begin{bmatrix} 1 \\ -3 \end{bmatrix}
= 1 - 2 = -1 \neq 0 , \qquad (6.2.34) \]
i.e. the eigenvectors are not orthogonal to each other. The eigenvectors for different eigenvalues
are only orthogonal if the matrix $C$ of the special eigenvalue problem is symmetric (see script,
section about matrix eigenvalue problems). If the matrix of a special eigenvalue problem is
symmetric, all eigenvalues are real and all eigenvectors are orthogonal. In this case both eigen-
values $F_{c1}$, $F_{c2}$ are real, but the matrix $C$ is not symmetric, and for that reason the eigenvectors
$x^1_0$, $x^2_0$ are not orthogonal.
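The non-orthogonality is easy to confirm numerically, and one can also verify that $x^1_0$, $x^2_0$ really are eigenvectors of the unsymmetric $C$. A small sketch (not part of the original notes):

```python
k, l = 10.0, 200.0
kl = k * l
C = [[21 / 20 * kl, 9 / 40 * kl], [9 / 20 * kl, 21 / 40 * kl]]
x1, x2 = [1.0, 2.0 / 3.0], [1.0, -3.0]

def matvec(M, x):
    return [M[0][0] * x[0] + M[0][1] * x[1],
            M[1][0] * x[0] + M[1][1] * x[1]]

Cx1, Cx2 = matvec(C, x1), matvec(C, x2)
dot = x1[0] * x2[0] + x1[1] * x2[1]   # = -1, so the eigenvectors are not orthogonal
```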
6.2.6 Transformation

The special eigenvalue problem (6.2.11) includes an unsymmetric matrix $C$. In order to deter-
mine the Rayleigh quotient it is necessary to have a symmetric matrix in the special eigenvalue
problem (see script, section about matrix eigenvalue problems),
\[ \left( C - F_c\, 1 \right) x = 0 , \qquad (6.2.35) \]
and in detail
\[ \left\{ \begin{bmatrix} \tfrac{21}{20} kl & \tfrac{9}{40} kl \\ \tfrac{9}{20} kl & \tfrac{21}{40} kl \end{bmatrix}
- F_c \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \right\}
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}
= \begin{bmatrix} 0 \\ 0 \end{bmatrix} . \qquad (6.2.36) \]
The first step is to transform the matrix $C$ into a symmetric matrix. Because the matrices are
so simple, it is easy to see that, if the second column of the matrix is multiplied by 2, the
matrix becomes symmetric,
\[ \left\{ \begin{bmatrix} \tfrac{21}{20} kl & \tfrac{9}{20} kl \\ \tfrac{9}{20} kl & \tfrac{21}{20} kl \end{bmatrix}
- F_c \begin{bmatrix} 1 & 0 \\ 0 & 2 \end{bmatrix} \right\}
\begin{bmatrix} x_1 \\ \tfrac{1}{2} x_2 \end{bmatrix}
= \begin{bmatrix} 0 \\ 0 \end{bmatrix} , \qquad (6.2.37) \]
and in matrix notation, with the newly defined matrices $D$ and $E^2$ and a new vector $q$,
\[ \left( D - F_c\, E^2 \right) q = 0 . \qquad (6.2.38) \]
Because the matrix $E^2 = E^T E$ is a diagonal and symmetric matrix, the matrices $E$ and $E^T$ are
diagonal and symmetric matrices, too,
\[ E^2 = \begin{bmatrix} 1 & 0 \\ 0 & 2 \end{bmatrix}
\quad\Rightarrow\quad E = E^T = \begin{bmatrix} 1 & 0 \\ 0 & \sqrt{2} \end{bmatrix} . \qquad (6.2.39) \]
Because the matrix $E$ is a diagonal and symmetric matrix, the inverse $E^{-1}$ is a diagonal and
symmetric matrix, too,
\[ E^{-1} = \begin{bmatrix} a & b \\ c & d \end{bmatrix}
\quad\Rightarrow\quad
E^{-1} E = \begin{bmatrix} a & b \\ c & d \end{bmatrix}
\begin{bmatrix} 1 & 0 \\ 0 & \sqrt{2} \end{bmatrix}
= \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = 1 , \qquad (6.2.40) \]
and this implies finally
\[ E^{-1} = \begin{bmatrix} 1 & 0 \\ 0 & \tfrac{1}{2}\sqrt{2} \end{bmatrix} ,
\quad\text{and}\quad E^{-1} = \left( E^{-1} \right)^T . \qquad (6.2.41) \]
The equation (6.2.38) is again a general eigenvalue problem, but now with a symmetric matrix
$D$. But in order to compute the Rayleigh quotient, it is necessary to set up a special eigenvalue
problem again. In the next step the identity $1 = E^{-1} E$ is inserted in equation (6.2.38), like this,
\[ \left( D - F_c\, E^2 \right) E^{-1} E\, q = 0 , \qquad (6.2.42) \]
then the whole equation is multiplied with $E^{-1}$ from the left-hand side,
\[ \left( E^{-1} D\, E^{-1} - F_c\, E^{-1} E^T E\, E^{-1} \right) E\, q = 0 , \qquad (6.2.43) \]
and with the relation $E = E^T$ for symmetric matrices,
\[ \left( E^{-1} D\, E^{-1} - F_c\, 1 \cdot 1 \right) E\, q = 0 . \qquad (6.2.44) \]
With relation (6.2.41) the first term in equation (6.2.44) describes a congruence transformation,
so that the matrix $F$ stays symmetric (see script, section about the characteristics of congruence
transformations),
\[ F = E^{-1} D\, E^{-1} = \left( E^{-1} \right)^T D\, E^{-1} , \qquad (6.2.45) \]
and with the matrix $F$ given by
\[ F = \begin{bmatrix} 1 & 0 \\ 0 & \tfrac{1}{2}\sqrt{2} \end{bmatrix}
\begin{bmatrix} \tfrac{21}{20} kl & \tfrac{9}{20} kl \\ \tfrac{9}{20} kl & \tfrac{21}{20} kl \end{bmatrix}
\begin{bmatrix} 1 & 0 \\ 0 & \tfrac{1}{2}\sqrt{2} \end{bmatrix}
= \begin{bmatrix} \tfrac{21}{20} kl & \tfrac{9}{40}\sqrt{2}\, kl \\
\tfrac{9}{40}\sqrt{2}\, kl & \tfrac{21}{40} kl \end{bmatrix} . \qquad (6.2.46) \]
Furthermore a new vector $p$ is defined by
\[ p = E\, q
= \begin{bmatrix} 1 & 0 \\ 0 & \sqrt{2} \end{bmatrix}
\begin{bmatrix} x_1 \\ \tfrac{1}{2} x_2 \end{bmatrix}
\quad\Rightarrow\quad
p = \begin{bmatrix} x_1 \\ \tfrac{1}{2}\sqrt{2}\, x_2 \end{bmatrix} . \qquad (6.2.47) \]
Finally combining these results implies a special eigenvalue problem with a symmetric matrix $F$
and a vector $p$,
\[ \left( F - F_c\, 1 \right) p = 0 . \qquad (6.2.48) \]
Computing the characteristic equation like in equations (6.2.20) and (6.2.21) yields
\[ \det \left( F - F_c\, 1 \right) = 0 , \qquad (6.2.49) \]
resp. in complete notation,
\[ \det \begin{bmatrix} \tfrac{21}{20} kl - F_c & \tfrac{9}{40}\sqrt{2}\, kl \\
\tfrac{9}{40}\sqrt{2}\, kl & \tfrac{21}{40} kl - F_c \end{bmatrix} = 0 , \qquad (6.2.50) \]
and this finally implies the same characteristic equation as in (6.2.22),
\[ \left( \tfrac{21}{20} kl - F_c \right) \left( \tfrac{21}{40} kl - F_c \right)
- \tfrac{81}{800} k^2 l^2 = 0 . \qquad (6.2.51) \]
Having the same characteristic equation implies that this problem has the same eigenvalues, i.e.
it is the same eigenvalue problem, just in another notation. With this symmetric eigenvalue
problem it is possible to compute the Rayleigh quotient,
\[ \Lambda_1 = R \left( \tilde{p} \right)
= \frac{\tilde{p}^T F\, \tilde{p}}{\tilde{p}^T \tilde{p}} ,
\quad\text{with}\quad \Lambda_1 \leq F_{c1} , \qquad (6.2.52) \]
with an approximated eigenvector $\tilde{p}$. The Rayleigh quotient $\Lambda_1$ is a good approximation, resp.
a lower bound, for the dominant eigenvalue.
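For the symmetric $F$ the Rayleigh quotient of any trial vector stays below the dominant eigenvalue $F_{c1}$, and reaches it exactly for the corresponding eigenvector $p^1 = E\, q^1$. A numerical sketch (the trial vector below is an arbitrary choice, not from the notes):

```python
import math

kl = 10.0 * 200.0
F = [[21 / 20 * kl, 9 * math.sqrt(2) / 40 * kl],
     [9 * math.sqrt(2) / 40 * kl, 21 / 40 * kl]]

def rayleigh(p):
    # R(p) = p^T F p / p^T p, cf. eq. (6.2.52)
    Fp = [F[0][0] * p[0] + F[0][1] * p[1],
          F[1][0] * p[0] + F[1][1] * p[1]]
    return (p[0] * Fp[0] + p[1] * Fp[1]) / (p[0] * p[0] + p[1] * p[1])

p1 = [1.0, math.sqrt(2) / 3.0]   # = E q with q = [1, 1/3], the first eigenvector
R_exact = rayleigh(p1)           # equals F_c1 = 2400 kN
R_trial = rayleigh([1.0, 0.4])   # a rough guess, stays below 2400 kN
```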
6.3 Fundamentals of Tensors in Index Notation

6.3.1 The Coefficient Matrices of Tensors

Like in the lectures, a tensor could be described by a coefficient matrix $f_{ij}$ and a basis given by
$\varphi^{ij}$. First do not look at the basis, just look at the coefficient matrices. In this exercise some
of the most important rules to deal with the coefficient matrices are recapitulated. The Einstein
summation convention implies for the coefficient matrix $A_i^{\;.j}$ and an arbitrary basis $\varphi^i_{\;.j}$,
\[ A_i^{\;.j}\, \varphi^i_{\;.j} = \sum_{i=1}^{3} \sum_{j=1}^{3} A_i^{\;.j}\, \varphi^i_{\;.j} , \]
with the coefficient matrix $A$ given in matrix notation by
\[ A = \left[ A_i^{\;.j} \right]
= \begin{bmatrix} A_1^{\;.1} & A_1^{\;.2} & A_1^{\;.3} \\
A_2^{\;.1} & A_2^{\;.2} & A_2^{\;.3} \\
A_3^{\;.1} & A_3^{\;.2} & A_3^{\;.3} \end{bmatrix} . \]
The dot in the superscript index of the expression $A_i^{\;.j}$ shows which one of the indices is the first
index, and which one is the second index. This dot represents an empty space, so in this case it is
easy to see that the subscript index $i$ is the first index and the superscript index $j$ is the second
index, i.e. the index $i$ is the row index and the index $j$ is the column index of the coefficient matrix!
This is important to know for the multiplication of coefficient matrices. For example, what is
the difference between the following products of coefficient matrices,
\[ A_{ij} B_{jk} = \;? \quad\text{and}\quad A_{ij} B_{kj} = \;? \]
The left-hand side of figure (6.11) sketches the first product, and the right-hand side the second
product.

Figure 6.11: Matrix multiplication.

This implies the following important relations,
\[ A_{ij} B_{jk} = C_i^{\;.k}
\quad\Leftrightarrow\quad
\left[ A_{ij} \right] \left[ B_{jk} \right] = \left[ C_i^{\;.k} \right]
\quad\Leftrightarrow\quad
A\, B = C , \]
and
\[ A_{ij} B_{kj} = A_{ij} \left( B_{jk} \right)^T = D_i^{\;.k}
\quad\Leftrightarrow\quad
\left[ A_{ij} \right] \left[ B_{kj} \right] = \left[ A_{ij} \right] \left[ B_{jk} \right]^T = \left[ D_i^{\;.k} \right]
\quad\Leftrightarrow\quad
A\, B^T = D , \]
because a matrix with exchanged columns and rows is the transpose of the matrix. As a short
recap, the product of a square matrix and a column matrix, resp. a (column) vector, is given by
\[ A\, u = v
\quad\Leftrightarrow\quad
\begin{bmatrix} A_{11} & A_{12} & A_{13} \\ A_{21} & A_{22} & A_{23} \\ A_{31} & A_{32} & A_{33} \end{bmatrix}
\begin{bmatrix} u_1 \\ u_2 \\ u_3 \end{bmatrix}
= \begin{bmatrix} A_{11} u_1 + A_{12} u_2 + A_{13} u_3 \\
A_{21} u_1 + A_{22} u_2 + A_{23} u_3 \\
A_{31} u_1 + A_{32} u_2 + A_{33} u_3 \end{bmatrix}
\quad\Leftrightarrow\quad
A_{ij}\, u_j = v_i . \]
For example some products of coefficient matrices in index and matrix notation,
\[ A_{ij} B_{kj} C_{kl} = A_{ij} \left( B_{jk} \right)^T C_{kl} = D_{il}
\quad\Leftrightarrow\quad A\, B^T C = D , \]
\[ A_i^{\;.j} B_{kj} C_l^{\;.k} D_{ml}
= A_i^{\;.j} \left( B_{jk} \right)^T \left( C^k_{\;.l} \right)^T \left( D_{lm} \right)^T = E_i^{\;.m}
\quad\Leftrightarrow\quad A\, B^T C^T D^T = E , \]
\[ A_{ij} B_{kj} u_k = A_{ij} \left( B_{jk} \right)^T u_k = v_i
\quad\Leftrightarrow\quad A\, B^T u = v , \]
\[ u_i\, B_{ij}\, v_j = \alpha
\quad\Leftrightarrow\quad u^T B\, v = \alpha . \]
Furthermore it is important to notice that the dummy indices could be renamed arbitrarily,
\[ A_{ij} B_{kj} = A_{im} B_{km} , \quad\text{or}\quad A_{kl}\, v_l = A_{kj}\, v_j , \quad\text{etc.} \]
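These index rules map one-to-one onto nested loops: the position of the dummy index decides whether the plain matrix or its transpose appears. The snippet below (illustrative only) evaluates $D_{ik} = A_{ij} B_{kj}$ both directly and as $A\, B^T$:

```python
A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
B = [[1, 0, 2], [0, 3, 0], [4, 0, 5]]

# Index form: D_ik = sum_j A_ij B_kj  (the second index of B is the dummy one)
D_index = [[sum(A[i][j] * B[k][j] for j in range(3)) for k in range(3)]
           for i in range(3)]

# Matrix form: D = A B^T
Bt = [list(row) for row in zip(*B)]
D_matrix = [[sum(A[i][j] * Bt[j][k] for j in range(3)) for k in range(3)]
            for i in range(3)]
```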
6.3.2 The Kronecker Delta and the Trace of a Matrix

The Kronecker delta is defined by
\[ \delta^i_{\;j} = \delta_i^{\;j} = \delta_{ij} = \delta^{ij}
= \begin{cases} 1 & \text{iff } i = j , \\ 0 & \text{iff } i \neq j . \end{cases} \]
The Kronecker deltas $\delta_{ij}$ and $\delta^{ij}$ are only defined in a Cartesian basis, where they represent the
metric coefficients. The other ones are defined in every basis, and in order to use the summation
convention, it is useful to prefer the notation with super- and subscript indices. Because the
Kronecker delta is the symmetric identity matrix, it is not necessary to differentiate the column
and row indices in index notation. As a rule of thumb, multiplication with a Kronecker delta
substitutes an index in the same position,
\[ v_k\, \delta^k_{\;j} = v_j \quad\Leftrightarrow\quad v\, I = v , \]
\[ A_i^{\;.k}\, \delta^j_{\;k} = A_i^{\;.j} \quad\Leftrightarrow\quad A\, I = A , \]
\[ A^{im}\, \delta^s_{\;i}\, \delta^k_{\;m} = A^{sk} \quad\Leftrightarrow\quad I\, A\, I = A . \]
But what is described by
\[ A_k^{\;.l}\, \delta^i_{\;l}\, \delta^k_{\;i} = A_i^{\;.i} \;? \]
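The substitution rule and the resulting trace can be demonstrated directly with small integer matrices; a sketch (not part of the original notes):

```python
n = 3
delta = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]

# A_ik delta_kj = A_ij : contracting with the Kronecker delta (the identity
# matrix) merely substitutes the index and leaves the matrix unchanged.
B = [[sum(A[i][k] * delta[k][j] for k in range(n)) for j in range(n)]
     for i in range(n)]

# Contracting both free indices with deltas yields the trace, an invariant.
trace = sum(A[i][i] for i in range(n))
```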
This is the sum over all main diagonal elements, simply called the trace of the matrix,
\[ \operatorname{tr} A = A_1^{\;1} + A_2^{\;2} + A_3^{\;3} = \operatorname{tr} \left[ A_i^{\;j} \right] , \]
and because the trace is a scalar quantity, it is independent of the basis, i.e. it is an invariant,
\[ \operatorname{tr} A = \operatorname{tr} \left[ A_i^{\;j} \right] = \operatorname{tr} \left[ \tilde{A}_i^{\;j} \right] ,
\quad\text{i.e.}\quad A_i^{\;i} = \tilde{A}_i^{\;i} . \]
For example in the 2-dimensional vector space $\mathbb{E}^2$ the Kronecker delta is defined by
\[ g_i \cdot g^k = \delta^k_{\;i}
= \begin{cases} 1 & i = k , \\ 0 & i \neq k , \end{cases} \]

Figure 6.12: Example of co- and contravariant base vectors in $\mathbb{E}^2$.

so for an arbitrary pair of co- and contravariant bases,
\[ g_1 \cdot g^2 = 0 \;\Leftrightarrow\; g_1 \perp g^2 , \qquad
g_2 \cdot g^1 = 0 \;\Leftrightarrow\; g_2 \perp g^1 , \]
\[ g_1 \cdot g^1 = 1 , \qquad g_2 \cdot g^2 = 1 . \]
6.3.3 Raising and Lowering of an Index

If the vectors $g^i$ and $g^k$ are in the same space $\mathbb{V}$, it must be possible to describe $g^k$ by a product
of the $g_m$ and some coefficients like $A^{km}$,
\[ g^k = A^{km}\, g_m . \]
Both sides of the equation are multiplied with $g^i$, and finally the index $i$ is renamed into $m$,
\[ g^k \cdot g^i = A^{km}\, g_m \cdot g^i
\;\Rightarrow\; g^{ki} = A^{km}\, \delta^i_{\;m}
\;\Rightarrow\; g^{ki} = A^{ki}
\;\Rightarrow\; g^{km} = A^{km} . \]
Something similar for the coefficients of a vector $x$ is given by
\[ x = x^i g_i = x_i g^i
\;\Rightarrow\; x^i g_{ij}\, g^j = x_j\, g^j
\;\Rightarrow\; x^i g_{ij} = x_j . \]
The $g_{ki} = g_{ik}$ are called the covariant metric coefficients, and the $g^{ki} = g^{ik}$ are called the
contravariant metric coefficients. This holds for the base vectors and for the coefficients or
coordinates of vectors and tensors, too. Raising an index with the contravariant metric coefficients
is given by
\[ g^k = g^{ki}\, g_i , \quad x^k = g^{ki}\, x_i , \quad\text{and}\quad A^{ik} = g^{ij}\, A_j^{\;.k} , \]
and lowering an index with the covariant metric coefficients is given by
\[ g_k = g_{ki}\, g^i , \quad x_k = g_{ki}\, x^i , \quad\text{and}\quad A_{ik} = g_{ij}\, A^j_{\;.k} . \]
The relations between the co- and contravariant metric coefficients are given by
\[ g^k = g^{km}\, g_m
\;\Rightarrow\; g^k \cdot g_i = g^{km}\, g_m \cdot g_i
\;\Rightarrow\; \delta^k_{\;i} = g^{km}\, g_{mi} . \]
Comparing this with $A^{-1} A = I$ implies
\[ 1 = \left[ g^{km} \right] \left[ g_{mi} \right]
\;\Rightarrow\; \left[ g^{km} \right] = \left[ g_{mi} \right]^{-1}
\;\Rightarrow\; \det \left[ g^{ik} \right] = \frac{1}{\det \left[ g_{ik} \right]} . \]
Then the determinants of the co- and contravariant metric coefficients are defined by
\[ \det \left[ g_{ik} \right] = g , \quad\text{and}\quad \det \left[ g^{ik} \right] = \frac{1}{g} . \]
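A concrete skewed basis in $\mathbb{E}^2$ makes raising and lowering tangible. The basis below is an arbitrary choice for illustration, not from the notes:

```python
# Covariant basis g_1 = (1,0), g_2 = (1,1) -- an arbitrary, non-orthogonal choice
g1, g2 = (1.0, 0.0), (1.0, 1.0)
dot = lambda a, b: a[0] * b[0] + a[1] * b[1]

# Covariant metric coefficients g_ij = g_i . g_j and their determinant g
gij = [[dot(g1, g1), dot(g1, g2)], [dot(g2, g1), dot(g2, g2)]]
g = gij[0][0] * gij[1][1] - gij[0][1] * gij[1][0]

# Contravariant metric = inverse of [g_ij]; note det[g^ij] = 1/g
Gij = [[gij[1][1] / g, -gij[0][1] / g], [-gij[1][0] / g, gij[0][0] / g]]

# Raise the basis: g^k = g^{ki} g_i, then check the duality g^k . g_i = delta^k_i
gu1 = tuple(Gij[0][0] * a + Gij[0][1] * b for a, b in zip(g1, g2))
gu2 = tuple(Gij[1][0] * a + Gij[1][1] * b for a, b in zip(g1, g2))
```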
6.3.4 Permutation Symbols

The cross products of the Cartesian base vectors $e_i$ in the 3-dimensional Euclidean vector space
$\mathbb{E}^3$ are given by
\[ e_1 \times e_2 = e_3 = e^3 , \quad e_2 \times e_3 = e_1 = e^1 , \quad\text{and}\quad e_3 \times e_1 = e_2 = e^2 , \]
\[ e_2 \times e_1 = -e_3 = -e^3 , \quad e_3 \times e_2 = -e_1 = -e^1 , \quad\text{and}\quad e_1 \times e_3 = -e_2 = -e^2 . \]
Often the cross product is also described by a determinant,
\[ u \times v = \begin{vmatrix} e_1 & e_2 & e_3 \\ u_1 & u_2 & u_3 \\ v_1 & v_2 & v_3 \end{vmatrix}
= e_1 \left( u_2 v_3 - u_3 v_2 \right)
+ e_2 \left( u_3 v_1 - u_1 v_3 \right)
+ e_3 \left( u_1 v_2 - u_2 v_1 \right) . \]
The permutation symbol in Cartesian coordinates is given by
\[ e_{ijk} = \begin{cases}
+1 & \text{iff } (i,j,k) \text{ is an even permutation of } (1,2,3) , \\
-1 & \text{iff } (i,j,k) \text{ is an odd permutation of } (1,2,3) , \\
0 & \text{if two or more indices are equal.} \end{cases} \]
The cross products of the Cartesian base vectors could be described by the permutation symbols
like this,
\[ e_i \times e_j = e_{ijk}\, e^k , \]
and for example
\[ e_1 \times e_2 = e_{121}\, e^1 + e_{122}\, e^2 + e_{123}\, e^3
= 0\, e^1 + 0\, e^2 + 1\, e^3 = e^3 , \]
\[ e_1 \times e_3 = e_{131}\, e^1 + e_{132}\, e^2 + e_{133}\, e^3
= 0\, e^1 + (-1)\, e^2 + 0\, e^3 = -e^2 . \]
The general permutation symbol is given by the covariant symbol,
\[ \varepsilon_{ijk} = \begin{cases}
+\sqrt{g} & \text{iff } (i,j,k) \text{ is an even permutation of } (1,2,3) , \\
-\sqrt{g} & \text{iff } (i,j,k) \text{ is an odd permutation of } (1,2,3) , \\
0 & \text{if two or more indices are equal,} \end{cases} \]
or by the contravariant symbol,
\[ \varepsilon^{ijk} = \begin{cases}
+\frac{1}{\sqrt{g}} & \text{iff } (i,j,k) \text{ is an even permutation of } (1,2,3) , \\
-\frac{1}{\sqrt{g}} & \text{iff } (i,j,k) \text{ is an odd permutation of } (1,2,3) , \\
0 & \text{if two or more indices are equal.} \end{cases} \]
With these relations the cross products of covariant base vectors are given by
\[ g_i \times g_j = \varepsilon_{ijk}\, g^k , \]
and for the corresponding contravariant base vectors
\[ g^i \times g^j = \varepsilon^{ijk}\, g_k , \]
and the following relations between the Cartesian and the general permutation symbols hold,
\[ \varepsilon_{ijk} = \sqrt{g}\, e_{ijk} , \quad\text{and}\quad
e_{ijk} = \frac{1}{\sqrt{g}}\, \varepsilon_{ijk} , \]
and
\[ \varepsilon^{ijk} = \frac{1}{\sqrt{g}}\, e^{ijk} , \quad\text{and}\quad
e^{ijk} = \sqrt{g}\, \varepsilon^{ijk} . \]
An important relation, in order to simplify expressions with permutation symbols, is given by
\[ e^{ijk}\, e_{mnk} = \delta^i_{\;m}\, \delta^j_{\;n} - \delta^i_{\;n}\, \delta^j_{\;m} . \]
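The Cartesian permutation symbol and the $e$-$\delta$ identity above can be verified exhaustively. The closed form $e_{ijk} = (i-j)(j-k)(k-i)/2$ for indices in $\{1,2,3\}$ is a well-known shortcut, not from the notes:

```python
def e(i, j, k):
    # Cartesian permutation symbol for i, j, k in {1, 2, 3}
    return (i - j) * (j - k) * (k - i) // 2

def delta(i, j):
    return 1 if i == j else 0

# Verify e_ijk e_mnk = delta_im delta_jn - delta_in delta_jm for all indices
ok = all(
    sum(e(i, j, k) * e(m, n, k) for k in (1, 2, 3))
    == delta(i, m) * delta(j, n) - delta(i, n) * delta(j, m)
    for i in (1, 2, 3) for j in (1, 2, 3)
    for m in (1, 2, 3) for n in (1, 2, 3))

# Cross product via (u x v)_k = u_i v_j e_ijk
def cross(u, v):
    return [sum(u[i - 1] * v[j - 1] * e(i, j, k)
                for i in (1, 2, 3) for j in (1, 2, 3)) for k in (1, 2, 3)]
```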
6.3.5 Exercises

1. Simplify the index notation expressions and write down the matrix notation form.

(a) $A_i^{\;.j}\, B_{kj}\, C^k_{\;.l} = $

(b) $A_{ij}\, B_{ik}\, C_{kl} = $

(c) $C_m^{\;.n}\, D_n^{\;.m} = $

(d) $D_{mn}\, E^m_{\;.l}\, u^l = $

(e) $u_i\, D^{.i}_{j}\, E^{.j}_{k} = $

2. Simplify the index notation expressions.

(a) $A^{ij}\, g_{jk} = $

(b) $A_{ij}\, \delta^j_{\;k} = $

(c) $A_{ij}\, B^{jk}\, \delta^i_{\;m}\, \delta^n_{\;k} = $

(d) $A^{kl}\, \delta^i_{\;j}\, \delta^j_{\;m}\, g_{ml} = $

(e) $A^{ij}\, B^{kj}\, g_{im}\, g_{kn}\, \delta^n_{\;m} = $

(f) $A_k^{\;.l}\, B^{km}\, g_{mi}\, g^{in}\, \delta^j_{\;n} = $

3. Rewrite these expressions in index notation w.r.t. a Cartesian basis, and describe what kind
of quantity the result is.

(a) $\left( a \times b \right) \cdot c = $

(b) $a \times b + \left( a \cdot d \right) c = $

(c) $\left( a \times b \right) \cdot \left( c \times d \right) = $

(d) $a \times \left( b \times c \right) = $

4. Combine the base vectors of a general basis and simplify the expressions in index notation.

(a) $u \cdot v = \left( u^i g_i \right) \cdot \left( v^j g_j \right) = $

(b) $u \cdot v = \left( u^i g_i \right) \cdot \left( v_j g^j \right) = $

(c) $u \times v = \left( u^i g_i \right) \times \left( v^j g_j \right) = $

(d) $u \times v = \left( u_i g^i \right) \times \left( v^j g_j \right) = $

(e) $\left( u \times v \right) \cdot w = \left[ \left( u^i g_i \right) \times \left( v^j g_j \right) \right] \cdot \left( w_k g^k \right) = $

(f) $\left( u \cdot v \right) \left( w \times x \right) = \left[ \left( u^i g_i \right) \cdot \left( v^j g_j \right) \right] \left[ \left( w_k g^k \right) \times \left( x_l g^l \right) \right] = $

(g) $u \times \left( v \times w \right) = \left( u_i g^i \right) \times \left[ \left( v^j g_j \right) \times \left( w^k g_k \right) \right] = $

(h) $\left( u \times v \right) \cdot \left( w \times x \right) = \left[ \left( u^i g_i \right) \times \left( v^j g_j \right) \right] \cdot \left[ \left( w_k g^k \right) \times \left( x_l g^l \right) \right] = $
188 Chapter 6. Exercises
6.3.6 Solutions

1. Simplify the index notation expressions and write down the matrix notation form.
   (a) \(A_i^{.j} B_{kj} C^k_{.l} = A_i^{.j} (B_{jk})^T C^k_{.l} = D_{il}\), i.e. \(A\, B^T C = D\).
   (b) \(A_{ij} B_{ik} C_{kl} = (A_{ji})^T B_{ik} C_{kl} = D_{jl}\), i.e. \(A^T B\, C = D\).
   (c) \(C_m^{.n} D_n^{.m} = E_m^{.m} = \operatorname{tr}(C\, D) = \operatorname{tr} E = \alpha\).
   (d) \(D_{mn} E^m_{.l} u^l = (D_{nm})^T E^m_{.l} u^l = v_n\), i.e. \(D^T E\, u = v\).
   (e) \(u_i D_j^{.i} E_k^{.j} = u_i \bigl(D^i_{.j}\bigr)^T \bigl(E^j_{.k}\bigr)^T = v_k\), i.e. \(u^T D^T E^T = v^T\).

2. Simplify the index notation expressions.
   (a) \(A^{ij} g_{jk} = A^i_{.k}\)
   (b) \(A^{ij} \delta^k_j = A^{ik}\)
   (c) \(A_{ij} B^{jk} \delta^i_m \delta^n_k = A_{mj} B^{jn} = C_m^{.n}\)
   (d) \(A^{kl} \delta^i_j \delta^j_m g_{ml} = A^k_{.i}\)
   (e) \(A^{ij} B^{kj} g_{im} g_{kn} \delta^n_m = A_m^{.j} \bigl(B^j_{.n}\bigr)^T \delta^n_m = A_m^{.j} \bigl(B^j_{.m}\bigr)^T = \alpha\)
   (f) \(A_k^{.l} B^{km} g_{mi} g^{in} \delta^j_n = \bigl(A^l_{.k}\bigr)^T B^{kj} = C^{lj}\)

3. Rewrite these expressions in index notation w.r.t. a Cartesian basis, and describe what kind of quantity the result is.
   (a) \((a \times b) \cdot c = a_i b_j e_{ijk} c_k = \alpha\), a scalar.
   (b) \(a \times b + (a \cdot d)\, c = v\), with \(a_i b_j e_{ijk} + (a_i d_i)\, c_k = v_k\), a vector.
   (c) \((a \times b) \cdot (c \times d)
        = \bigl(a_i b_j e_{ijk}\bigr)\bigl(c_m d_n e_{mnk}\bigr)
        = a_i b_j c_m d_n\, e_{ijk} e_{mnk}
        = a_i b_j c_m d_n\, \bigl(\delta_{im}\delta_{jn} - \delta_{in}\delta_{jm}\bigr)
        = a_m b_n c_m d_n - a_n b_m c_m d_n
        = (a \cdot c)(b \cdot d) - (a \cdot d)(b \cdot c) = \alpha\), a scalar.
   (d) \(a \times (b \times c) = v\), with
       \(v_m = e_{lkm}\, a_l \bigl(e_{ijk}\, b_i c_j\bigr)
        = a_l b_i c_j\, e_{mlk}\, e_{ijk}
        = a_l b_i c_j\, \bigl(\delta_{mi}\delta_{lj} - \delta_{mj}\delta_{li}\bigr)
        = a_j b_m c_j - a_i b_i c_m\),
       i.e. \(a \times (b \times c) = (a \cdot c)\, b - (a \cdot b)\, c = v\), a vector.

4. Combine the base vectors of a general basis and simplify the expressions in index notation.
   (a) \(u \cdot v = (u_i g^i) \cdot (v_j g^j) = u_i v_j\, g^i \cdot g^j = u_i v_j\, g^{ij} = u_i v^i = \alpha\)
   (b) \(u \cdot v = (u_i g^i) \cdot (v^j g_j) = u_i v^j\, g^i \cdot g_j = u_i v^j\, \delta^i_j = u_i v^i = \alpha\)
   (c) \(u \times v = (u^i g_i) \times (v^j g_j) = u^i v^j\, g_i \times g_j = u^i v^j\, \varepsilon_{ijk}\, g^k = w_k g^k = w\)
   (d) \(u \times v = (u_i g^i) \times (v^j g_j)
        = u_i v^j\, g^i \times g_j
        = u_i v^j\, g^{ik}\, g_k \times g_j
        = u^k v^j\, \varepsilon_{kjl}\, g^l
        = u^i v^j\, \varepsilon_{ijk}\, g^k = w_k g^k = w\)
   (e) \((u \times v) \cdot w = \bigl[(u^i g_i) \times (v^j g_j)\bigr] \cdot (w_k g^k)
        = u^i v^j w_k\, \varepsilon_{ijl}\, g^l \cdot g^k
        = u^i v^j w_k\, \varepsilon_{ijl}\, g^{lk}
        = u^i v^j w^l\, \varepsilon_{ijl}
        = u^i v^j w^k\, \varepsilon_{ijk} = \alpha\)
   (f) \((u \cdot v)\,(w \times x) = \bigl[(u_i g^i) \cdot (v_j g^j)\bigr]\,\bigl[(w^k g_k) \times (x^l g_l)\bigr]
        = \bigl(u_i v_j\, g^{ij}\bigr)\bigl(w^k x^l\, \varepsilon_{klm}\, g^m\bigr)
        = u_i v^i\, w^k x^l\, \varepsilon_{klm}\, g^m = y_m g^m = y\)
   (g) \(u \times (v \times w) = (u^i g_i) \times \bigl[(v^j g_j) \times (w_k g^k)\bigr]
        = (u^i g_i) \times \bigl(v^j w_k\, g^{kl}\, g_j \times g_l\bigr)
        = u_i v^j w^l\, \varepsilon_{jlm}\, \varepsilon^{imn}\, g_n
        = u_i v^j w^l\, \bigl(\delta^n_j \delta^i_l - \delta^n_l \delta^i_j\bigr)\, g_n
        = u_i w^i\, v^n g_n - u_i v^i\, w^n g_n = x^n g_n\),
       i.e. \(u \times (v \times w) = (u \cdot w)\, v - (u \cdot v)\, w = x\)
   (h) \((u \times v) \cdot (w \times x)
        = \bigl[(u_i g^i) \times (v^j g_j)\bigr] \cdot \bigl[(w^k g_k) \times (x_l g^l)\bigr]
        = \bigl(u^m v^j\, \varepsilon_{mjo}\, g^o\bigr) \cdot \bigl(w_k x_n\, \varepsilon^{knp}\, g_p\bigr)
        = u^m v^j w_k x_n\, \varepsilon_{mjo}\, \varepsilon^{knp}\, \delta^o_p
        = u^m v^j w_k x_n\, \varepsilon_{mjp}\, \varepsilon^{knp}
        = u^m v^j w_k x_n\, \bigl(\delta^k_m \delta^n_j - \delta^n_m \delta^k_j\bigr)
        = u^k v^n w_k x_n - u^n v^k w_k x_n\),
       i.e. \((u \times v) \cdot (w \times x) = (u \cdot w)(v \cdot x) - (u \cdot x)(v \cdot w) = \alpha\)
6.4 Various Products of Second Order Tensors

6.4.1 The Product of a Second Order Tensor and a Vector

The product of a second order tensor and a vector (i.e. a first order tensor) is computed by the scalar product of the last base vector of the tensor and the base vector of the vector. For example a product of a second order tensor and a vector is given by
\[
v = T u
  = \bigl(T^{ij}\, g_i \otimes g_j\bigr)\bigl(u^k\, g_k\bigr)
  = T^{ij} u^k \bigl(g_j \cdot g_k\bigr)\, g_i
  = T^{ij} u^k\, g_{jk}\, g_i
  = T^{ij} u_j\, g_i ,
\]
where the scalar product is formed between the second base vector \(g_j\) of the dyadic product and the base vector \(g_k\) of the vector, i.e.
\[
v = T u = v^i\, g_i \;,\quad\text{with}\quad v^i = T^{ij} u_j .
\]
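The rule above can be checked numerically. The sketch below (plain Python; the basis and coefficient values are arbitrary assumptions, not taken from the text) computes \(v^i = T^{ij} g_{jk} u^k\) in a non-orthonormal basis, contracting the inner base vectors through the covariant metric coefficients.

```python
# v = T u in a general basis: v^i = T^ij g_jk u^k.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# an arbitrary, non-orthonormal covariant basis g_1, g_2, g_3
g = [[1.0, 0.0, 0.0],
     [0.5, 1.0, 0.0],
     [0.2, 0.3, 1.0]]

# covariant metric coefficients g_jk = g_j . g_k
g_cov = [[dot(g[j], g[k]) for k in range(3)] for j in range(3)]

# arbitrary contravariant coefficients T^ij and u^k
T = [[6.0, 0.0, 2.0],
     [0.0, 1.0, 0.0],
     [2.0, 0.0, 5.0]]
u = [1.0, 2.0, 3.0]

# the inner base vectors are contracted via the metric: v^i = T^ij g_jk u^k
v = [sum(T[i][j] * g_cov[j][k] * u[k] for j in range(3) for k in range(3))
     for i in range(3)]
```

With an orthonormal basis \(g_{jk}\) reduces to the Kronecker delta and the formula collapses to the familiar matrix-vector product.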
6.4.2 The Tensor Product of Two Second Order Tensors

The tensor product of two second order tensors is computed by the scalar product of the two inner base vectors and the dyadic product of the two outer base vectors. For example a tensor product is given by
\[
R = T S
  = \bigl(T^{ij}\, g_i \otimes g_j\bigr)\bigl(S^{kl}\, g_k \otimes g_l\bigr)
  = T^{ij} S^{kl} \bigl(g_j \cdot g_k\bigr)\bigl(g_i \otimes g_l\bigr)
  = T^{ij} S^{kl}\, g_{jk}\; g_i \otimes g_l
  = T^{ij} S_j^{.l}\; g_i \otimes g_l ,
\]
\[
R = T S = R^{il}\, g_i \otimes g_l \;,\quad\text{with}\quad R^{il} = T^{ij} S_j^{.l} .
\]
6.4.3 The Scalar Product of Two Second Order Tensors

The scalar product of two second order tensors is computed by the scalar product of the first base vectors of the two tensors and the scalar product of the two second base vectors of the tensors, too. For example a scalar product is given by
\[
\alpha = T : S
  = \bigl(T^{ij}\, g_i \otimes g_j\bigr) : \bigl(S^{kl}\, g_k \otimes g_l\bigr)
  = T^{ij} S^{kl} \bigl(g_i \cdot g_k\bigr)\bigl(g_j \cdot g_l\bigr)
  = T^{ij} S^{kl}\, g_{ik}\, g_{jl} ,
\]
\[
\alpha = T : S = T^{ij} S_{ij} .
\]
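The double contraction can also be verified numerically. The sketch below (plain Python; the basis and the coefficients are assumed example values) computes \(\alpha = T^{ij} S^{kl} g_{ik} g_{jl}\) in a non-orthonormal basis.

```python
# alpha = T : S = T^ij S^kl g_ik g_jl in a general basis.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

g = [[1.0, 0.0, 0.0],      # arbitrary non-orthonormal basis
     [0.5, 1.0, 0.0],
     [0.2, 0.3, 1.0]]
g_cov = [[dot(g[i], g[k]) for k in range(3)] for i in range(3)]

T = [[6.0, 0.0, 2.0], [0.0, 0.0, 0.0], [2.0, 0.0, 5.0]]
S = [[1.0, 2.0, 0.0], [0.0, 3.0, 0.0], [1.0, 0.0, 4.0]]

alpha = sum(T[i][j] * S[k][l] * g_cov[i][k] * g_cov[j][l]
            for i in range(3) for j in range(3)
            for k in range(3) for l in range(3))
```

The result is basis-independent: the same number is obtained by contracting the Cartesian component matrices of the two tensors entry by entry.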
6.4.4 Exercises

1. Compute the tensor products.
   (a) \(T S = (T^{ij}\, g_i \otimes g_j)(S^{kl}\, g_k \otimes g_l) =\)
   (b) \(T S = (T^{ij}\, g_i \otimes g_j)(S_{kl}\, g^k \otimes g^l) =\)
   (c) \(T S = (T_i^{.j}\, g^i \otimes g_j)(S_k^{.l}\, g^k \otimes g_l) =\)
   (d) \(T S = (T_{ij}\, g^i \otimes g^j)(S_k^{.l}\, g^k \otimes g_l) =\)
   (e) \(T\, 1 = (T^{ij}\, g_i \otimes g_j)(\delta^l_k\, g^k \otimes g_l) =\)
   (f) \(1\, 1 = (\delta^j_i\, g^i \otimes g_j)(\delta^l_k\, g^k \otimes g_l) =\)
   (g) \(T\, g = (T^{ij}\, g_i \otimes g_j)(g^{kl}\, g_k \otimes g_l) =\)
   (h) \(T\, T = (T^{ij}\, g_i \otimes g_j)(T^{kl}\, g_k \otimes g_l) =\)
   (i) \(T\, T^T = (T^{ij}\, g_i \otimes g_j)\bigl[(T^{kl}\, g_k \otimes g_l)\bigr]^T =\)

2. Compute the scalar products.
   (a) \(T : S = (T^{ij}\, g_i \otimes g_j) : (S^{kl}\, g_k \otimes g_l) =\)
   (b) \(T : S = (T^{ij}\, g_i \otimes g_j) : (S_{kl}\, g^k \otimes g^l) =\)
   (c) \(T : S = (T_i^{.j}\, g^i \otimes g_j) : (S_k^{.l}\, g^k \otimes g_l) =\)
   (d) \(T : 1 = (T_{ij}\, g^i \otimes g^j) : (\delta^l_k\, g^k \otimes g_l) =\)
   (e) \(1 : 1 = (\delta^j_i\, g^i \otimes g_j) : (\delta^l_k\, g^k \otimes g_l) =\)
   (f) \(T : g = (T^{ij}\, g_i \otimes g_j) : (g^{kl}\, g_k \otimes g_l) =\)
   (g) \(T : T = (T^{ij}\, g_i \otimes g_j) : (T^{kl}\, g_k \otimes g_l) =\)
   (h) \(T : T^T = (T^{ij}\, g_i \otimes g_j) : \bigl[(T^{kl}\, g_k \otimes g_l)\bigr]^T =\)

3. Compute the various products.
   (a) \((T S)\, v = T S v = (T_i^{.j}\, g^i \otimes g_j)(S_k^{.l}\, g^k \otimes g_l)(v_m\, g^m) =\)
   (b) \((T : S)\, v = T : S\; v = (T_i^{.j}\, g^i \otimes g_j) : (S_k^{.l}\, g^k \otimes g_l)\,(v_m\, g^m) =\)
   (c) \(\operatorname{tr}\bigl(T\, T^T\bigr) = (\delta^j_i\, g^i \otimes g_j) : \bigl[(T^{kl}\, g_k \otimes g_l)(T^{mn}\, g_m \otimes g_n)^T\bigr] =\)
6.4.5 Solutions

1. Compute the tensor products.
   (a) \(T S = (T^{ij}\, g_i \otimes g_j)(S^{kl}\, g_k \otimes g_l)
        = T^{ij} S^{kl}\, g_{jk}\; g_i \otimes g_l
        = T^{ij} S_j^{.l}\; g_i \otimes g_l
        = R^{il}\, g_i \otimes g_l = R\)
   (b) \(T S = (T^{ij}\, g_i \otimes g_j)(S_{kl}\, g^k \otimes g^l)
        = T^{ij} S_{kl}\, \delta_j^k\; g_i \otimes g^l
        = T^{ij} S_{jl}\; g_i \otimes g^l
        = R^i_{.l}\, g_i \otimes g^l = R\)
   (c) \(T S = (T_i^{.j}\, g^i \otimes g_j)(S_k^{.l}\, g^k \otimes g_l)
        = T_i^{.j} S_k^{.l}\, \delta_j^k\; g^i \otimes g_l
        = T_i^{.j} S_j^{.l}\; g^i \otimes g_l
        = R_i^{.l}\, g^i \otimes g_l = R\)
   (d) \(T S = (T_{ij}\, g^i \otimes g^j)(S_k^{.l}\, g^k \otimes g_l)
        = T_{ij} S_k^{.l}\, g^{jk}\; g^i \otimes g_l
        = T_{ij} S^{jl}\; g^i \otimes g_l
        = R_i^{.l}\, g^i \otimes g_l = R\)
   (e) \(T\, 1 = (T^{ij}\, g_i \otimes g_j)(\delta^l_k\, g^k \otimes g_l)
        = T^{ij} \delta^l_k\, \delta_j^k\; g_i \otimes g_l
        = T^{ij} \delta^l_j\; g_i \otimes g_l
        = T^{ij}\, g_i \otimes g_j = T\)
   (f) \(1\, 1 = (\delta^j_i\, g^i \otimes g_j)(\delta^l_k\, g^k \otimes g_l)
        = \delta^j_i \delta^l_k\, \delta_j^k\; g^i \otimes g_l
        = \delta^l_i\; g^i \otimes g_l
        = \delta^j_i\; g^i \otimes g_j = 1\)
   (g) \(T\, g = (T^{ij}\, g_i \otimes g_j)(g^{kl}\, g_k \otimes g_l)
        = T^{ij} g^{kl}\, g_{jk}\; g_i \otimes g_l
        = T^{ij} \delta_j^l\; g_i \otimes g_l
        = T^{ij}\, g_i \otimes g_j = T\)
   (h) \(T\, T = (T^{ij}\, g_i \otimes g_j)(T^{kl}\, g_k \otimes g_l)
        = T^{ij} T^{kl}\, g_{jk}\; g_i \otimes g_l
        = T^{ij} T_j^{.l}\; g_i \otimes g_l = T^2\)
   (i) \(T\, T^T = (T^{ij}\, g_i \otimes g_j)\bigl[(T^{kl}\, g_k \otimes g_l)\bigr]^T
        = (T^{ij}\, g_i \otimes g_j)(T^{kl}\, g_l \otimes g_k)
        = T^{ij} T^{kl}\, g_{jl}\; g_i \otimes g_k
        = T^{ij} T^l_{.j}\; g_i \otimes g_l\), or
       \(T\, T^T = (T^{ij}\, g_i \otimes g_j)(T^{lk}\, g_k \otimes g_l)
        = T^{ij} T^{lk}\, g_{jk}\; g_i \otimes g_l
        = T^{ij} T^l_{.j}\; g_i \otimes g_l\)

2. Compute the scalar products.
   (a) \(T : S = (T^{ij}\, g_i \otimes g_j) : (S^{kl}\, g_k \otimes g_l)
        = T^{ij} S^{kl}\, g_{ik}\, g_{jl}
        = T^{ij} S_{ij} = \alpha\)
   (b) \(T : S = (T^{ij}\, g_i \otimes g_j) : (S_{kl}\, g^k \otimes g^l)
        = T^{ij} S_{kl}\, \delta_i^k \delta_j^l
        = T^{ij} S_{ij} = \alpha\)
   (c) \(T : S = (T_i^{.j}\, g^i \otimes g_j) : (S_k^{.l}\, g^k \otimes g_l)
        = T_i^{.j} S_k^{.l}\, g^{ik}\, g_{jl}
        = T_i^{.j} S^i_{.j} = \alpha\), or
       \(T_i^{.j} S_k^{.l}\, g^{ik} g_{jl} = T_{il} S^{il} = \alpha\), or
       \(T_i^{.j} S_k^{.l}\, g^{ik} g_{jl} = T^{kj} S_{kj} = \alpha\)
   (d) \(T : 1 = (T_{ij}\, g^i \otimes g^j) : (\delta^l_k\, g^k \otimes g_l)
        = T_{ij} \delta^l_k\, g^{ik}\, \delta^j_l
        = T_{ij}\, g^{ij} = T_i^{.i} = \operatorname{tr} T\)
   (e) \(1 : 1 = (\delta^j_i\, g^i \otimes g_j) : (\delta^l_k\, g^k \otimes g_l)
        = \delta^j_i \delta^l_k\, g^{ik}\, g_{jl}
        = \delta^j_i \delta^i_j = \delta^i_i = 3 = \operatorname{tr} 1\)
   (f) \(T : g = (T^{ij}\, g_i \otimes g_j) : (g^{kl}\, g_k \otimes g_l)
        = T^{ij} g^{kl}\, g_{ik}\, g_{jl}
        = T^{ij} \delta^l_i\, g_{jl}
        = T^{ij} g_{ji} = T_i^{.i} = \operatorname{tr} T\)
   (g) \(T : T = (T^{ij}\, g_i \otimes g_j) : (T^{kl}\, g_k \otimes g_l)
        = T^{ij} T^{kl}\, g_{ik}\, g_{jl}
        = T^{ij} T_{ij} = \operatorname{tr}\bigl(T\, T^T\bigr)\)
   (h) \(T : T^T = (T^{ij}\, g_i \otimes g_j) : (T^{kl}\, g_l \otimes g_k)
        = T^{ij} T^{kl}\, g_{il}\, g_{jk}
        = T^{ij} T_{ji} = \operatorname{tr}\bigl(T^2\bigr)\), or
       \(T : T^T = (T^{ij}\, g_i \otimes g_j) : (T^{lk}\, g_k \otimes g_l)
        = T^{ij} T^{lk}\, g_{ik}\, g_{jl}
        = T^{ij} T_{ji} = \operatorname{tr}\bigl(T^2\bigr)\)

3. Compute the various products.
   (a) \((T S)\, v = T S v = (T_i^{.j}\, g^i \otimes g_j)(S_k^{.l}\, g^k \otimes g_l)(v_m\, g^m)
        = T_i^{.j} S_k^{.l}\, \delta_j^k\, v_m\, (g^i \otimes g_l)\, g^m
        = T_i^{.k} S_k^{.l}\, v_m\, \delta_l^m\; g^i
        = T_i^{.k} S_k^{.l}\, v_l\; g^i = u_i\, g^i = u\)
   (b) \((T : S)\, v = T : S\; v = (T_i^{.j}\, g^i \otimes g_j) : (S_k^{.l}\, g^k \otimes g_l)\,(v_m\, g^m)
        = T_i^{.j} S_k^{.l}\, g^{ik} g_{jl}\; v_m\, g^m
        = T^{kj} S_{kj}\; v_m\, g^m = \alpha\, v = w\)
   (c) \(\operatorname{tr}\bigl(T\, T^T\bigr)
        = (\delta^j_i\, g^i \otimes g_j) : \bigl[(T^{kl}\, g_k \otimes g_l)(T^{mn}\, g_m \otimes g_n)^T\bigr]
        = (\delta^j_i\, g^i \otimes g_j) : \bigl(T^{kl} T^m_{.l}\; g_k \otimes g_m\bigr)
        = \delta^j_i\, T^{kl} T^m_{.l}\, \delta^i_k\, g_{jm}
        = T^{jl} T^m_{.l}\, g_{jm}
        = T^j_{.l}\, T_j^{.l} = T : T\)
6.5 Deformation Mappings

6.5.1 Tensors of the Tangent Mappings

The material deformation gradient \(F_X\), i.e. the gradient of the deformation w.r.t. the material coordinates, is given by
\[
F_X = g_i \otimes G^i , \tag{6.5.1}
\]
the local geometry gradient \(K_\Theta\), i.e. the gradient of the reference geometry w.r.t. the parameters, is given by
\[
K_\Theta = G_i \otimes Z^i , \tag{6.5.2}
\]
and the local deformation gradient \(F_\Theta\), i.e. the gradient of the deformation w.r.t. the parameters, is given by
\[
F_\Theta = g_i \otimes Z^i . \tag{6.5.3}
\]
The transposes and inverses of the various tangent mappings are given by
\[
F_X = g_i \otimes G^i ,\quad
F_X^T = G^i \otimes g_i ,\quad
F_X^{-1} = G_i \otimes g^i ,\quad
F_X^{-T} = g^i \otimes G_i , \tag{6.5.4}
\]
\[
K_\Theta = G_i \otimes Z^i ,\quad
K_\Theta^T = Z^i \otimes G_i ,\quad
K_\Theta^{-1} = Z_i \otimes G^i ,\quad
K_\Theta^{-T} = G^i \otimes Z_i , \tag{6.5.5}
\]
\[
F_\Theta = g_i \otimes Z^i ,\quad
F_\Theta^T = Z^i \otimes g_i ,\quad
F_\Theta^{-1} = Z_i \otimes g^i ,\quad
F_\Theta^{-T} = g^i \otimes Z_i . \tag{6.5.6}
\]
The identity tensors are introduced separately for the various coordinate systems by
\[
\text{identity tensor of the parameter space: } 1_\Theta := Z_i \otimes Z^i , \tag{6.5.7}
\]
\[
\text{identity tensor of the undeformed space: } 1_X := G_i \otimes G^i , \tag{6.5.8}
\]
\[
\text{identity tensor of the deformed space: } 1_x := g_i \otimes g^i . \tag{6.5.9}
\]
The various metric tensors of the different tangent spaces are introduced by
\[
\text{local metric tensor of the undeformed body: } M_\Theta = K_\Theta^T K_\Theta = G_{ij}\, Z^i \otimes Z^j , \tag{6.5.10}
\]
\[
\text{local metric tensor of the deformed body: } m_\Theta = F_\Theta^T F_\Theta = g_{ij}\, Z^i \otimes Z^j , \tag{6.5.11}
\]
\[
\text{material metric tensor of the undeformed body: } M_X = 1_X^T\, 1_X = G_{ij}\, G^i \otimes G^j , \tag{6.5.12}
\]
\[
\text{material metric tensor of the deformed body: } m_X = F_X^T F_X = g_{ij}\, G^i \otimes G^j , \tag{6.5.13}
\]
\[
\text{spatial metric tensor of the undeformed body: } M_x = F_X^{-T} F_X^{-1} = G_{ij}\, g^i \otimes g^j , \tag{6.5.14}
\]
\[
\text{spatial metric tensor of the deformed body: } m_x = 1_x^T\, 1_x = g_{ij}\, g^i \otimes g^j . \tag{6.5.15}
\]
The local strain tensor is given by
\[
E_\Theta := \tfrac{1}{2}\,(m_\Theta - M_\Theta) = \tfrac{1}{2}\,(g_{ij} - G_{ij})\; Z^i \otimes Z^j , \tag{6.5.16}
\]
the material strain tensor is given by
\[
E_X := \tfrac{1}{2}\,(m_X - M_X) = \tfrac{1}{2}\,(g_{ij} - G_{ij})\; G^i \otimes G^j , \tag{6.5.17}
\]
and finally the spatial strain tensor is given by
\[
E_x := \tfrac{1}{2}\,(m_x - M_x) = \tfrac{1}{2}\,(g_{ij} - G_{ij})\; g^i \otimes g^j . \tag{6.5.18}
\]
6.5.2 Exercises

1. Compute the tensor products. What is represented by them?
   (a) \(K_\Theta^{-1} K_\Theta^{-T} =\)
   (b) \(F_\Theta^{-1} F_\Theta^{-T} =\)
   (c) \(F_X\, F_X^T =\)
   (d) \(F_X^{-1} F_X^{-T} =\)
   (e) \(1_X\, 1_X^T =\)
   (f) \(1_x\, 1_x^T =\)

2. Compute the tensor products in index notation, and name the result with the correct name.
   (a) \(K_\Theta^{-T}\, M_\Theta\, K_\Theta^{-1} =\)
   (b) \(K_\Theta^{T}\, M_X\, K_\Theta =\)
   (c) \(F_\Theta^{-T}\, m_\Theta\, F_\Theta^{-1} =\)
   (d) \(F_\Theta^{T}\, m_x\, F_\Theta =\)
   (e) \(F_X^{-T}\, E_X\, F_X^{-1} =\)
   (f) \(F_X^{T}\, E_x\, F_X =\)

3. Compute the tensor and scalar products in index notation. Rewrite the results in tensor notation.
   (a) \(B_\Theta = M_\Theta^{-1}\, m_\Theta =\)
   (b) \(B_X = M_X^{-1}\, m_X =\)
   (c) \(B_x = M_x^{-1}\, m_x =\)
   (d) \(B_\Theta : 1_\Theta =\)
   (e) \(B_\Theta^T : B_\Theta =\)
   (f) \(B_\Theta^T\, B_\Theta^T : B_\Theta =\)
6.5.3 Solutions

1. Compute the tensor products. What is represented by them?
   (a) \(K_\Theta^{-1} K_\Theta^{-T} = (Z_i \otimes G^i)(G^j \otimes Z_j) = G^{ij}\, Z_i \otimes Z_j = M_\Theta^{-1} = \bigl(K_\Theta^T K_\Theta\bigr)^{-1}\)
   (b) \(F_\Theta^{-1} F_\Theta^{-T} = (Z_i \otimes g^i)(g^j \otimes Z_j) = g^{ij}\, Z_i \otimes Z_j = m_\Theta^{-1} = \bigl(F_\Theta^T F_\Theta\bigr)^{-1}\)
   (c) \(F_X\, F_X^T = (g_i \otimes G^i)(G^j \otimes g_j) = G^{ij}\, g_i \otimes g_j = M_x^{-1} = \bigl(F_X^{-T} F_X^{-1}\bigr)^{-1}\)
   (d) \(F_X^{-1} F_X^{-T} = (G_i \otimes g^i)(g^j \otimes G_j) = g^{ij}\, G_i \otimes G_j = m_X^{-1} = \bigl(F_X^T F_X\bigr)^{-1}\)
   (e) \(1_X\, 1_X^T = (G_i \otimes G^i)(G^j \otimes G_j) = G^{ij}\, G_i \otimes G_j = M_X^{-1}\)
   (f) \(1_x\, 1_x^T = (g_i \otimes g^i)(g^j \otimes g_j) = g^{ij}\, g_i \otimes g_j = m_x^{-1}\)

2. Compute the tensor products in index notation, and name the result with the correct name.
   (a) \(K_\Theta^{-T}\, M_\Theta\, K_\Theta^{-1}
        = (G^k \otimes Z_k)(G_{ij}\, Z^i \otimes Z^j)(Z_l \otimes G^l)
        = G_{ij}\, \delta^i_k \delta^j_l\, \bigl(G^k \otimes G^l\bigr)
        = G_{ij}\, G^i \otimes G^j = M_X\),
       the material metric tensor of the undeformed body.
   (b) \(K_\Theta^{T}\, M_X\, K_\Theta
        = (Z^k \otimes G_k)(G_{ij}\, G^i \otimes G^j)(G_l \otimes Z^l)
        = G_{ij}\, \delta^i_k \delta^j_l\, \bigl(Z^k \otimes Z^l\bigr)
        = G_{ij}\, Z^i \otimes Z^j = M_\Theta\),
       the local metric tensor of the undeformed body.
   (c) \(F_\Theta^{-T}\, m_\Theta\, F_\Theta^{-1}
        = (g^k \otimes Z_k)(g_{ij}\, Z^i \otimes Z^j)(Z_l \otimes g^l)
        = g_{ij}\, \delta^i_k \delta^j_l\, \bigl(g^k \otimes g^l\bigr)
        = g_{ij}\, g^i \otimes g^j = m_x\),
       the spatial metric tensor of the deformed body.
   (d) \(F_\Theta^{T}\, m_x\, F_\Theta
        = (Z^k \otimes g_k)(g_{ij}\, g^i \otimes g^j)(g_l \otimes Z^l)
        = g_{ij}\, \delta^i_k \delta^j_l\, \bigl(Z^k \otimes Z^l\bigr)
        = g_{ij}\, Z^i \otimes Z^j = m_\Theta\),
       the local metric tensor of the deformed body.
   (e) \(F_X^{-T}\, E_X\, F_X^{-1}
        = (g^k \otimes G_k)\,\tfrac{1}{2}(g_{ij} - G_{ij})\,(G^i \otimes G^j)\,(G_l \otimes g^l)
        = \tfrac{1}{2}(g_{ij} - G_{ij})\, \delta^i_k \delta^j_l\, \bigl(g^k \otimes g^l\bigr)
        = \tfrac{1}{2}(g_{ij} - G_{ij})\, g^i \otimes g^j = E_x\),
       the spatial strain tensor.
   (f) \(F_X^{T}\, E_x\, F_X
        = (G^k \otimes g_k)\,\tfrac{1}{2}(g_{ij} - G_{ij})\,(g^i \otimes g^j)\,(g_l \otimes G^l)
        = \tfrac{1}{2}(g_{ij} - G_{ij})\, \delta^i_k \delta^j_l\, \bigl(G^k \otimes G^l\bigr)
        = \tfrac{1}{2}(g_{ij} - G_{ij})\, G^i \otimes G^j = E_X\),
       the material strain tensor.

3. Compute the tensor and scalar products in index notation. Rewrite the results in tensor notation.
   (a) \(B_\Theta = M_\Theta^{-1}\, m_\Theta
        = (G^{ij}\, Z_i \otimes Z_j)(g_{lk}\, Z^l \otimes Z^k)
        = G^{ij} g_{lk}\, \delta^l_j\, Z_i \otimes Z^k
        = G^{ij} g_{jk}\; Z_i \otimes Z^k\)
   (b) \(B_X = M_X^{-1}\, m_X
        = (G^{ij}\, G_i \otimes G_j)(g_{lk}\, G^l \otimes G^k)
        = G^{ij} g_{lk}\, \delta^l_j\, G_i \otimes G^k
        = G^{ij} g_{jk}\; G_i \otimes G^k\)
   (c) \(B_x = M_x^{-1}\, m_x
        = (G^{ij}\, g_i \otimes g_j)(g_{lk}\, g^l \otimes g^k)
        = G^{ij} g_{lk}\, \delta^l_j\, g_i \otimes g^k
        = G^{ij} g_{jk}\; g_i \otimes g^k\)
   (d) \(B_\Theta : 1_\Theta
        = \bigl(G^{ij} g_{jk}\, Z_i \otimes Z^k\bigr) : \bigl(Z_l \otimes Z^l\bigr)
        = G^{ij} g_{jk}\, Z_{il}\, Z^{kl}
        = G^{ij} g_{jk}\, \delta^k_i
        = G^{ij} g_{ji} = \operatorname{tr} B_\Theta\)
   (e) \(B_\Theta^T : B_\Theta
        = \bigl(G^{ij} g_{jk}\, Z^k \otimes Z_i\bigr) : \bigl(G^{lm} g_{mn}\, Z_l \otimes Z^n\bigr)
        = G^{ij} g_{jk}\, G^{lm} g_{mn}\, \delta^k_l\, \delta^n_i
        = G^{ij} g_{jk}\, G^{km} g_{mi} = \operatorname{tr} B_\Theta^2\)
   (f) \(B_\Theta^T\, B_\Theta^T : B_\Theta
        = \bigl(G^{pr} g_{rs}\, Z^s \otimes Z_p\bigr)\bigl(G^{ij} g_{jk}\, Z^k \otimes Z_i\bigr) : \bigl(G^{lm} g_{mn}\, Z_l \otimes Z^n\bigr)\)
       \(= \bigl(G^{pr} g_{rs}\, G^{ij} g_{jk}\, \delta^k_p\; Z^s \otimes Z_i\bigr) : \bigl(G^{lm} g_{mn}\, Z_l \otimes Z^n\bigr)\)
       \(= \bigl(G^{pr} g_{rs}\, G^{ij} g_{jp}\; Z^s \otimes Z_i\bigr) : \bigl(G^{lm} g_{mn}\, Z_l \otimes Z^n\bigr)\)
       \(= G^{pr} g_{rs}\, G^{ij} g_{jp}\, G^{lm} g_{mn}\, \delta^s_l\, \delta^n_i
        = G^{pr} g_{rs}\, G^{sm} g_{mn}\, G^{nj} g_{jp} = \operatorname{tr} B_\Theta^3\)
6.6 The Moving Trihedron, Derivatives and Space Curves

6.6.1 The Problem

A spiral staircase is given by the sketch in figure (6.13).

[Sketch: global basis \(e_1, e_2, e_3\); radius \(r\); gradient angle \(\alpha\); fixed support at the bottom; top at the height \(h/2\).]
Figure 6.13: The given spiral staircase.

The relation between the gradient angle \(\alpha\) and the overall height \(h\) of the spiral staircase is given by
\[
\tan \alpha = \frac{h}{2 \pi r} ,
\]
if \(h\) is the height of a \(360^\circ\) spiral staircase; here the spiral staircase is just about \(180^\circ\). The spiral staircase has a fixed support at the bottom, and the central line is represented by the variable \(\theta^1\), which starts at the top of the spiral staircase.

- Compute the tangent \(t\), the normal \(n\), and the binormal vector \(b\) w.r.t. the variable \(\varphi\).
- Determine the curvature \(\kappa = 1/\rho\) and the torsion \(\tau\) of the curve w.r.t. the variable \(\varphi\).
- Compute the Christoffel symbols \(\Gamma^r_{i1}\), for \(i, r = 1, 2, 3\).
- Describe the forces and moments in a sectional area w.r.t. the basis given by the moving trihedron, with the following conditions,
  \[
  M = M^i\, a_i \;,\ \text{resp.}\ N = N^i\, a_i \;,\quad\text{with}\quad
  M^i = \tilde{M}^i(\varphi) \;,\ \text{resp.}\ N^i = \tilde{N}^i(\varphi) \;,\quad\text{and}\quad
  [a_1, a_2, a_3] = [t, n, b] .
  \]
- Compute the resulting forces and moments at a sectional area given by \(\varphi = 130^\circ\). Consider a load vector given in the global Cartesian coordinate system by
  \[
  \tilde{R} = -q r\, e_3 ,
  \]
  at a point \(S\) given by the angle \(\pi/2\), and the radius \(r_S\). This load may be a combination of the self-weight of the spiral staircase and the payload of its usage.
6.6.2 The Base Vectors

The winding up of the spiral staircase is given by the sketch in figure (6.14).

[Sketch: the central line of length \(\theta^1\), developed over the arc length \(r \varphi\) and rising the height \(h/2 = a \pi\), for the angle \(\varphi\) between zero and a half rotation.]
Figure 6.14: The winding up of the given spiral staircase.

With the Pythagorean theorem, see also the sketch in figure (6.14), the relationship between the variable \(\varphi\) and the variable \(\theta^1\) along the central line is given by
\[
\theta^1 = \sqrt{a^2 \varphi^2 + r^2 \varphi^2} \;,\quad\text{and}\quad a = \frac{h}{2\pi} .
\]
This implies
\[
\varphi = \frac{1}{\sqrt{a^2 + r^2}}\; \theta^1 ,
\]
and with the definition of the cosine
\[
\cos \alpha = \frac{r \varphi}{\theta^1} = \frac{r}{\sqrt{a^2 + r^2}} ,
\]
finally the relationship between the variables \(\varphi\) and \(\theta^1\) is given by
\[
\varphi = \frac{\cos \alpha}{r}\; \theta^1 = c\, \theta^1 \;,\quad\text{with}\quad
c = \frac{\cos \alpha}{r} = \frac{1}{\sqrt{r^2 + a^2}} .
\]
With this relation it is easy to see that every expression depending on \(\varphi\) also depends on \(\theta^1\); this is important later on for computing some derivatives. Any arbitrary point on the central line of the spiral staircase could be represented by a vector of position \(x\) in the Cartesian coordinate system,
\[
x = x^i\, e_i \;,\quad\text{and}\quad x^i = x^i\bigl(\theta^1\bigr) ,
\]
or
\[
x = x^i\, e_i \;,\quad\text{and}\quad x^i = x^i(r, \varphi) .
\]
The three components of the vector of position \(x\) are given by
\[
x^1 = r \cos \varphi \;,\quad
x^2 = r \sin \varphi \;,\quad
x^3 = \frac{h}{2}\, \Bigl(1 - \frac{\varphi}{\pi}\Bigr) ,
\]
and the complete vector is given by
\[
x = x^i\, e_i = x^i(r, \varphi)\, e_i
  = (r \cos \varphi)\, e_1 + (r \sin \varphi)\, e_2
  + \frac{h}{2}\, \Bigl(1 - \frac{\varphi}{\pi}\Bigr)\, e_3
  \;,\quad\text{and}\quad \varphi = c\, \theta^1 .
\]
The tangent vector \(t = a_1\) is defined by
\[
t = a_1 = \frac{dx}{d\theta^1} = \frac{\partial x}{\partial \theta^1} ,
\]
and with \(\dfrac{d\varphi}{d\theta^1} = c\) this implies
\[
t = a_1 = \begin{pmatrix} -r \sin \varphi \\ r \cos \varphi \\ -\dfrac{h}{2\pi} \end{pmatrix} c
\quad\Rightarrow\quad
t = a_1 = \begin{pmatrix} -c r \sin \varphi \\ c r \cos \varphi \\ -c a \end{pmatrix} .
\]
The absolute value of this vector is given by
\[
|t| = |a_1| = c\, \sqrt{r^2 \sin^2 \varphi + r^2 \cos^2 \varphi + a^2}
    = c\, \sqrt{r^2 + a^2} = 1 ,
\]
i.e. the tangent vector \(t\) is already a unit vector! For the normal unit vector \(n = a_2\) first the normal vector \(n^\ast\) is computed by
\[
n^\ast = \frac{dt}{d\theta^1} = \frac{da_1}{d\theta^1} = \frac{d^2 x}{(d\theta^1)^2} ,
\]
and this implies, with the result for the vector \(a_1\),
\[
n^\ast = -c^2 r \begin{pmatrix} \cos \varphi \\ \sin \varphi \\ 0 \end{pmatrix} .
\]
The absolute value of this vector is given by
\[
|n^\ast| = c^2 r\, \sqrt{\cos^2 \varphi + \sin^2 \varphi} = c^2 r = \frac{1}{\rho} ,
\]
and with this finally the normal unit vector is given by
\[
n = a_2 = \rho\, n^\ast = - \begin{pmatrix} \cos \varphi \\ \sin \varphi \\ 0 \end{pmatrix} .
\]
The binormal vector \(b = a_3\) is defined by
\[
b = t \times n \;,\quad\text{resp.}\quad a_3 = a_1 \times a_2 ,
\]
and with the definition of the cross product, represented by the expansion about the first column, the binormal is given by
\[
b = \begin{vmatrix}
e_1 & -c r \sin \varphi & -\cos \varphi \\
e_2 & \phantom{-}c r \cos \varphi & -\sin \varphi \\
e_3 & -c a & 0
\end{vmatrix}
= \begin{pmatrix} -c a \sin \varphi \\ c a \cos \varphi \\ c r \end{pmatrix} .
\]
The absolute value of this vector is given by
\[
|b| = c\, \sqrt{a^2 \sin^2 \varphi + a^2 \cos^2 \varphi + r^2}
    = c\, \sqrt{a^2 + r^2} = 1 ,
\]
and with this the binormal vector \(b\) is already a unit vector, given by
\[
b = a_3 = \frac{c}{2\pi} \begin{pmatrix} -h \sin \varphi \\ h \cos \varphi \\ 2 \pi r \end{pmatrix}
\quad\Rightarrow\quad
b = a_3 = \begin{pmatrix} -c a \sin \varphi \\ c a \cos \varphi \\ c r \end{pmatrix} .
\]
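The trihedron can be checked numerically. The sketch below (plain Python; \(r\) and \(h\) are assumed example values) evaluates \(t\), \(n\), and \(b\) from the formulas above and confirms that they form a right-handed orthonormal frame.

```python
# Moving trihedron t, n, b of the central line, with phi = c theta^1.
import math

r, h = 1.4, 3.0            # assumed example dimensions
a = h / (2.0 * math.pi)
c = 1.0 / math.sqrt(r * r + a * a)

def trihedron(phi):
    t = (-c * r * math.sin(phi), c * r * math.cos(phi), -c * a)
    n = (-math.cos(phi), -math.sin(phi), 0.0)
    b = (-c * a * math.sin(phi), c * a * math.cos(phi), c * r)
    return t, n, b

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

t, n, b = trihedron(0.7)   # some angle phi
```

All three vectors have unit length, they are mutually orthogonal, and \(t \times n\) reproduces \(b\), exactly as derived above.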
6.6.3 The Curvature and the Torsion

The curvature \(\kappa = 1/\rho\) of a curve in space is given by
\[
\kappa = \frac{1}{\rho} = |n^\ast| \;,\quad\text{and}\quad
n^\ast = \frac{dt}{d\theta^1} = \frac{da_1}{d\theta^1} = \frac{d^2 x}{(d\theta^1)^2} ,
\]
and in this case it implies a constant valued curvature
\[
\kappa = \frac{1}{\rho} = |n^\ast| = c^2 r = \text{constant} .
\]
The torsion \(\tau\) of a curve in space is defined by the first derivative of the binormal vector,
\[
b = t \times n \;,\quad\text{with}\quad b_{,1} = \tau\, n ,
\]
and here the derivative
\[
b_{,1} = \begin{pmatrix} -c^2 a \cos \varphi \\ -c^2 a \sin \varphi \\ 0 \end{pmatrix} = c^2 a\, n
\]
implies a constant torsion, too,
\[
\tau = c^2 a = \text{constant} .
\]
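The constant curvature can also be recovered without the closed-form derivatives. The sketch below (plain Python; \(r\) and \(h\) are assumed example values) differentiates the position vector twice by central finite differences and compares \(|d t / d\theta^1|\) with \(c^2 r\).

```python
# Finite-difference check that the curvature of the central line is c^2 r.
import math

r, h = 1.4, 3.0            # assumed example dimensions
a = h / (2.0 * math.pi)
c = 1.0 / math.sqrt(r * r + a * a)

def x(theta1):
    phi = c * theta1
    return (r * math.cos(phi), r * math.sin(phi),
            0.5 * h * (1.0 - phi / math.pi))

def tangent(theta1, eps=1e-6):
    x0, x1 = x(theta1 - eps), x(theta1 + eps)
    return tuple((p - q) / (2.0 * eps) for p, q in zip(x1, x0))

def curvature(theta1, eps=1e-4):
    t0, t1 = tangent(theta1 - eps), tangent(theta1 + eps)
    d = tuple((p - q) / (2.0 * eps) for p, q in zip(t1, t0))
    return math.sqrt(sum(v * v for v in d))   # |dt/dtheta1| = 1/rho

kappa = curvature(2.0)
```

Because \(\theta^1\) is the arc length of the curve (\(|t| = 1\)), the magnitude of \(dt/d\theta^1\) is directly the curvature.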
6.6.4 The Christoffel Symbols

The Christoffel symbols are defined by the derivatives of the base vectors, here the moving trihedron given by \([a_1, a_2, a_3] = [t, n, b]\),
\[
a_{i,1} = \Gamma^r_{i1}\, a_r \quad\bigl|\;\cdot\, a^k
\quad\Rightarrow\quad
a_{i,1} \cdot a^k = \Gamma^r_{i1}\, a_r \cdot a^k = \Gamma^r_{i1}\, \delta^k_r ,
\]
and finally the definition of a single Christoffel symbol is given by
\[
\Gamma^k_{i1} = a_{i,1} \cdot a^k .
\]
This definition implies that it is only necessary to compute the scalar products of the base vectors and their first derivatives, in order to determine the Christoffel symbols. For this reason, in a first step all the first derivatives w.r.t. the variable \(\theta^1\) of the base vectors of the moving trihedron are determined. The first derivative of the base vector \(a_1\) is given by
\[
a_{1,1} = \frac{\partial a_1}{\partial \theta^1}
= \begin{pmatrix} -c r \cos \varphi \\ -c r \sin \varphi \\ 0 \end{pmatrix} c
= -c^2 r \begin{pmatrix} \cos \varphi \\ \sin \varphi \\ 0 \end{pmatrix}
= c^2 r\, a_2 ,
\]
the first derivative of the second base vector \(a_2\) is given by
\[
a_{2,1} = \frac{\partial a_2}{\partial \theta^1}
= c \begin{pmatrix} \sin \varphi \\ -\cos \varphi \\ 0 \end{pmatrix} ,
\]
and finally the first derivative of the third base vector \(a_3\) is given by
\[
a_{3,1} = \frac{\partial a_3}{\partial \theta^1}
= c \begin{pmatrix} -c a \cos \varphi \\ -c a \sin \varphi \\ 0 \end{pmatrix}
= -c^2 a \begin{pmatrix} \cos \varphi \\ \sin \varphi \\ 0 \end{pmatrix}
= c^2 a\, a_2 .
\]
Because the moving trihedron \(a_i\) is an orthonormal basis, it is not necessary to differentiate between the co- and contravariant base vectors, i.e. \(a^i = a_i\), and with this the definition of the Christoffel symbols is given by
\[
\Gamma^k_{i1} = a_{i,1} \cdot a^k = a_{i,1} \cdot a_k .
\]
The various Christoffel symbols are computed like this,
\[
\Gamma^1_{11} = a_{1,1} \cdot a_1 = c^2 r\, a_2 \cdot a_1 = 0 ,\qquad
\Gamma^2_{11} = a_{1,1} \cdot a_2 = c^2 r\, a_2 \cdot a_2 = c^2 r ,\qquad
\Gamma^3_{11} = a_{1,1} \cdot a_3 = c^2 r\, a_2 \cdot a_3 = 0 ,
\]
\[
\Gamma^1_{21} = a_{2,1} \cdot a_1
= c \begin{pmatrix} \sin \varphi \\ -\cos \varphi \\ 0 \end{pmatrix} \cdot
  \begin{pmatrix} -c r \sin \varphi \\ c r \cos \varphi \\ -c a \end{pmatrix}
= -c^2 r ,\qquad
\Gamma^2_{21} = a_{2,1} \cdot a_2
= c \begin{pmatrix} \sin \varphi \\ -\cos \varphi \\ 0 \end{pmatrix} \cdot
  \begin{pmatrix} -\cos \varphi \\ -\sin \varphi \\ 0 \end{pmatrix}
= 0 ,
\]
\[
\Gamma^3_{21} = a_{2,1} \cdot a_3
= c \begin{pmatrix} \sin \varphi \\ -\cos \varphi \\ 0 \end{pmatrix} \cdot
  \begin{pmatrix} -c a \sin \varphi \\ c a \cos \varphi \\ c r \end{pmatrix}
= -c^2 a ,
\]
\[
\Gamma^1_{31} = a_{3,1} \cdot a_1
= \begin{pmatrix} -c^2 a \cos \varphi \\ -c^2 a \sin \varphi \\ 0 \end{pmatrix} \cdot
  \begin{pmatrix} -c r \sin \varphi \\ c r \cos \varphi \\ -c a \end{pmatrix}
= 0 ,\qquad
\Gamma^2_{31} = a_{3,1} \cdot a_2
= \begin{pmatrix} -c^2 a \cos \varphi \\ -c^2 a \sin \varphi \\ 0 \end{pmatrix} \cdot
  \begin{pmatrix} -\cos \varphi \\ -\sin \varphi \\ 0 \end{pmatrix}
= c^2 a ,
\]
\[
\Gamma^3_{31} = a_{3,1} \cdot a_3
= \begin{pmatrix} -c^2 a \cos \varphi \\ -c^2 a \sin \varphi \\ 0 \end{pmatrix} \cdot
  \begin{pmatrix} -c a \sin \varphi \\ c a \cos \varphi \\ c r \end{pmatrix}
= 0 .
\]
With these results the coefficient matrix of the Christoffel symbols could be represented by
\[
\bigl[\Gamma^r_{i1}\bigr] = \begin{pmatrix}
0 & c^2 r & 0 \\
-c^2 r & 0 & -c^2 a \\
0 & c^2 a & 0
\end{pmatrix} .
\]
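The coefficient matrix can be reproduced without any hand differentiation. The sketch below (plain Python; \(r\) and \(h\) are assumed example values) differentiates the trihedron by central finite differences and evaluates \(\Gamma^k_{i1} = a_{i,1} \cdot a_k\) directly.

```python
# Finite-difference evaluation of Gamma^k_i1 = a_i,1 . a_k for the trihedron.
import math

r, h = 1.4, 3.0            # assumed example dimensions
a = h / (2.0 * math.pi)
c = 1.0 / math.sqrt(r * r + a * a)

def frame(theta1):
    phi = c * theta1
    t = (-c * r * math.sin(phi), c * r * math.cos(phi), -c * a)
    n = (-math.cos(phi), -math.sin(phi), 0.0)
    b = (-c * a * math.sin(phi), c * a * math.cos(phi), c * r)
    return [t, n, b]

def dot(u, v):
    return sum(p * q for p, q in zip(u, v))

def gamma(theta1, eps=1e-6):
    f0, f1 = frame(theta1 - eps), frame(theta1 + eps)
    d = [[(p - q) / (2.0 * eps) for p, q in zip(f1[i], f0[i])]
         for i in range(3)]                       # a_i,1 by central differences
    return [[dot(d[i], ak) for ak in frame(theta1)] for i in range(3)]

G = gamma(1.3)
```

The resulting matrix is skew-symmetric, as it must be for the derivative of an orthonormal frame, and its nonzero entries are exactly \(\pm c^2 r\) and \(\pm c^2 a\).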
6.6.5 Forces and Moments at an Arbitrary Sectional Area

An arbitrary line element of the spiral staircase is given by the points \(P\) and \(Q\). These points are represented by the vectors of position \(x\), and \(x + dx\). At the point \(P\) the moving trihedron is given by the orthonormal base vectors \(t\), \(n\), and \(b\). The forces \(N\), \(N + dN\), and the moments \(M\), \(M + dM\) at the sectional areas are given like in the sketch of figure (6.15). The line element is loaded by a vector \(f\, d\theta^1\). The equilibrium of forces in vector notation is given by
\[
N + dN - N + f\, d\theta^1 = 0
\quad\Rightarrow\quad
dN + f\, d\theta^1 = 0 ,
\]
with the first derivative of the force vector w.r.t. the variable \(\theta^1\) represented by
\[
dN = \frac{\partial N}{\partial \theta^1}\, d\theta^1 = N_{,1}\, d\theta^1 ,
\]
the equilibrium condition becomes
\[
\bigl(N_{,1} + f\bigr)\, d\theta^1 = 0 .
\]

[Sketch: the origin \(O\) with the global basis \(e_1, e_2, e_3\); the points \(P\) and \(Q\) with the position vectors \(x = x(\theta^1)\) and \(x + dx = x(\theta^1) + a_1\, d\theta^1\), i.e. \(dx = a_1\, d\theta^1\); the moving trihedron \(t = a_1(\theta^1)\), \(n = a_2(\theta^1)\), \(b = a_3(\theta^1)\) at \(P\); the sectional forces \(N\) and \(N + dN\), the sectional moments \(M\) and \(M + dM\), and the line load \(f\, d\theta^1\).]
Figure 6.15: An arbitrary line element with the forces, and moments in its sectional areas.

This equation rewritten in index notation, with the force vector \(N = N^i a_i\) given in the basis of the moving trihedron at point \(P\), implies
\[
\Bigl[\bigl(N^i a_i\bigr)_{,1} + f^i a_i\Bigr]\, d\theta^1 = 0 ,
\]
with the chain rule,
\[
N^i_{,1}\, a_i + N^i\, a_{i,1} + f^i a_i = 0
\quad\Rightarrow\quad
N^i_{,1}\, a_i + N^i\, \Gamma^k_{i1}\, a_k + f^i a_i = 0 ,
\]
and after renaming the dummy indices,
\[
\bigl(N^i_{,1} + N^k\, \Gamma^i_{k1}\bigr)\, a_i + f^i a_i = 0 .
\]
With the covariant derivative defined by
\[
N^i_{,1} + N^k\, \Gamma^i_{k1} = N^i|_1 ,
\]
the equilibrium condition could be rewritten in index notation only for the components,
\[
N^i|_1 + f^i = N^i_{,1} + N^k\, \Gamma^i_{k1} + f^i = 0 .
\]
This equation system represents three component equations, one for each direction of the basis of the moving trihedron, first for \(i = 1\), and \(k = 1, \ldots, 3\),
\[
N^1_{,1} + N^1 \Gamma^1_{11} + N^2 \Gamma^1_{21} + N^3 \Gamma^1_{31} + f^1 = 0 ,
\]
with the computed values for the Christoffel symbols from above,
\[
N^1_{,1} + 0 + N^2 \bigl(-c^2 r\bigr) + 0 + f^1 = 0 ,
\]
and finally
\[
N^1_{,1} - c^2 r\, N^2 + f^1 = 0 .
\]
The second case for \(i = 2\), and \(k = 1, \ldots, 3\) implies
\[
N^2_{,1} + N^1 \Gamma^2_{11} + N^2 \Gamma^2_{21} + N^3 \Gamma^2_{31} + f^2 = 0 ,
\]
\[
N^2_{,1} + N^1 \bigl(c^2 r\bigr) + 0 + N^3 \bigl(c^2 a\bigr) + f^2 = 0 ,
\]
\[
N^2_{,1} + c^2 r\, N^1 + c^2 a\, N^3 + f^2 = 0 .
\]
The third case for \(i = 3\), and \(k = 1, \ldots, 3\) implies
\[
N^3_{,1} + N^1 \Gamma^3_{11} + N^2 \Gamma^3_{21} + N^3 \Gamma^3_{31} + f^3 = 0 ,
\]
\[
N^3_{,1} + 0 + N^2 \bigl(-c^2 a\bigr) + 0 + f^3 = 0 ,
\]
\[
N^3_{,1} - c^2 a\, N^2 + f^3 = 0 .
\]
All together the coefficient scheme of the equilibrium of forces in the basis of the moving trihedron is given by
\[
\begin{pmatrix}
N^1_{,1} - c^2 r\, N^2 + f^1 \\
N^2_{,1} + c^2 r\, N^1 + c^2 a\, N^3 + f^2 \\
N^3_{,1} - c^2 a\, N^2 + f^3
\end{pmatrix}
= \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix} .
\]
The equilibrium of moments in vector notation w.r.t. the point \(P\) is given by
\[
-M + M + dM + a_1\, d\theta^1 \times (N + dN)
+ \tfrac{1}{2}\, a_1\, d\theta^1 \times f\, d\theta^1 = 0 ,
\]
\[
dM + a_1\, d\theta^1 \times N + a_1 \times dN\, d\theta^1
+ \tfrac{1}{2}\, a_1 \times f\, d\theta^1\, d\theta^1 = 0 ,
\]
and in linear theory, i.e. neglecting the higher order terms, e.g. terms with \(dN\, d\theta^1\), and with \(d\theta^1\, d\theta^1\), the equilibrium of moments is given by
\[
dM + a_1\, d\theta^1 \times N = 0 .
\]
With the first derivative of the moment vector w.r.t. the variable \(\theta^1\) given by
\[
dM = \frac{\partial M}{\partial \theta^1}\, d\theta^1 = M_{,1}\, d\theta^1 ,
\]
the equilibrium condition becomes
\[
\bigl(M_{,1} + a_1 \times N\bigr)\, d\theta^1 = 0 .
\]
The cross product of the first base vector \(a_1\) of the moving trihedron and the force vector \(N\) is given by
\[
a_1 \times N = a_1 \times \bigl(N^i a_i\bigr)
= N^1\, a_1 \times a_1 + N^2\, a_1 \times a_2 + N^3\, a_1 \times a_3
= 0 + N^2\, a_3 + N^3\, \bigl(-a_2\bigr)
= N^2\, a_3 - N^3\, a_2 ,
\]
\[
a_1 \times N = N^2\, a_3 - N^3\, a_2 ,
\]
because the base vectors \(a_i\) form an orthonormal basis. The following steps are the same as the ones for the force equilibrium equations,
\[
\bigl(M_{,1} + N^2\, a_3 - N^3\, a_2\bigr)\, d\theta^1 = 0 ,
\]
\[
M^i_{,1}\, a_i + M^i\, a_{i,1} + N^2\, a_3 - N^3\, a_2 = 0 ,
\]
and finally
\[
\bigl(M^i_{,1} + M^k\, \Gamma^i_{k1}\bigr)\, a_i + N^2\, a_3 - N^3\, a_2
= M^i|_1\, a_i + N^2\, a_3 - N^3\, a_2 = 0 .
\]
Again this equation system represents three equations, one for each direction of the basis of the moving trihedron, first for \(i = 1\), i.e. in the direction of the base vector \(a_1\), and \(k = 1, \ldots, 3\),
\[
M^1_{,1} + M^1 \Gamma^1_{11} + M^2 \Gamma^1_{21} + M^3 \Gamma^1_{31} = 0 ,
\]
\[
M^1_{,1} + 0 + M^2 \bigl(-c^2 r\bigr) + 0 = 0 ,
\]
\[
M^1_{,1} - c^2 r\, M^2 = 0 ,
\]
in direction of the base vector \(a_2\), i.e. for \(i = 2\), and \(k = 1, \ldots, 3\),
\[
M^2_{,1} + M^1 \Gamma^2_{11} + M^2 \Gamma^2_{21} + M^3 \Gamma^2_{31} - N^3 = 0 ,
\]
\[
M^2_{,1} + M^1 \bigl(c^2 r\bigr) + 0 + M^3 \bigl(c^2 a\bigr) - N^3 = 0 ,
\]
\[
M^2_{,1} + c^2 r\, M^1 + c^2 a\, M^3 - N^3 = 0 ,
\]
and finally in the direction of the third base vector \(a_3\), i.e. \(i = 3\), and \(k = 1, \ldots, 3\),
\[
M^3_{,1} + M^1 \Gamma^3_{11} + M^2 \Gamma^3_{21} + M^3 \Gamma^3_{31} + N^2 = 0 ,
\]
\[
M^3_{,1} + 0 + M^2 \bigl(-c^2 a\bigr) + 0 + N^2 = 0 ,
\]
\[
M^3_{,1} - c^2 a\, M^2 + N^2 = 0 .
\]
All together the coefficient scheme of the equilibrium of moments in the basis of the moving trihedron is given by
\[
\begin{pmatrix}
M^1_{,1} - c^2 r\, M^2 \\
M^2_{,1} + c^2 r\, M^1 + c^2 a\, M^3 - N^3 \\
M^3_{,1} - c^2 a\, M^2 + N^2
\end{pmatrix}
= \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix} .
\]
6.6.6 Forces and Moments for the Given Load

The unknown forces \(N\) and moments \(M\) w.r.t. the natural basis \(a_i\) should be computed, i.e.
\[
N = N^i\, a_i \;,\quad\text{and}\quad M = M^i\, a_i .
\]
The load \(f\) is given in the global Cartesian coordinate system by
\[
f = \tilde{f}^i\, e_i ,
\]
and is acting at a point \(S\), given by an angle \(\pi/2\), and a radius \(r_S\), see also the free-body diagram given by figure (6.16). The free-body diagram and the load vector are given in the global Cartesian basis, i.e. the load vector is given by
\[
\tilde{R} = \tilde{R}^i\, e_i \;,\quad\text{with}\quad
\begin{pmatrix} \tilde{R}^1 \\ \tilde{R}^2 \\ \tilde{R}^3 \end{pmatrix}
= \begin{pmatrix} 0 \\ 0 \\ -q r \end{pmatrix} .
\]

[Sketch: global basis \(e_1, e_2, e_3\); the load point \(S\) with the components \(\tilde{R}^1, \tilde{R}^2, \tilde{R}^3\) of the load vector \(\tilde{R}\) and the radius \(r_S\); the sectional area \(T\) at the angle \(\varphi\) with the sectional quantities \(\tilde{N}^1, \tilde{N}^2, \tilde{N}^3\) and \(\tilde{M}^1, \tilde{M}^2, \tilde{M}^3\); radius \(r\); fixed support at the bottom; top at the height \(h/2\).]
Figure 6.16: The free-body diagram of the loaded spiral staircase.

First the equilibrium conditions of forces in the directions of the base vectors \(e_i\) of the global Cartesian coordinate system are established,
\[
\sum F_{e_1} = 0 : \quad \tilde{N}^1 + \tilde{R}^1 = 0 \quad\Rightarrow\quad \tilde{N}^1 = 0 ,
\]
\[
\sum F_{e_2} = 0 : \quad \tilde{N}^2 + \tilde{R}^2 = 0 \quad\Rightarrow\quad \tilde{N}^2 = 0 ,
\]
\[
\sum F_{e_3} = 0 : \quad \tilde{N}^3 + \tilde{R}^3 = 0 \quad\Rightarrow\quad \tilde{N}^3 = -\tilde{R}^3 .
\]
Then the equilibrium conditions of moments in the directions of the base vectors \(e_i\) of the global Cartesian coordinate system w.r.t. the point \(T\) are established,
\[
\sum M^{(T)}_{e_1} = 0 : \quad
\tilde{R}^3 \Bigl( r \sin\bigl(\varphi - \tfrac{\pi}{2}\bigr) + r_S \sin \tfrac{\pi}{2} \Bigr) + \tilde{M}^1 = 0
\quad\Rightarrow\quad
\tilde{M}^1 = -\tilde{R}^3 \Bigl( r \sin\bigl(\varphi - \tfrac{\pi}{2}\bigr) + r_S \sin \tfrac{\pi}{2} \Bigr) ,
\]
\[
\sum M^{(T)}_{e_2} = 0 : \quad
\tilde{R}^3 \Bigl( r_S \cos \tfrac{\pi}{2} + r \cos\bigl(\varphi - \tfrac{\pi}{2}\bigr) \Bigr) + \tilde{M}^2 = 0
\quad\Rightarrow\quad
\tilde{M}^2 = -\tilde{R}^3 \Bigl( r_S \cos \tfrac{\pi}{2} + r \cos\bigl(\varphi - \tfrac{\pi}{2}\bigr) \Bigr) ,
\]
\[
\sum M^{(T)}_{e_3} = 0 : \quad \tilde{M}^3 = 0 .
\]
Finally the resulting equations of equilibrium are given by
\[
\tilde{N} = \begin{pmatrix} 0 \\ 0 \\ -\tilde{R}^3 \end{pmatrix}
\;,\quad\text{and}\quad
\tilde{M} = \begin{pmatrix}
-\tilde{R}^3 \bigl( r \sin(\varphi - \tfrac{\pi}{2}) + r_S \sin \tfrac{\pi}{2} \bigr) \\
-\tilde{R}^3 \bigl( r_S \cos \tfrac{\pi}{2} + r \cos(\varphi - \tfrac{\pi}{2}) \bigr) \\
0
\end{pmatrix} .
\]
Now the problem is that the equilibrium conditions are given in the global Cartesian coordinate system, but the results should be described in the basis of the moving trihedron. For this reason it is necessary to transform the results from above, i.e. \(N = \tilde{N}^i e_i\), and \(M = \tilde{M}^i e_i\), into \(N = N^i a_i\), and \(M = M^i a_i\)! The Cartesian basis \(e_i\) should be transformed by a tensor \(S\) into the basis \(a_i\),
\[
S = S^{rs}\, e_r \otimes e_s
\quad\Rightarrow\quad
a_i = S\, e_i = S^{rs}\, \delta_{si}\, e_r = S^r_{.i}\, e_r ,
\]
i.e. in matrix notation
\[
[a_i] = \bigl[S^r_{.i}\bigr]^T [e_r] \;,\quad\text{with}\quad
\bigl[S^r_{.i}\bigr] = \begin{pmatrix}
-c r \sin \varphi & -\cos \varphi & -c a \sin \varphi \\
\phantom{-}c r \cos \varphi & -\sin \varphi & \phantom{-}c a \cos \varphi \\
-c a & 0 & c r
\end{pmatrix} ,
\]
see also the definitions for the base vectors of the moving trihedron in the sections above! Then the retransformation from the basis of the moving trihedron into the global Cartesian basis should be given by \(S^{-1} = T\), with
\[
e_i = T\, a_i \;,\quad\text{with}\quad T = T^r_{.s}\, a_r \otimes a_s ,
\]
\[
e_i = \bigl(T^r_{.s}\, a_r \otimes a_s\bigr)\, a_i = T^r_{.s}\, \delta^s_i\, a_r = T^r_{.i}\, a_r .
\]
Comparing these transformation relations implies
\[
a_i = S^r_{.i}\, e_r = S^r_{.i}\, T^m_{.r}\, a_m \quad\bigl|\;\cdot\, a^k ,
\]
\[
\delta^k_i = S^r_{.i}\, T^m_{.r}\, \delta^k_m ,
\]
\[
\delta^k_i = T^k_{.r}\, S^r_{.i} ,
\]
and in matrix notation
\[
\bigl[\delta^k_i\bigr] = \bigl[T^k_{.r}\bigr] \bigl[S^r_{.i}\bigr]
\quad\Rightarrow\quad
\bigl[T^k_{.r}\bigr] = \bigl[S^r_{.i}\bigr]^{-1} .
\]
Because both bases, i.e. \(e_i\) and \(a_i\), are orthonormal bases, the tensor \(T\) must describe an orthonormal rotation! The tensor of an orthonormal rotation is characterized by
\[
T^{-1} = T^T \;,\quad\text{resp.}\quad T = S^{-1} = S^T ,
\]
i.e. in matrix notation
\[
\bigl[T^r_{.i}\bigr] = \bigl[S^r_{.i}\bigr]^{-1} = \bigl[S^r_{.i}\bigr]^T
= \begin{pmatrix}
-c r \sin \varphi & c r \cos \varphi & -c a \\
-\cos \varphi & -\sin \varphi & 0 \\
-c a \sin \varphi & c a \cos \varphi & c r
\end{pmatrix} .
\]
With this the relations between the base vectors of the different bases could be given by
\[
a_i = S^r_{.i}\, e_r \;,\quad\text{resp.}\quad e_r = T^k_{.r}\, a_k ,
\]
and e.g. in detail
\[
[e_r] = \bigl[T^k_{.r}\bigr]^T [a_k]
= \begin{pmatrix}
-c r \sin \varphi & -\cos \varphi & -c a \sin \varphi \\
\phantom{-}c r \cos \varphi & -\sin \varphi & \phantom{-}c a \cos \varphi \\
-c a & 0 & c r
\end{pmatrix}
\begin{pmatrix} a_1 \\ a_2 \\ a_3 \end{pmatrix} ,
\]
or
\[
e_1 = -c r \sin \varphi\, a_1 - \cos \varphi\, a_2 - c a \sin \varphi\, a_3 ,
\]
\[
e_2 = \phantom{-}c r \cos \varphi\, a_1 - \sin \varphi\, a_2 + c a \cos \varphi\, a_3 ,
\]
\[
e_3 = -c a\, a_1 + c r\, a_3 .
\]
With this it is easy to compute the final results, i.e. to transform \(N = \tilde{N}^i e_i\), and \(M = \tilde{M}^i e_i\), into \(N = N^i a_i\), and \(M = M^i a_i\). With the known transformation \(e_i = T^k_{.i}\, a_k\) the force vector could be represented by
\[
N = \tilde{N}^i\, e_i = \tilde{N}^i\, T^k_{.i}\, a_k = N^k\, a_k .
\]
Comparing only the coefficients implies
\[
N^k = T^k_{.i}\, \tilde{N}^i ,
\]
and with this the coefficients of the force vector \(N\) w.r.t. the basis of the moving trihedron in the sectional area at point \(T\) are given by
\[
\begin{pmatrix} N^1 \\ N^2 \\ N^3 \end{pmatrix}
= \begin{pmatrix}
-c r \sin \varphi & c r \cos \varphi & -c a \\
-\cos \varphi & -\sin \varphi & 0 \\
-c a \sin \varphi & c a \cos \varphi & c r
\end{pmatrix}
\begin{pmatrix} 0 \\ 0 \\ -\tilde{R}^3 \end{pmatrix}
\quad\Rightarrow\quad
\begin{pmatrix} N^1 \\ N^2 \\ N^3 \end{pmatrix}
= \begin{pmatrix} c a\, \tilde{R}^3 \\ 0 \\ -c r\, \tilde{R}^3 \end{pmatrix} .
\]
By the analogous comparison the coefficients of the moment vector \(M\) w.r.t. the basis of the moving trihedron in the sectional area at point \(T\) are given by
\[
\begin{pmatrix} M^1 \\ M^2 \\ M^3 \end{pmatrix}
= \begin{pmatrix}
-c r \sin \varphi & c r \cos \varphi & -c a \\
-\cos \varphi & -\sin \varphi & 0 \\
-c a \sin \varphi & c a \cos \varphi & c r
\end{pmatrix}
\begin{pmatrix} \tilde{M}^1 \\ \tilde{M}^2 \\ 0 \end{pmatrix}
\quad\Rightarrow\quad
\begin{pmatrix} M^1 \\ M^2 \\ M^3 \end{pmatrix}
= \begin{pmatrix}
c r\, \bigl( -\tilde{M}^1 \sin \varphi + \tilde{M}^2 \cos \varphi \bigr) \\
-\bigl( \tilde{M}^1 \cos \varphi + \tilde{M}^2 \sin \varphi \bigr) \\
c a\, \bigl( -\tilde{M}^1 \sin \varphi + \tilde{M}^2 \cos \varphi \bigr)
\end{pmatrix} .
\]
6.7 Tensors, Stresses and Cylindrical Coordinates

6.7.1 The Problem

The given cylindrical shell is described by the parameter lines \(\theta^i\), with \(i = 1, 2, 3\).

[Sketch: global basis \(e_1, e_2, e_3\); corner points \((0,0,0)\), \((0,2,0)\), \((8,2,0)\), \((8,0,0)\), \((8,0,4)\); the parameter lines \(\theta^1\), \(\theta^2\), \(\theta^3\); the point \(P\) with its base vectors \(g_1, g_2, g_3\) and the position vector \(x\); dimensions in \([\mathrm{cm}]\).]
Figure 6.17: The given cylindrical shell.

The relations between the Cartesian coordinates and the parameters, i.e. the curvilinear coordinates, are given by the vector of position \(x\),
\[
x^1 = \bigl(5 - \theta^2\bigr)\, \sin\Bigl(\frac{\pi\, \theta^1}{a}\Bigr) \;,\quad
x^2 = -\theta^3 \;,\quad\text{and}\quad
x^3 = \bigl(5 - \theta^2\bigr)\, \cos\Bigl(\frac{\pi\, \theta^1}{a}\Bigr) ,
\]
where \(a = 8.0\) is a constant length. At the point \(P\) defined by
\[
P = P\Bigl(\theta^1 = \frac{8}{10}\,;\; \theta^2 = \frac{2}{10}\,;\; \theta^3 = 2\Bigr) ,
\]
the stress tensor \(\sigma\) (for the geometrically linear theory) is given by
\[
\sigma = \sigma^{ij}\, g_i \otimes g_j \;,\quad\text{with}\quad
\bigl[\sigma^{ij}\bigr] = \begin{pmatrix} 6 & 0 & 2 \\ 0 & 0 & 0 \\ 2 & 0 & 5 \end{pmatrix} .
\]
The vector of position \(x\) in the Cartesian coordinate system is given by
\[
x = x^i\, e_i ,
\]
and the normal vector \(n\) in the Cartesian coordinate system at the point \(P\) is given by
\[
n = \frac{g_1 + g_3}{|g_1 + g_3|} = n^r\, e_r .
\]

- Compute the covariant base vectors \(g_i\) and the contravariant base vectors \(g^i\) at the point \(P\).
- Determine the coefficients of the tensor \(\sigma\) w.r.t. the basis in mixed formulation, \(\bigl(g_i \otimes g^k\bigr)\), and w.r.t. the Cartesian basis, \(\bigl(e_i \otimes e_k\bigr)\).
- Work out the physical components from the contravariant stress tensor.
- Determine the invariants,
  \[
  I_\sigma = \operatorname{tr} \sigma \;,\quad
  II_\sigma = \tfrac{1}{2}\, \Bigl[ (\operatorname{tr} \sigma)^2 - \operatorname{tr} (\sigma)^2 \Bigr] \;,\quad
  III_\sigma = \det \sigma ,
  \]
  for the three different representations of the stress tensor \(\sigma\).
- Calculate the principal stresses and the principal stress directions.
- Compute the specific deformation energy \(W_{\mathrm{spec}} = \tfrac{1}{2}\, \sigma : \varepsilon\) at the point \(P\), with
  \[
  \varepsilon = \frac{1}{100}\, \bigl(g_{ik} - \delta_{ik}\bigr)\, g^i \otimes g^k .
  \]
- Determine the stress vector \(t_n\) at the point \(P\) w.r.t. the sectional area given by the normal vector \(n\). Furthermore calculate the normal stress, and the resulting shear stress, for this direction.
6.7.2 Co- and Contravariant Base Vectors

The vector of position \(x\) is given w.r.t. the Cartesian base vectors \(e_i\). The covariant base vectors \(g_i\) of the curvilinear coordinate system are defined by the first derivatives of the vector of position w.r.t. the parameters \(\theta^i\) of the curvilinear coordinate lines in space, i.e.
\[
g_k = \frac{\partial x}{\partial \theta^k} = \frac{\partial x^i}{\partial \theta^k}\, e_i .
\]
With this definition the covariant base vectors are computed by
\[
g_1 = \frac{\partial x}{\partial \theta^1}
    = \bigl(5 - \theta^2\bigr)\, \frac{\pi}{a}\, \cos\Bigl(\frac{\pi\, \theta^1}{a}\Bigr)\, e_1
    + 0\, e_2
    - \bigl(5 - \theta^2\bigr)\, \frac{\pi}{a}\, \sin\Bigl(\frac{\pi\, \theta^1}{a}\Bigr)\, e_3 ,
\]
\[
g_2 = \frac{\partial x}{\partial \theta^2}
    = -\sin\Bigl(\frac{\pi\, \theta^1}{a}\Bigr)\, e_1 + 0\, e_2
    - \cos\Bigl(\frac{\pi\, \theta^1}{a}\Bigr)\, e_3 ,
\]
\[
g_3 = \frac{\partial x}{\partial \theta^3} = 0\, e_1 + (-1)\, e_2 + 0\, e_3 ,
\]
and finally the covariant base vectors of the curvilinear coordinate system are given by
\[
g_1 = \bigl(5 - \theta^2\bigr)\, \frac{\pi}{a}
      \begin{pmatrix} \cos(\pi\theta^1/a) \\ 0 \\ -\sin(\pi\theta^1/a) \end{pmatrix} ,\quad
g_2 = \begin{pmatrix} -\sin(\pi\theta^1/a) \\ 0 \\ -\cos(\pi\theta^1/a) \end{pmatrix} ,\quad
g_3 = \begin{pmatrix} 0 \\ -1 \\ 0 \end{pmatrix} ,
\]
or by
\[
g_1 = \frac{1}{b} \begin{pmatrix} \cos c \\ 0 \\ -\sin c \end{pmatrix} ,\quad
g_2 = \begin{pmatrix} -\sin c \\ 0 \\ -\cos c \end{pmatrix} ,\quad
g_3 = \begin{pmatrix} 0 \\ -1 \\ 0 \end{pmatrix} ,
\]
with the abbreviations
\[
b = \frac{a}{\pi\, (5 - \theta^2)} = \frac{5}{3\pi} \;,\quad\text{and}\quad
c = \frac{\pi}{a}\, \theta^1 = \frac{\pi}{10} .
\]
In order to determine the contravariant base vectors of the curvilinear coordinate system, it is necessary to multiply the covariant base vectors with the contravariant metric coefficients. The contravariant metric coefficients \(g^{ik}\) could be computed by the inverse of the covariant metric coefficients \(g_{ik}\),
\[
\bigl[g^{ik}\bigr] = \bigl[g_{ik}\bigr]^{-1} .
\]
So the first step is to compute the covariant metric coefficients \(g_{ik}\),
\[
g_{ik} = g_i \cdot g_k \;,\quad\text{i.e.}\quad
\bigl[g_{ik}\bigr] = \begin{pmatrix} \dfrac{1}{b^2} & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} .
\]
The relationship between the co- and contravariant metric coefficients is used in its inverse form, in order to compute the contravariant metric coefficients,
\[
\bigl[g^{ik}\bigr] = \bigl[g_{ik}\bigr]^{-1} \;,\quad\text{resp.}\quad
\bigl[g^{ik}\bigr] = \begin{pmatrix} b^2 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} .
\]
TU Braunschweig, CSE Vector and Tensor Calculus 22nd October 2003
6.7. Tensors, Stresses and Cylindrical Coordinates 213
The contravariant base vectors g
i
are dened by
g
i
= g
ik
g
k
,
and for i = 1, . . . , 3 the relations between co- and contravariant base vectors are given by
g
1
= b
2
g
1
, g
2
= g
2
, and g
3
= g
3
,
and nally with abbreviations
g
1
= b
_
_
cos c
0
sin c
_
_
, g
2
=
_
_
sin c
0
cos c
_
_
, and g
3
=
_
_
0
1
0
_
_
,
or in detail
g
1
=
(5
2
)
a
_
_
cos
_

1
_
0
sin
_

1
_
_
_
, g
2
=
_
_
sin
_

1
_
0
cos
_

1
_
_
_
, and g
3
=
_
_
0
1
0
_
_
.
At the given point P the co- and contravariant base vectors g
i
, and g
i
, are given by
g
1
=
3
5
_
_
cos
_

10
_
0
sin
_

10
_
_
_
, g
1
=
5
3
_
_
cos
_

10
_
0
sin
_

10
_
_
_
,
g
2
= g
2
=
_
_
sin
_

10
_
0
cos
_

10
_
_
_
, and
g
3
= g
3
=
_
_
0
1
0
_
_
.
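As a numerical cross-check (not part of the original exercise), the base vectors above can be reproduced with NumPy. The surface $x(\theta)$, the value $a = 5$, and the point $P = (a/10,\, 2,\, 2)$ are the values reconstructed in this section and should be treated as assumptions of the sketch:

```python
import numpy as np

# Assumed reconstruction: x1 = (5 - t2) sin(pi t1 / a), x2 = -t3,
# x3 = (5 - t2) cos(pi t1 / a), with a = 5 and P at (a/10, 2, 2).
a = 5.0
P = np.array([a / 10.0, 2.0, 2.0])

def x(t):
    t1, t2, t3 = t
    return np.array([(5.0 - t2) * np.sin(np.pi * t1 / a),
                     -t3,
                     (5.0 - t2) * np.cos(np.pi * t1 / a)])

# Covariant base vectors g_k = dx/dtheta^k by central differences (columns).
h = 1e-6
G_co = np.column_stack([(x(P + h * e) - x(P - h * e)) / (2 * h)
                        for e in np.eye(3)])

# Closed-form result with b = a/((5 - t2) pi) = 5/(3 pi) and c = pi t1/a = pi/10.
b, c = 5.0 / (3.0 * np.pi), np.pi / 10.0
g1 = (1.0 / b) * np.array([np.cos(c), 0.0, -np.sin(c)])
g2 = np.array([-np.sin(c), 0.0, -np.cos(c)])
g3 = np.array([0.0, -1.0, 0.0])
assert np.allclose(G_co, np.column_stack([g1, g2, g3]), atol=1e-6)

# Covariant metric [g_ik] = diag(1/b^2, 1, 1); contravariant metric as inverse.
g_co = G_co.T @ G_co
assert np.allclose(g_co, np.diag([1.0 / b**2, 1.0, 1.0]), atol=1e-5)
g_contra = np.linalg.inv(g_co)

# Contravariant base vectors g^i = g^{ik} g_k satisfy g^i . g_k = delta^i_k.
G_contra = G_co @ g_contra.T          # column i holds g^i
assert np.allclose(G_contra.T @ G_co, np.eye(3), atol=1e-5)
assert np.allclose(G_contra[:, 0], b**2 * g1, atol=1e-5)
```

The duality check $g^i \cdot g_k = \delta^i_k$ in the last assertions is the quickest way to confirm the reconstructed signs.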
6.7.3 Coefficients of the Various Stress Tensors

The stress tensor is given by the covariant basis $g_i$ of the curvilinear coordinate system, and the contravariant coefficients $\sigma^{ij}$. The stress tensor w.r.t. the mixed basis is determined by
\[
\sigma = \sigma^{im}\, g_i \otimes g_m \;, \quad \text{and} \quad g_m = g_{mk}\, g^k \;,
\]
\[
\sigma = \sigma^{im} g_{mk}\; g_i \otimes g^k = \sigma^{i}_{\;k}\; g_i \otimes g^k \;,
\]
i.e. the coefficient matrix is given by
\[
\left[ \sigma^{i}_{\;k} \right] = \left[ \sigma^{im} \right] \left[ g_{mk} \right]
= \begin{bmatrix} 6 & 0 & 2 \\ 0 & 0 & 0 \\ 2 & 0 & 5 \end{bmatrix}
  \begin{bmatrix} \frac{1}{b^2} & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} .
\]
Solving this matrix product implies the coefficient matrix $\left[ \sigma^{i}_{\;k} \right]$ w.r.t. the basis $g_i \otimes g^k$, i.e. the stress tensor w.r.t. the mixed basis is given by
\[
\sigma = \sigma^{i}_{\;k}\, g_i \otimes g^k \;, \quad \text{with} \quad
\left[ \sigma^{i}_{\;k} \right] =
\begin{bmatrix} \frac{6}{b^2} & 0 & 2 \\ 0 & 0 & 0 \\ \frac{2}{b^2} & 0 & 5 \end{bmatrix} ,
\]
and finally at the point $P$ the stress tensor w.r.t. the mixed basis is given by
\[
\sigma = \sigma^{i}_{\;k}\, g_i \otimes g^k \;, \quad \text{with} \quad
\left[ \sigma^{i}_{\;k} \right] =
\begin{bmatrix} \frac{54\pi^2}{25} & 0 & 2 \\ 0 & 0 & 0 \\ \frac{18\pi^2}{25} & 0 & 5 \end{bmatrix} .
\]
The relationships between the Cartesian coordinate system and the curvilinear coordinate system are described by
\[
g_i = B \cdot e_i \;, \quad \text{and} \quad B = B_{mn}\, e_m \otimes e_n \;,
\]
\[
g_i = \left( B_{mn}\, e_m \otimes e_n \right) \cdot e_i = B_{mn} \delta_{ni}\, e_m = B_{mi}\, e_m \;,
\]
and because of the identity of co- and contravariant base vectors in the Cartesian coordinate system,
\[
g_i = B_{ki}\, e_k = B_{ki}\, e^k \;.
\]
This equation represented by the coefficient matrices is given by
\[
\left[ g_i \right] = \left[ B_{ki} \right]^T \left[ e_k \right] , \quad \text{with} \quad
\left[ B_{ki} \right] =
\begin{bmatrix}
\frac{1}{b}\cos c & -\sin c & 0 \\
0 & 0 & -1 \\
-\frac{1}{b}\sin c & -\cos c & 0
\end{bmatrix} ,
\]
see also the definition of the covariant base vectors above. The stress tensor w.r.t. the Cartesian basis is computed by
\[
\sigma = \sigma^{ik}\, g_i \otimes g_k \;, \quad \text{and} \quad
g_i = B_{ri}\, e_r \;, \quad \text{resp.} \quad g_k = B_{sk}\, e_s \;,
\]
\[
\sigma = \sigma^{ik} B_{ri} B_{sk}\; e_r \otimes e_s \;,
\]
and with the abbreviation
\[
\sigma_{rs} = B_{ri}\, \sigma^{ik}\, B_{sk} \;,
\]
the coefficient matrix of the stress tensor w.r.t. the Cartesian basis is defined by
\[
\left[ \sigma_{rs} \right] = \left[ B_{ri} \right] \left[ \sigma^{ik} \right] \left[ B_{sk} \right]^T
=
\begin{bmatrix}
\frac{1}{b}\cos c & -\sin c & 0 \\
0 & 0 & -1 \\
-\frac{1}{b}\sin c & -\cos c & 0
\end{bmatrix}
\begin{bmatrix} 6 & 0 & 2 \\ 0 & 0 & 0 \\ 2 & 0 & 5 \end{bmatrix}
\begin{bmatrix}
\frac{1}{b}\cos c & 0 & -\frac{1}{b}\sin c \\
-\sin c & 0 & -\cos c \\
0 & -1 & 0
\end{bmatrix} .
\]
Solving this matrix product implies the coefficient matrix of the stress tensor w.r.t. the Cartesian basis, i.e. the stress tensor w.r.t. the Cartesian basis is given by
\[
\sigma = \sigma_{ik}\, e_i \otimes e_k \;, \quad \text{with} \quad
\left[ \sigma_{rs} \right] =
\begin{bmatrix}
\frac{6}{b^2}\cos^2 c & -\frac{2}{b}\cos c & -\frac{6}{b^2}\sin c \cos c \\[2pt]
-\frac{2}{b}\cos c & 5 & \frac{2}{b}\sin c \\[2pt]
-\frac{6}{b^2}\sin c \cos c & \frac{2}{b}\sin c & \frac{6}{b^2}\sin^2 c
\end{bmatrix} ,
\]
and finally at the point $P$ the stress tensor w.r.t. the Cartesian basis is given by
\[
\sigma = \sigma_{ik}\, e_i \otimes e_k \;, \quad \text{with} \quad
\left[ \sigma_{rs} \right] =
\begin{bmatrix}
\frac{54\pi^2}{25}\cos^2\frac{\pi}{10} & -\frac{6\pi}{5}\cos\frac{\pi}{10} & -\frac{54\pi^2}{25}\sin\frac{\pi}{10}\cos\frac{\pi}{10} \\[2pt]
-\frac{6\pi}{5}\cos\frac{\pi}{10} & 5 & \frac{6\pi}{5}\sin\frac{\pi}{10} \\[2pt]
-\frac{54\pi^2}{25}\sin\frac{\pi}{10}\cos\frac{\pi}{10} & \frac{6\pi}{5}\sin\frac{\pi}{10} & \frac{54\pi^2}{25}\sin^2\frac{\pi}{10}
\end{bmatrix} .
\]
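The two basis changes above can be verified numerically. The following sketch assumes the reconstructed values $b = 5/(3\pi)$ and $c = \pi/10$ and the sign pattern of $[B_{ki}]$ derived in this section:

```python
import numpy as np

# Same stress tensor in the mixed curvilinear basis and the Cartesian basis.
b, c = 5.0 / (3.0 * np.pi), np.pi / 10.0
S_contra = np.array([[6.0, 0.0, 2.0],   # sigma^{ik}
                     [0.0, 0.0, 0.0],
                     [2.0, 0.0, 5.0]])
g_co = np.diag([1.0 / b**2, 1.0, 1.0])  # [g_mk]

# Mixed components sigma^i_k = sigma^{im} g_mk.
S_mixed = S_contra @ g_co
assert np.allclose(S_mixed, [[6 / b**2, 0, 2], [0, 0, 0], [2 / b**2, 0, 5]])

# Cartesian components sigma_rs = B_ri sigma^{ik} B_sk; column i of B holds
# the Cartesian components of g_i.
B = np.column_stack([(1 / b) * np.array([np.cos(c), 0, -np.sin(c)]),
                     np.array([-np.sin(c), 0, -np.cos(c)]),
                     np.array([0.0, -1.0, 0.0])])
S_cart = B @ S_contra @ B.T
expected = np.array([
    [6 / b**2 * np.cos(c)**2, -2 / b * np.cos(c), -6 / b**2 * np.sin(c) * np.cos(c)],
    [-2 / b * np.cos(c), 5.0, 2 / b * np.sin(c)],
    [-6 / b**2 * np.sin(c) * np.cos(c), 2 / b * np.sin(c), 6 / b**2 * np.sin(c)**2]])
assert np.allclose(S_cart, expected)

# A change of basis leaves the trace of the (1,1)-tensor unchanged.
assert np.isclose(np.trace(S_cart), np.trace(S_mixed))
```

The trace comparison at the end anticipates the invariance result of Section 6.7.5.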
6.7.4 Physical Components of the Contravariant Stress Tensor

The physical components of a tensor are defined by
\[
\sigma^{*ik} = \sigma^{ik}\, \frac{\sqrt{g_{(k)(k)}}}{\sqrt{g^{(i)(i)}}} \;,
\]
see also the lecture notes. The physical components $\sigma^{*ik}$ of a tensor $\sigma = \sigma^{ik}\, g_i \otimes g_k$ take into account that the base vectors of an arbitrary curvilinear coordinate system do not have to be unit vectors. In Cartesian coordinate systems the base vectors $e_i$ do not influence the physical value of the components of the coefficient matrix of a tensor, because the base vectors are unit vectors and orthogonal to each other. But in general coordinates the base vectors do influence the physical value of the components of the coefficient matrix, because in general they are neither unit vectors nor orthogonal to each other. Here the contravariant stress tensor is given by
\[
\sigma = \sigma^{ik}\, g_i \otimes g_k \;, \quad \text{with} \quad
\left[ \sigma^{ik} \right] = \begin{bmatrix} 6 & 0 & 2 \\ 0 & 0 & 0 \\ 2 & 0 & 5 \end{bmatrix} .
\]
In order to compute the physical components of the stress tensor $\sigma$, it is necessary to evaluate the definition given above. The numerator and denominator of this definition are given by the square roots of the co- and contravariant metric coefficients $g_{(i)(i)}$, and $g^{(i)(i)}$, i.e.
\[
\sqrt{g_{11}} = \frac{1}{b} \;, \quad \sqrt{g_{22}} = 1 \;, \quad \sqrt{g_{33}} = 1 \;,
\]
\[
\sqrt{g^{11}} = b \;, \quad \sqrt{g^{22}} = 1 \;, \quad \sqrt{g^{33}} = 1 \;.
\]
Finally the coefficient matrix of the physical components of the contravariant stress tensor $\sigma = \sigma^{ik}\, g_i \otimes g_k$ is given by
\[
\sigma^{*ik} = \sigma^{ik}\, \frac{\sqrt{g_{(k)(k)}}}{\sqrt{g^{(i)(i)}}} \;, \quad \text{and} \quad
\left[ \sigma^{*ik} \right] =
\begin{bmatrix} \frac{6}{b^2} & 0 & \frac{2}{b} \\ 0 & 0 & 0 \\ \frac{2}{b} & 0 & 5 \end{bmatrix} .
\]
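Because the metric here is diagonal, the definition above reduces to scaling each component $\sigma^{ik}$ by the lengths $|g_i|\,|g_k|$. A minimal check, again assuming $b = 5/(3\pi)$:

```python
import numpy as np

# Physical components sigma*^{ik} = sigma^{ik} |g_i| |g_k| (no summation),
# using |g_1| = 1/b, |g_2| = |g_3| = 1 from the diagonal metric.
b = 5.0 / (3.0 * np.pi)
S_contra = np.array([[6.0, 0.0, 2.0],
                     [0.0, 0.0, 0.0],
                     [2.0, 0.0, 5.0]])
scale = np.array([1.0 / b, 1.0, 1.0])        # sqrt(g_(i)(i))
S_phys = S_contra * np.outer(scale, scale)
assert np.allclose(S_phys, [[6 / b**2, 0, 2 / b],
                            [0, 0, 0],
                            [2 / b, 0, 5]])
```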
6.7.5 Invariants

The stress tensor could be described w.r.t. the three following bases, i.e.
\[
\sigma = \sigma^{ik}\, g_i \otimes g_k \;, \quad \text{w.r.t. the covariant basis of the curvilinear coordinate system,}
\]
\[
\sigma = \sigma^{i}_{\;k}\, g_i \otimes g^k \;, \quad \text{w.r.t. the mixed basis of the curvilinear coordinate system, and}
\]
\[
\sigma = \sigma_{ik}\, e_i \otimes e_k \;, \quad \text{w.r.t. the basis of the Cartesian coordinate system.}
\]
The first invariant $I_\sigma$ of the stress tensor is defined by the trace of the stress tensor, i.e.
\[
I_\sigma = \operatorname{tr} \sigma = \sigma : 1 \;.
\]
The first invariant w.r.t. the covariant basis of the curvilinear coordinate system is given by
\[
I_\sigma = \operatorname{tr} \sigma
= \left( \sigma^{ik}\, g_i \otimes g_k \right) : \left( g_{ml}\, g^l \otimes g^m \right)
= \sigma^{ik} g_{ml} \delta^l_i \delta^m_k
= \sigma^{ik} g_{ki} = \sigma^{i}_{\;i} \;,
\]
\[
I_\sigma = \operatorname{tr} \sigma = \sigma^{i}_{\;i} = \frac{6}{b^2} + 5 \;.
\]
The first invariant w.r.t. the mixed basis of the curvilinear coordinate system is given by
\[
I_\sigma = \operatorname{tr} \sigma
= \left( \sigma^{i}_{\;k}\, g_i \otimes g^k \right) : \left( g_{ml}\, g^l \otimes g^m \right)
= \sigma^{i}_{\;k}\, g_{ml} \delta^l_i g^{km}
= \sigma^{i}_{\;k}\, \delta^k_i = \sigma^{i}_{\;i} \;,
\]
\[
I_\sigma = \operatorname{tr} \sigma = \sigma^{i}_{\;i} = \frac{6}{b^2} + 5 \;.
\]
The first invariant w.r.t. the basis of the Cartesian coordinate system is given by
\[
I_\sigma = \operatorname{tr} \sigma
= \left( \sigma_{ik}\, e_i \otimes e_k \right) : \left( \delta_{ml}\, e_m \otimes e_l \right)
= \sigma_{ik} \delta_{ml} \delta_{im} \delta_{kl} = \sigma_{ii} \;,
\]
\[
I_\sigma = \operatorname{tr} \sigma = \sigma_{ii} = \frac{6}{b^2} + 5 \;.
\]
The second invariant $II_\sigma$ of the stress tensor is defined by half the difference between the square of the trace and the trace of the square of the stress tensor, i.e.
\[
II_\sigma = \frac{1}{2} \left[ \left( \operatorname{tr} \sigma \right)^2 - \operatorname{tr} \sigma^2 \right] .
\]
First, in order to determine $\operatorname{tr} \sigma^2$, it is necessary to compute $\sigma^2$, i.e.
\[
\sigma^2 = \left( \sigma^{ik}\, g_i \otimes g_k \right) \cdot \left( \sigma^{rs}\, g_r \otimes g_s \right)
= \sigma^{ik} \sigma^{rs} g_{kr}\; g_i \otimes g_s \;,
\]
and then
\[
\operatorname{tr} \sigma^2 = \sigma^2 : 1
= \left( \sigma^{ik} \sigma^{rs} g_{kr}\; g_i \otimes g_s \right) : \left( g_{ml}\, g^l \otimes g^m \right)
= \sigma^{ik} \sigma^{rs} g_{kr} g_{si} = \sigma^{i}_{\;r} \sigma^{r}_{\;i} \;,
\]
\[
\operatorname{tr} \sigma^2 = \left( \frac{6}{b^2} \right)^2 + 2 \left( 2 \cdot \frac{2}{b^2} \right) + 25
= \frac{36}{b^4} + \frac{8}{b^2} + 25 \;,
\]
or in just one step
\[
\operatorname{tr} \sigma^2 = \sigma^T : \sigma
= \left( \sigma^{ki}\, g_i \otimes g_k \right) : \left( \sigma^{rs}\, g_r \otimes g_s \right)
= \sigma^{ki} \sigma^{rs} g_{ir} g_{ks} = \sigma^{k}_{\;r} \sigma^{r}_{\;k} \;.
\]
Finally the second invariant w.r.t. the covariant basis of the curvilinear coordinate system is given by
\[
II_\sigma = \frac{1}{2} \left[ \left( \frac{6}{b^2} + 5 \right)^2 - \left( \frac{36}{b^4} + \frac{8}{b^2} + 25 \right) \right]
= \frac{1}{2} \left[ \frac{36}{b^4} + \frac{60}{b^2} + 25 - \frac{36}{b^4} - \frac{8}{b^2} - 25 \right] ,
\]
\[
II_\sigma = \frac{26}{b^2} \;.
\]
Again, in order to determine $\operatorname{tr} \sigma^2$ w.r.t. the mixed basis, it is necessary to compute $\sigma^2$, i.e.
\[
\sigma^2 = \left( \sigma^{i}_{\;k}\, g_i \otimes g^k \right) \cdot \left( \sigma^{r}_{\;s}\, g_r \otimes g^s \right)
= \sigma^{i}_{\;k} \sigma^{r}_{\;s} \delta^k_r\; g_i \otimes g^s
= \sigma^{i}_{\;k} \sigma^{k}_{\;s}\; g_i \otimes g^s \;,
\]
and then
\[
\operatorname{tr} \sigma^2 = \sigma^2 : 1
= \left( \sigma^{i}_{\;k} \sigma^{k}_{\;s}\; g_i \otimes g^s \right) : \left( g_{ml}\, g^l \otimes g^m \right)
= \sigma^{i}_{\;k} \sigma^{k}_{\;s}\, g_{ml} \delta^l_i g^{sm} = \sigma^{i}_{\;k} \sigma^{k}_{\;i} \;.
\]
This intermediate result is the same as above, and the first invariant, i.e. the trace $\operatorname{tr} \sigma$ and $\left( \operatorname{tr} \sigma \right)^2$, too, is equal for the different bases, i.e. all further steps are the same as above. Combining all this finally implies that the second invariant w.r.t. the mixed basis of the curvilinear coordinate system is given by
\[
II_\sigma = \frac{26}{b^2} \;, \quad \text{too.}
\]
Again, in order to determine $\operatorname{tr} \sigma^2$ w.r.t. the Cartesian basis, it is necessary to compute $\sigma^2$, i.e.
\[
\sigma^2 = \left( \sigma_{ik}\, e_i \otimes e_k \right) \cdot \left( \sigma_{rs}\, e_r \otimes e_s \right)
= \sigma_{ik} \sigma_{rs} \delta_{kr}\; e_i \otimes e_s
= \sigma_{ik} \sigma_{ks}\; e_i \otimes e_s \;,
\]
and then
\[
\operatorname{tr} \sigma^2 = \sigma^2 : 1
= \sigma_{ik} \sigma_{ks} \delta_{is} = \sigma_{ik} \sigma_{ki} \;.
\]
Solving this implies the same intermediate result,
\[
\operatorname{tr} \sigma^2
= \frac{36}{b^4} \cos^4 c
+ 2 \left( \frac{4}{b^2} \cos^2 c \right)
+ 2 \left( \frac{36}{b^4} \sin^2 c \cos^2 c \right)
+ 2 \left( \frac{4}{b^2} \sin^2 c \right)
+ \frac{36}{b^4} \sin^4 c + 25
\]
\[
= \frac{36}{b^4} \left( \cos^4 c + 2 \sin^2 c \cos^2 c + \sin^4 c \right)
+ \frac{8}{b^2} \left( \cos^2 c + \sin^2 c \right) + 25
= \frac{36}{b^4} + \frac{8}{b^2} + 25 \;,
\]
i.e. the further steps are the same as above. With this, the second invariant w.r.t. the basis of the Cartesian coordinate system is given by
\[
II_\sigma = \frac{26}{b^2} \;, \quad \text{too.}
\]
The third invariant $III_\sigma$ of the stress tensor is defined by the determinant of the stress tensor, i.e.
\[
III_\sigma = \det \sigma
= \frac{1}{6} \left( \operatorname{tr} \sigma \right)^3
- \frac{1}{2} \left( \operatorname{tr} \sigma \right) \left( \operatorname{tr} \sigma^2 \right)
+ \frac{1}{3} \left( \sigma \cdot \sigma \right)^T : \sigma \;.
\]
In order to compute the third invariant it is necessary to evaluate the three terms in the summation of this definition. The first term is given by
\[
\frac{1}{6} \left( \operatorname{tr} \sigma \right)^3
= \frac{1}{6} \left( \frac{6}{b^2} + 5 \right)^3
= \frac{1}{6} \left( \frac{216}{b^6} + \frac{540}{b^4} + \frac{450}{b^2} + 125 \right)
= \frac{36}{b^6} + \frac{90}{b^4} + \frac{75}{b^2} + \frac{125}{6} \;,
\]
and the second term is given by
\[
- \frac{1}{2} \left( \operatorname{tr} \sigma \right) \left( \operatorname{tr} \sigma^2 \right)
= - \frac{1}{2} \left( \frac{6}{b^2} + 5 \right) \left( \frac{36}{b^4} + \frac{8}{b^2} + 25 \right)
= - \left( \frac{108}{b^6} + \frac{114}{b^4} + \frac{95}{b^2} + \frac{125}{2} \right) .
\]
The third term is not so easy to compute, because it does not reduce to a trace, but combines a scalar product with a tensor product involving the transpose of the stress tensor, i.e. in index notation w.r.t. the covariant basis
\[
\left( \sigma \cdot \sigma \right)^T : \sigma = \sigma^{k}_{\;l}\, \sigma^{l}_{\;s}\, \sigma^{s}_{\;k} \;.
\]
In order to solve this equation, first the new tensor $\tilde{\sigma} = \sigma \cdot \sigma$ is computed; its contravariant coefficient matrix is
\[
\left[ \tilde{\sigma}^{si} \right] = \left[ \sigma^{s}_{\;k} \right] \left[ \sigma^{ki} \right]
= \begin{bmatrix} \frac{6}{b^2} & 0 & 2 \\ 0 & 0 & 0 \\ \frac{2}{b^2} & 0 & 5 \end{bmatrix}
  \begin{bmatrix} 6 & 0 & 2 \\ 0 & 0 & 0 \\ 2 & 0 & 5 \end{bmatrix}
= \begin{bmatrix}
\frac{36}{b^2} + 4 & 0 & \frac{12}{b^2} + 10 \\
0 & 0 & 0 \\
\frac{12}{b^2} + 10 & 0 & \frac{4}{b^2} + 25
\end{bmatrix} .
\]
In order to evaluate the scalar product $\tilde{\sigma}^T : \sigma = \tilde{\sigma}^{s}_{\;l}\, \sigma^{l}_{\;s}$, first the coefficient matrix of the tensor $\tilde{\sigma}$ w.r.t. the mixed basis is computed by
\[
\left[ \tilde{\sigma}^{s}_{\;l} \right] = \left[ \tilde{\sigma}^{si} \right] \left[ g_{il} \right]
= \begin{bmatrix}
\frac{36}{b^2} + 4 & 0 & \frac{12}{b^2} + 10 \\
0 & 0 & 0 \\
\frac{12}{b^2} + 10 & 0 & \frac{4}{b^2} + 25
\end{bmatrix}
\begin{bmatrix} \frac{1}{b^2} & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}
= \begin{bmatrix}
\frac{36}{b^4} + \frac{4}{b^2} & 0 & \frac{12}{b^2} + 10 \\
0 & 0 & 0 \\
\frac{12}{b^4} + \frac{10}{b^2} & 0 & \frac{4}{b^2} + 25
\end{bmatrix} ,
\]
and then the final result for the third term is given by
\[
\frac{1}{3} \left( \sigma \cdot \sigma \right)^T : \sigma
= \frac{1}{3} \tilde{\sigma}^{s}_{\;l}\, \sigma^{l}_{\;s}
= \frac{1}{3} \left[
\left( \frac{36}{b^4} + \frac{4}{b^2} \right) \frac{6}{b^2}
+ \left( \frac{12}{b^2} + 10 \right) \frac{2}{b^2}
+ 2 \left( \frac{12}{b^4} + \frac{10}{b^2} \right)
+ 5 \left( \frac{4}{b^2} + 25 \right)
\right]
\]
\[
= \frac{72}{b^6} + \frac{24}{b^4} + \frac{20}{b^2} + \frac{125}{3} \;.
\]
Then the complete third invariant w.r.t. the covariant basis of the curvilinear coordinate system is given by
\[
III_\sigma
= \left( \frac{36}{b^6} + \frac{90}{b^4} + \frac{75}{b^2} + \frac{125}{6} \right)
- \left( \frac{108}{b^6} + \frac{114}{b^4} + \frac{95}{b^2} + \frac{125}{2} \right)
+ \left( \frac{72}{b^6} + \frac{24}{b^4} + \frac{20}{b^2} + \frac{125}{3} \right)
\]
\[
= \frac{1}{b^6} \left( 36 - 108 + 72 \right)
+ \frac{1}{b^4} \left( 90 - 114 + 24 \right)
+ \frac{1}{b^2} \left( 75 - 95 + 20 \right)
+ \left( \frac{125}{6} - \frac{125}{2} + \frac{125}{3} \right) ,
\]
\[
III_\sigma = \det \sigma = 0 \;.
\]
For the third invariant w.r.t. the mixed basis of the curvilinear coordinate system the first and second term are already known, because all scalar quantities, like $\operatorname{tr} \sigma$, are still the same, see the first case of determining the third invariant. It is only necessary to have a look at the third term; rewriting it in index notation w.r.t. the mixed basis yields
\[
\left( \sigma \cdot \sigma \right)^T : \sigma = \sigma^{s}_{\;l}\, \sigma^{l}_{\;m}\, \sigma^{m}_{\;s} \;,
\]
i.e. this scalar product is the same as above. With all three terms of the summation given by the same scalar quantities as above, the third invariant w.r.t. the mixed basis of the curvilinear coordinate system is given by
\[
III_\sigma = \det \sigma = 0 \;.
\]
The third case, i.e. the third invariant w.r.t. the basis of the Cartesian coordinate system, is very easy to solve, because in a Cartesian coordinate system it is sufficient to compute the determinant of the coefficient matrix of the tensor. The determinant of the coefficient matrix $\left[ \sigma_{rs} \right]$ is expanded about the first row,
\[
\det \sigma = \det \left[ \sigma_{rs} \right]
= \frac{6}{b^2} \cos^2 c \left( \frac{30}{b^2} \sin^2 c - \frac{4}{b^2} \sin^2 c \right)
+ \frac{2}{b} \cos c \left( - \frac{12}{b^3} \sin^2 c \cos c + \frac{12}{b^3} \sin^2 c \cos c \right)
- \frac{6}{b^2} \sin c \cos c \left( - \frac{4}{b^2} \sin c \cos c + \frac{30}{b^2} \sin c \cos c \right)
\]
\[
= \frac{156}{b^4} \sin^2 c \cos^2 c + 0 - \frac{156}{b^4} \sin^2 c \cos^2 c \;,
\]
\[
\det \sigma = \det \left[ \sigma_{rs} \right] = 0 \;.
\]
The third invariant w.r.t. the basis of the Cartesian coordinate system is given by
\[
III_\sigma = \det \sigma = 0 \;.
\]
The final result is that the invariants of an arbitrary tensor could be computed w.r.t. any basis, and remain the same, i.e.
\[
I_\sigma = \operatorname{tr} \sigma = \sigma : 1 = \frac{6}{b^2} + 5 \;,
\]
\[
II_\sigma = \frac{1}{2} \left[ \left( \operatorname{tr} \sigma \right)^2 - \operatorname{tr} \sigma^2 \right]
= \frac{1}{2} \left[ \left( \sigma : 1 \right)^2 - \sigma^T : \sigma \right] = \frac{26}{b^2} \;,
\]
\[
III_\sigma = \det \sigma
= \frac{1}{6} \left( \sigma : 1 \right)^3
- \frac{1}{2} \left( \sigma : 1 \right) \left( \sigma^T : \sigma \right)
+ \frac{1}{3} \left( \sigma \cdot \sigma \right)^T : \sigma = 0 \;.
\]
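The basis independence of the invariants can be verified numerically. The sketch below assumes the reconstructed values $b = 5/(3\pi)$, $c = \pi/10$; note that the invariants must be evaluated from the mixed (or Cartesian) coefficient matrix, since the purely contravariant matrix alone does not carry the metric:

```python
import numpy as np

b, c = 5.0 / (3.0 * np.pi), np.pi / 10.0
S_contra = np.array([[6.0, 0, 2], [0, 0, 0], [2, 0, 5.0]])
S_mixed = S_contra @ np.diag([1 / b**2, 1, 1])           # sigma^i_k
B = np.column_stack([(1 / b) * np.array([np.cos(c), 0, -np.sin(c)]),
                     [-np.sin(c), 0, -np.cos(c)],
                     [0.0, -1.0, 0.0]])
S_cart = B @ S_contra @ B.T                               # sigma_rs

# Both coefficient matrices represent the same tensor, so I, II, III agree.
for M in (S_mixed, S_cart):
    I1 = np.trace(M)
    I2 = 0.5 * (np.trace(M)**2 - np.trace(M @ M))
    I3 = np.linalg.det(M)
    assert np.isclose(I1, 6 / b**2 + 5)
    assert np.isclose(I2, 26 / b**2)
    assert np.isclose(I3, 0.0, atol=1e-10)
```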
6.7.6 Principal Stresses and Principal Directions

Starting with the stress tensor w.r.t. the Cartesian basis, and a normal unit vector, i.e.
\[
\sigma = \sigma_{ik}\, e_i \otimes e_k \;, \quad \text{and} \quad
n^0 = n_r\, e_r = n^r\, e^r \;,
\]
the eigenvalue problem is given by
\[
\sigma \cdot n^0 = \lambda\, n^0 \;,
\]
and the left-hand side is rewritten in index notation,
\[
\left( \sigma_{ik}\, e_i \otimes e_k \right) \cdot n_r\, e_r
= \sigma_{ik}\, n_r\, \delta_{kr}\, e_i = \sigma_{ir}\, n_r\, e_i \;.
\]
The eigenvalue problem in index notation w.r.t. the Cartesian basis is given by
\[
\sigma_{ik}\, n_k\, e_i = \lambda\, n_k\, e_k \quad \big| \cdot e_l \;,
\]
\[
\sigma_{ik}\, n_k\, \delta_{il} = \lambda\, n_k\, \delta_{kl} \;, \quad
\sigma_{lk}\, n_k = \lambda\, n_l \;,
\]
and finally the characteristic equation is given by
\[
\left( \sigma_{lk} - \lambda \delta_{lk} \right) n_k = 0 \;, \quad
\det \left( \sigma_{ik} - \lambda \delta_{ik} \right) = 0 \;.
\]
The characteristic equation of the eigenvalue problem could be rewritten with the already known invariants, i.e.
\[
\det \left( \sigma_{ik} - \lambda \delta_{ik} \right)
= III_\sigma - \lambda\, II_\sigma + \lambda^2\, I_\sigma - \lambda^3 = 0 \;,
\]
\[
\lambda^3 - I_\sigma\, \lambda^2 + II_\sigma\, \lambda - III_\sigma = 0 \;,
\]
\[
\lambda^3 - \left( \frac{6}{b^2} + 5 \right) \lambda^2 + \frac{26}{b^2}\, \lambda - 0 = 0 \;,
\]
\[
\lambda \left[ \lambda^2 - \left( \frac{6}{b^2} + 5 \right) \lambda + \frac{26}{b^2} \right] = 0 \;,
\]
and this implies the first eigenvalue, i.e. the first principal stress,
\[
\lambda_1 = 0 \;.
\]
The other eigenvalues are computed by solving the quadratic equation
\[
\lambda^2 - \left( \frac{6}{b^2} + 5 \right) \lambda + \frac{26}{b^2}
= \lambda^2 - d\, \lambda + e = 0 \;,
\]
\[
\lambda_{2/3} = \frac{1}{2} \left( \frac{6}{b^2} + 5 \right)
\pm \sqrt{ \frac{1}{4} \left( \frac{6}{b^2} + 5 \right)^2 - \frac{26}{b^2} }
= \frac{1}{2}\, d \pm \sqrt{ \frac{d^2}{4} - e } \;,
\]
and this implies the second and third eigenvalue, i.e. the second and third principal stress,
\[
\lambda_2 = \frac{1}{2}\, d + \sqrt{ \frac{d^2}{4} - e } \;, \quad \text{and} \quad
\lambda_3 = \frac{1}{2}\, d - \sqrt{ \frac{d^2}{4} - e } \;, \quad \text{with} \quad
d = \frac{6}{b^2} + 5 \;, \quad \text{and} \quad e = \frac{26}{b^2} \;.
\]
In order to compute the principal directions, the stress tensor w.r.t. the curvilinear basis, and a normal vector, i.e.
\[
\sigma = \sigma^{ik}\, g_i \otimes g_k \;, \quad \text{and} \quad n = n^r\, g_r \;,
\]
are used; the eigenvalue problem is given by
\[
\sigma \cdot n = \lambda\, n \;,
\]
and the left-hand side is rewritten in index notation,
\[
\sigma \cdot n = \left( \sigma^{ik}\, g_i \otimes g_k \right) \cdot n^r\, g_r
= \sigma^{ik}\, n^r\, g_{kr}\, g_i = \sigma^{i}_{\;r}\, n^r\, g_i \;.
\]
The eigenvalue problem in index notation w.r.t. the curvilinear basis is given by
\[
\sigma^{i}_{\;k}\, n^k\, g_i = \lambda\, n^i\, g_i = \lambda\, n^k\, \delta^i_k\, g_i \;,
\]
\[
\left( \sigma^{i}_{\;k} - \lambda\, \delta^i_k \right) n^k\, g_i = 0 \;, \quad
\left( \sigma^{i}_{\;k} - \lambda\, \delta^i_k \right) n^k = 0 \;,
\]
and in matrix notation
\[
\begin{bmatrix}
\frac{6}{b^2} - \lambda_i & 0 & 2 \\
0 & - \lambda_i & 0 \\
\frac{2}{b^2} & 0 & 5 - \lambda_i
\end{bmatrix}
\begin{bmatrix} n^{i1} \\ n^{i2} \\ n^{i3} \end{bmatrix} = 0 \;.
\]
Combining the first and the last row of this system of equations yields an equation that determines the coefficient $n^{i3}$ depending on the associated principal stress $\lambda_i$, i.e.
\[
\left[ \left( 5 - \lambda_i \right) \left( \frac{6}{b^2} - \lambda_i \right) - \frac{4}{b^2} \right] n^{i3}
= \left[ \frac{26}{b^2} - \lambda_i \left( 5 + \frac{6}{b^2} \right) + \lambda_i^2 \right] n^{i3}
= \left[ e - \lambda_i\, d + \lambda_i^2 \right] n^{i3} = 0 \;.
\]
The coefficient $n^{i2}$ could be computed from the second line of this system of equations, i.e.
\[
\lambda_i\, n^{i2} = 0 \;,
\]
and then the coefficient $n^{i1}$, depending on the associated principal stress $\lambda_i$ and the already known coefficient $n^{i3}$, is given by
\[
n^{i1} = - \frac{2\, n^{i3}}{\frac{6}{b^2} - \lambda_i} \;.
\]
The principal directions associated with the principal stresses are computed by inserting the values $\lambda_i$ into the equations above.

The principal stress $\lambda_1 = 0$ implies
\[
n^{13} = 0 \;, \quad n^{11} = 0 \;, \quad n^{12} = 1 \in \mathbb{R} \; \text{(free)} ,
\]
\[
n_1 = n^1\, g_1 + n^2\, g_2 + n^3\, g_3 = 0 + g_2 + 0 = g_2 \;, \quad
n_1 = \begin{bmatrix} - \sin c \\ 0 \\ - \cos c \end{bmatrix} .
\]
The principal stress $\lambda_2 = \frac{1}{2} d + \sqrt{\frac{d^2}{4} - e}$ implies
\[
n^{23} = 1 \in \mathbb{R} \; \text{(free)} , \quad n^{22} = 0 \;, \quad
n^{21} = - \frac{2}{\frac{6}{b^2} - \lambda_2} = - \frac{1}{2}\, b^2 \left( 5 - \lambda_2 \right) =: \alpha \;,
\]
\[
n_2 = \alpha\, g_1 + 0 + g_3 \;, \quad
n_2 = \begin{bmatrix} \frac{\alpha}{b} \cos c \\ -1 \\ - \frac{\alpha}{b} \sin c \end{bmatrix} , \quad \text{with} \quad
\alpha = - \frac{1}{2}\, b^2 \left( 5 - \lambda_2 \right) .
\]
The principal stress $\lambda_3 = \frac{1}{2} d - \sqrt{\frac{d^2}{4} - e}$ implies
\[
n_3 = \beta\, g_1 + g_3 = \begin{bmatrix} \frac{\beta}{b} \cos c \\ -1 \\ - \frac{\beta}{b} \sin c \end{bmatrix} , \quad \text{with} \quad
\beta = - \frac{1}{2}\, b^2 \left( 5 - \lambda_3 \right) .
\]
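The principal stresses can be cross-checked against a direct symmetric eigensolve of the Cartesian coefficient matrix; this sketch again assumes the reconstructed values $b = 5/(3\pi)$, $c = \pi/10$:

```python
import numpy as np

# Principal stresses from the characteristic equation lam(lam^2 - d lam + e) = 0.
b, c = 5.0 / (3.0 * np.pi), np.pi / 10.0
d, e = 6 / b**2 + 5, 26 / b**2
lam = np.array([0.0,
                0.5 * d + np.sqrt(0.25 * d**2 - e),
                0.5 * d - np.sqrt(0.25 * d**2 - e)])

S_contra = np.array([[6.0, 0, 2], [0, 0, 0], [2, 0, 5.0]])
B = np.column_stack([(1 / b) * np.array([np.cos(c), 0, -np.sin(c)]),
                     [-np.sin(c), 0, -np.cos(c)],
                     [0.0, -1.0, 0.0]])
S_cart = B @ S_contra @ B.T
w, V = np.linalg.eigh(S_cart)                 # eigenvalues in ascending order
assert np.allclose(np.sort(lam), w)

# The principal direction for lam_1 = 0 is n_1 = g_2 (up to sign and scaling).
g2 = np.array([-np.sin(c), 0.0, -np.cos(c)])
assert np.isclose(np.linalg.norm(S_cart @ g2), 0.0, atol=1e-10)
```

Using `eigh` (rather than `eig`) exploits the symmetry of the Cartesian coefficient matrix and returns real, sorted eigenvalues.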
6.7.7 Deformation Energy

The specific deformation energy is defined by
\[
W_{\text{spec}} = \frac{1}{2}\, \sigma : \varepsilon \;, \quad \text{with} \quad
\sigma = \sigma^{ik}\, g_i \otimes g_k \;, \quad \text{and} \quad
\varepsilon = \frac{1}{100} \left( g_{ik} - \delta_{ik} \right) g^i \otimes g^k \;,
\]
and solving this product yields
\[
W_{\text{spec}} = \frac{1}{2}\, \sigma : \varepsilon
= \frac{1}{2} \left( \sigma^{lm}\, g_l \otimes g_m \right) :
\left[ \frac{1}{100} \left( g_{ik} - \delta_{ik} \right) g^i \otimes g^k \right]
= \frac{1}{200}\, \sigma^{lm} \left( g_{ik} - \delta_{ik} \right) \delta^i_l\, \delta^k_m
\]
\[
= \frac{1}{200}\, \sigma^{ik} \left( g_{ik} - \delta_{ik} \right)
= \frac{1}{200} \left( \sigma^{ik} g_{ik} - \sigma^{ik} \delta_{ik} \right) ,
\]
\[
W_{\text{spec}} = \frac{1}{2}\, \sigma : \varepsilon
= \frac{1}{200} \left( \sigma^{i}_{\;i} - \sigma^{ii} \right) ,
\]
because the Kronecker delta $\delta_{ik}$ is given w.r.t. the Cartesian basis, i.e.
\[
\delta_{ik} = \delta^{k}_{i} = \delta^{i}_{k} = \delta^{ik} \;.
\]
With the trace of the stress tensor w.r.t. the mixed basis of the curvilinear coordinate system, and the sum $\sigma^{ii}$ of the contravariant coefficients of the stress tensor, given by
\[
\sigma^{i}_{\;i} = \frac{6}{b^2} + 5 \;, \quad \text{and} \quad \sigma^{ii} = 11 \;,
\]
the specific deformation energy is given by
\[
W_{\text{spec}} = \frac{1}{2}\, \sigma : \varepsilon
= \frac{1}{200} \left( \frac{6}{b^2} - 6 \right) ,
\]
and finally at the point $P$,
\[
W_{\text{spec}} = \frac{1}{200} \left( \frac{54 \pi^2}{25} - 6 \right) \approx 0.0766 \;.
\]
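The full double contraction $\sigma : \varepsilon$ can be evaluated componentwise and compared with the closed form above (assuming the reconstructed value $b = 5/(3\pi)$):

```python
import numpy as np

# W_spec = (1/2) sigma : eps = (1/200) sigma^{ik} (g_ik - delta_ik).
b = 5.0 / (3.0 * np.pi)
S_contra = np.array([[6.0, 0, 2], [0, 0, 0], [2, 0, 5.0]])
g_co = np.diag([1 / b**2, 1.0, 1.0])
eps_co = (g_co - np.eye(3)) / 100.0          # eps_ik w.r.t. g^i (x) g^k
W = 0.5 * np.sum(S_contra * eps_co)          # full contraction sigma^{ik} eps_ik

assert np.isclose(W, (6 / b**2 - 6) / 200.0)
assert np.isclose(W, (54 * np.pi**2 / 25 - 6) / 200.0)
assert abs(W - 0.0766) < 5e-4
```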
6.7.8 Normal and Shear Stress

The normal vector $n$ is defined by
\[
n = \frac{g_1 + g_3}{\left| g_1 + g_3 \right|} \;,
\]
and this implies with
\[
g_1 + g_3 = \begin{bmatrix} \frac{1}{b} \cos c \\ -1 \\ - \frac{1}{b} \sin c \end{bmatrix} , \quad \text{and} \quad
\left| g_1 + g_3 \right| = \sqrt{ \frac{1}{b^2} + 1 } \;,
\]
finally
\[
n = \frac{1}{\sqrt{b^2 + 1}} \begin{bmatrix} \cos c \\ -b \\ - \sin c \end{bmatrix}
= n_r\, e_r = n^r\, e^r \;.
\]
With the stress tensor w.r.t. the Cartesian basis,
\[
\sigma = \sigma_{ik}\, e_i \otimes e_k \;,
\]
the stress vector $t_n$ at the point $P$ is given by
\[
t_n = \sigma \cdot n = \sigma_{ik}\, e_i \otimes e_k \cdot n_r\, e_r
= \sigma_{ik}\, n_r\, \delta_{kr}\, e_i = \sigma_{ik}\, n_k\, e_i \;,
\]
and in index notation
\[
t_n = t_i\, e_i = \sigma_{ik}\, n_k\, e_i \;, \quad \text{i.e.} \quad t_i = \sigma_{ik}\, n_k \;.
\]
The matrix multiplication of the coefficient matrices,
\[
\left[ t_i \right] = \left[ \sigma_{ik} \right] \left[ n_k \right]
=
\begin{bmatrix}
\frac{6}{b^2}\cos^2 c & -\frac{2}{b}\cos c & -\frac{6}{b^2}\sin c \cos c \\[2pt]
-\frac{2}{b}\cos c & 5 & \frac{2}{b}\sin c \\[2pt]
-\frac{6}{b^2}\sin c \cos c & \frac{2}{b}\sin c & \frac{6}{b^2}\sin^2 c
\end{bmatrix}
\frac{1}{\sqrt{b^2 + 1}}
\begin{bmatrix} \cos c \\ -b \\ - \sin c \end{bmatrix} ,
\]
implies finally the stress vector $t_n$ at the point $P$,
\[
t_n = \frac{1}{\sqrt{b^2 + 1}}
\begin{bmatrix}
\left( \frac{6}{b^2} + 2 \right) \cos c \\[2pt]
- \frac{2}{b} - 5b \\[2pt]
- \left( \frac{6}{b^2} + 2 \right) \sin c
\end{bmatrix}
= \frac{1}{b^2 \sqrt{b^2 + 1}}
\begin{bmatrix}
\left( 6 + 2 b^2 \right) \cos c \\
- 2b - 5 b^3 \\
- \left( 6 + 2 b^2 \right) \sin c
\end{bmatrix} .
\]
The normal stress vector is defined by
\[
t_\sigma = \sigma_N\, n \;, \quad \text{with} \quad \sigma_N = \left| t_\sigma \right| = t_n \cdot n \;,
\]
and the shear stress vector is defined by
\[
t_\tau = t_n - t_\sigma \;.
\]
The absolute value of the normal stress vector $t_\sigma$ is computed by
\[
\sigma_N = t_n \cdot n
= \frac{1}{b^2 \sqrt{b^2 + 1}}
\begin{bmatrix}
\left( 6 + 2 b^2 \right) \cos c \\
- 2b - 5 b^3 \\
- \left( 6 + 2 b^2 \right) \sin c
\end{bmatrix}
\cdot
\frac{1}{\sqrt{b^2 + 1}}
\begin{bmatrix} \cos c \\ -b \\ - \sin c \end{bmatrix}
= \frac{5 b^4 + 4 b^2 + 6}{b^2 \left( b^2 + 1 \right)} \;.
\]
This implies the normal stress vector
\[
t_\sigma = \sigma_N\, n
= \frac{5 b^4 + 4 b^2 + 6}{b^2 \left( b^2 + 1 \right) \sqrt{b^2 + 1}}
\begin{bmatrix} \cos c \\ -b \\ - \sin c \end{bmatrix} ,
\]
and the shear stress vector
\[
t_\tau = t_n - t_\sigma
= \frac{1}{b^2 \sqrt{b^2 + 1}}
\begin{bmatrix}
\left( 6 + 2 b^2 \right) \cos c \\
- 2b - 5 b^3 \\
- \left( 6 + 2 b^2 \right) \sin c
\end{bmatrix}
- \frac{5 b^4 + 4 b^2 + 6}{b^2 \left( b^2 + 1 \right) \sqrt{b^2 + 1}}
\begin{bmatrix} \cos c \\ -b \\ - \sin c \end{bmatrix} ,
\]
\[
t_\tau = \frac{4 - 3 b^2}{\left( b^2 + 1 \right) \sqrt{b^2 + 1}}
\begin{bmatrix} \cos c \\ \frac{1}{b} \\ - \sin c \end{bmatrix} .
\]
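The decomposition of $t_n$ into normal and shear parts can be verified numerically; the sketch assumes the reconstructed values $b = 5/(3\pi)$, $c = \pi/10$ and the Cartesian coefficient matrix from Section 6.7.3:

```python
import numpy as np

b, c = 5.0 / (3.0 * np.pi), np.pi / 10.0
S_contra = np.array([[6.0, 0, 2], [0, 0, 0], [2, 0, 5.0]])
B = np.column_stack([(1 / b) * np.array([np.cos(c), 0, -np.sin(c)]),
                     [-np.sin(c), 0, -np.cos(c)],
                     [0.0, -1.0, 0.0]])
S_cart = B @ S_contra @ B.T

# Unit normal n = (g1 + g3)/|g1 + g3| and stress vector t_n = sigma . n.
g1, g3 = B[:, 0], B[:, 2]
n = (g1 + g3) / np.linalg.norm(g1 + g3)
t_n = S_cart @ n
expected_t = (np.array([(6 + 2 * b**2) * np.cos(c),
                        -2 * b - 5 * b**3,
                        -(6 + 2 * b**2) * np.sin(c)])
              / (b**2 * np.sqrt(b**2 + 1)))
assert np.allclose(t_n, expected_t)

# Normal stress, normal stress vector, and shear stress vector.
sigma_n = t_n @ n
assert np.isclose(sigma_n, (5 * b**4 + 4 * b**2 + 6) / (b**2 * (b**2 + 1)))
t_shear = t_n - sigma_n * n
assert np.isclose(t_shear @ n, 0.0, atol=1e-12)   # shear part is tangential
expected_shear = ((4 - 3 * b**2) / ((b**2 + 1) * np.sqrt(b**2 + 1))
                  * np.array([np.cos(c), 1 / b, -np.sin(c)]))
assert np.allclose(t_shear, expected_shear)
```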
Appendix A

Formulary

A.1 Formulary Tensor Algebra

A.1.1 Basis
\[
e_i \;\; \text{orthonormal Cartesian base vectors} \in \mathbb{E}^n \;, \quad
g_i \;\; \text{covariant base vectors} \in \mathbb{E}^n \;, \quad
g^i \;\; \text{contravariant base vectors} \in \mathbb{E}^n \;.
\]

A.1.2 Metric Coefficients, Raising and Lowering of Indices
metric coefficients
\[
g_{ik} = g_i \cdot g_k \;, \quad g^{ik} = g^i \cdot g^k \;, \quad
\delta^i_k = g^i \cdot g_k \;, \quad \delta^i_k = e^i \cdot e_k \;\; \left( \delta_{ik} = e_i \cdot e_k \right)
\]
raising and lowering of indices
\[
g_i = g_{ik}\, g^k \;, \quad g^i = g^{ik}\, g_k \;, \quad
g_i = \delta^k_i\, g_k \;, \quad e_i = \delta^k_i\, e_k \;\; \left( e_i = \delta_{ik}\, e^k \right)
\]

A.1.3 Vectors in a General Basis
\[
v = v^i\, g_i = v_i\, g^i
\]

A.1.4 Second Order Tensors in a General Basis
\[
T = T^{ik}\, g_i \otimes g_k = T_{ik}\, g^i \otimes g^k
= T^{i}_{\;k}\, g_i \otimes g^k = T_{i}^{\;k}\, g^i \otimes g_k
\]

A.1.5 Linear Mappings with Tensors
for the dyadic tensor $A$ of rank one:
\[
A = u \otimes v \;, \quad
A \cdot w = \left( u \otimes v \right) \cdot w = \left( v \cdot w \right) u
\]
\[
= \left( u^i g_i \otimes v^k g_k \right) \cdot w^m g_m
= \left( v^k g_k \cdot w^m g_m \right) u^i g_i
= \left( v^k w^m g_{km} \right) u^i g_i
= v^k w_k\, u^i g_i
\]
for the general tensor $T$ of rank $n$ $\left( \det T^{ik} \neq 0 \right)$:
\[
T = T^{ik}\, g_i \otimes g_k \;, \quad
T \cdot w = \left( T^{ik}\, g_i \otimes g_k \right) \cdot w^m g_m = T^{ik} w_k\, g_i
\]

A.1.6 Unit Tensor (Identity, Metric Tensor)
\[
u = 1 \cdot u \;, \quad \text{with} \quad
1 = g^i \otimes g_i = g_j \otimes g^j = \delta^j_i\, g_j \otimes g^i
= g_{ij}\, g^i \otimes g^j = g^{ij}\, g_i \otimes g^j
\]
\[
1 \cdot u = \left( g^i \otimes g_i \right) \cdot u^k g_k
= u^k g_{ik}\, g^i = u_k\, g^k = u^i\, g_i = u
\]

A.1.7 Tensor Product
\[
u = A \cdot w \;\; \text{and} \;\; w = B \cdot v \quad \Longrightarrow \quad
u = A \cdot B \cdot v = C \cdot v
\]
\[
C = A \cdot B
= \left( A^{ik}\, g_i \otimes g_k \right) \cdot \left( B_{mn}\, g^m \otimes g^n \right)
= A^{ik} B_{mn}\, \delta^m_k\, g_i \otimes g^n
= A^{ik} B_{kn}\, g_i \otimes g^n
\]

A.1.8 Scalar Product or Inner Product
\[
\alpha = A : B
= \left( A^{ik}\, g_i \otimes g_k \right) : \left( B^{mn}\, g_m \otimes g_n \right)
= A^{ik} B^{mn} \left( g_i \cdot g_m \right) \left( g_k \cdot g_n \right)
= A^{ik} B^{mn} g_{im} g_{kn} = A^{ik} B_{ik}
\]

A.1.9 Transpose of a Tensor
\[
u \cdot \left( T \cdot v \right) = \left( v \cdot T^T \right) \cdot u \;, \quad
\text{with} \quad T = T^{ik}\, g_i \otimes g_k \;\; \text{and} \;\;
T^T = T^{ik}\, g_k \otimes g_i = T^{ki}\, g_i \otimes g_k
\]
\[
\left( u \otimes v \right)^T = v \otimes u \;, \quad
\left( A \cdot B \right)^T = B^T \cdot A^T \;, \quad \text{with} \quad
\left( A^T \right)^T = A
\]

A.1.10 Computing the Tensor Components
\[
T^{ik} = g^i \cdot \left( T \cdot g^k \right) \;; \quad
T_{ik} = g_i \cdot \left( T \cdot g_k \right) \;; \quad
T^{i}_{\;k} = g^i \cdot \left( T \cdot g_k \right)
\]
\[
T^{ik} = g^{im} g^{kn} T_{mn} \;; \quad
T^{i}_{\;k} = g^{im} T_{mk} \;; \quad \text{etc.}
\]

A.1.11 Orthogonal Tensor, Inverse of a Tensor
orthogonal tensor
\[
Q^T = Q^{-1} \;; \quad Q^T \cdot Q = Q^{-1} \cdot Q = 1 = Q \cdot Q^T \;; \quad \det Q = \pm 1
\]
\[
Q^{i}_{\;k} = \left( Q^{k}_{\;i} \right)^{-1} \;; \quad Q_{mi}\, Q_{mk} = \delta_{ik}
\]
\[
v = Q \cdot u \quad \Longrightarrow \quad
\left( Q \cdot u \right) \cdot \left( Q \cdot u \right) = u \cdot u \;, \quad \text{i.e.} \quad v \cdot v = u \cdot u
\]

A.1.12 Trace of a Tensor
\[
\operatorname{tr} \left( a \otimes b \right) := a \cdot b \quad \text{resp.} \quad
\operatorname{tr} \left( a \otimes b \right) = a^i g_i \cdot b_k g^k = a^i b_i
\]
\[
\operatorname{tr} T = T : 1
= \left( T^{ik}\, g_i \otimes g_k \right) : \left( g_m \otimes g^m \right)
= T^{ik} g_{im} \delta^m_k = T^{ik} g_{ik} = T^{i}_{\;i}
\]
\[
\operatorname{tr} \left( A \cdot B \right) = A : B^T = \operatorname{tr} \left( A \cdot B^T \right)^{\!\ast} = \operatorname{tr} \left( B^T \cdot A \right) \;, \quad
\operatorname{tr} \left( A \cdot B \right) = \operatorname{tr} \left( B \cdot A \right) = B : A^T
\]
\[
\operatorname{tr} \left( A \cdot B \right)
= \operatorname{tr} \left( \left[ A^{ik}\, g_i \otimes g_k \right] \cdot \left[ B_{mn}\, g^m \otimes g^n \right] \right)
= \operatorname{tr} \left( A^{ik} B_{kn}\, g_i \otimes g^n \right)
= A^{ik} B_{kn} \delta^n_i = A^{ik} B_{ki} \;, \quad \text{etc.}
\]

A.1.13 Changing the Basis
transformation $g_i \to \bar{g}_i$ ; $g^k \to \bar{g}^k$
\[
\bar{g}_i = 1 \cdot \bar{g}_i = \left( g_k \otimes g^k \right) \cdot \bar{g}_i
= \left( g^k \cdot \bar{g}_i \right) g_k = A^{k}_{\;i}\, g_k \;, \quad
\bar{g}_i = A \cdot g_i \;\; \text{with} \;\; A = A^{k}_{\;m}\, g_k \otimes g^m \;,
\]
\[
\bar{g}_i = \left( g_k \cdot \bar{g}_i \right) g^k = A_{ki}\, g^k \;, \quad
A = A_{km}\, g^k \otimes g^m \;,
\]
\[
\bar{g}^i = 1 \cdot \bar{g}^i = \left( g^k \otimes g_k \right) \cdot \bar{g}^i
= \left( g_k \cdot \bar{g}^i \right) g^k = B^{i}_{\;k}\, g^k \;, \quad
\bar{g}^i = B \cdot g^i \;\; \text{with} \;\; B = B^{m}_{\;k}\, g^k \otimes g_m \;,
\]
\[
\bar{g}^i = \left( g^k \cdot \bar{g}^i \right) g_k = B^{ki}\, g_k \;, \quad
B = B^{km}\, g_k \otimes g_m \;.
\]
inverse relations $\bar{g}_i \to g_i$ ; $\bar{g}^k \to g^k$
\[
g_i = \left( \bar{g}^k \cdot g_i \right) \bar{g}_k = \bar{A}^{k}_{\;i}\, \bar{g}_k \;, \quad
g_i = \left( \bar{g}_k \cdot g_i \right) \bar{g}^k = \bar{A}_{ki}\, \bar{g}^k \;,
\]
\[
g^i = \left( \bar{g}_k \cdot g^i \right) \bar{g}^k = \bar{B}^{i}_{\;k}\, \bar{g}^k \;, \quad
g^i = \left( \bar{g}^k \cdot g^i \right) \bar{g}_k = \bar{B}^{ki}\, \bar{g}_k \;.
\]
The following relations between the transformation tensors hold
\[
\bar{A} \cdot A = 1 \;\; \text{or} \;\; A \cdot \bar{A} = 1 \;, \quad
\bar{A}^{m}_{\;i}\, A^{k}_{\;m} = \delta^k_i \;, \quad \text{etc.}
\]
\[
\bar{B} \cdot B = 1 \;\; \text{or} \;\; B \cdot \bar{B} = 1 \;, \quad
\bar{B}^{i}_{\;m}\, B^{m}_{\;k} = \delta^i_k \;, \quad \text{etc.}
\]
Furthermore
\[
A^{m}_{\;i}\, B^{k}_{\;m} = \delta^k_i \;; \quad
B^{k}_{\;m} = \bar{A}^{k}_{\;m} \;; \quad
\bar{B}^{k}_{\;m} = A^{k}_{\;m} \;.
\]

A.1.14 Transformation of Vector Components
\[
v = v^i\, g_i = v_i\, g^i = \bar{v}^i\, \bar{g}_i = \bar{v}_i\, \bar{g}^i
\]
The components of a vector transform with the following rules of transformation
\[
\bar{v}_i = A^{k}_{\;i}\, v_k = A_{ki}\, v^k \;, \quad
\bar{v}^i = B^{i}_{\;k}\, v^k = B^{ki}\, v_k \;,
\]
\[
v_i = \bar{A}^{k}_{\;i}\, \bar{v}_k = \bar{A}_{ki}\, \bar{v}^k \;, \quad
v^i = \bar{B}^{i}_{\;k}\, \bar{v}^k = \bar{B}^{ki}\, \bar{v}_k \;,
\]
i.e. the coefficients of the vector components transform under a change of the coordinate system like the base vectors themselves.

A.1.15 Transformation Rules for Tensors
\[
T = T^{ik}\, g_i \otimes g_k = T^{i}_{\;k}\, g_i \otimes g^k
= T_{ik}\, g^i \otimes g^k = T_{i}^{\;k}\, g^i \otimes g_k
\]
\[
= \bar{T}^{ik}\, \bar{g}_i \otimes \bar{g}_k = \bar{T}^{i}_{\;k}\, \bar{g}_i \otimes \bar{g}^k
= \bar{T}_{ik}\, \bar{g}^i \otimes \bar{g}^k = \bar{T}_{i}^{\;k}\, \bar{g}^i \otimes \bar{g}_k
\]
the transformation relations between base vectors imply
\[
\bar{T}^{ik} = \bar{A}^{i}_{\;m}\, \bar{A}^{k}_{\;n}\, T^{mn} \;, \quad
\bar{T}^{i}_{\;k} = \bar{A}^{i}_{\;m}\, A^{n}_{\;k}\, T^{m}_{\;n} \;, \quad
\bar{T}_{ik} = A^{m}_{\;i}\, A^{n}_{\;k}\, T_{mn} \;,
\]
i.e. the coefficients of the tensor components transform like the tensor basis. The tensor bases transform like the base vectors.

A.1.16 Eigenvalues of a Tensor in Euclidean Space
eigenvalue problem:
\[
\left( T - \lambda\, 1 \right) \cdot x = 0 \;; \quad
\left( T^{i}_{\;k} - \lambda\, \delta^i_k \right) x^k = 0
\]
conditions for non-trivial solutions
\[
\det \left( T - \lambda\, 1 \right) \stackrel{!}{=} 0 \;; \quad
\det \left( T^{i}_{\;k} - \lambda\, \delta^i_k \right) = 0
\]
characteristic polynomial
\[
f \left( \lambda \right) = I_3 - \lambda\, I_2 + \lambda^2\, I_1 - \lambda^3 = 0
\]
If $T = T^T$ and $T^{i}_{\;k} \in \mathbb{R}$, then the eigenvectors are orthogonal and the eigenvalues are real. invariants of a tensor
\[
I_1 = \lambda_1 + \lambda_2 + \lambda_3 = \operatorname{tr} T = T^{i}_{\;i}
\]
\[
I_2 = \lambda_1 \lambda_2 + \lambda_2 \lambda_3 + \lambda_3 \lambda_1
= \frac{1}{2} \left[ \left( \operatorname{tr} T \right)^2 - \operatorname{tr} T^2 \right]
= \frac{1}{2} \left[ T^{i}_{\;i}\, T^{k}_{\;k} - T^{i}_{\;k}\, T^{k}_{\;i} \right]
\]
\[
I_3 = \lambda_1 \lambda_2 \lambda_3 = \det T = \det \left[ T^{i}_{\;k} \right]
\]
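The index mechanics of A.1.2 and A.1.12 can be illustrated numerically for an arbitrary (invertible) covariant basis; the matrices below are made-up example data:

```python
import numpy as np

# Columns of G_co are the covariant base vectors g_i (arbitrary example basis).
G_co = np.array([[1.0, 0.2, 0.0],
                 [0.0, 1.5, 0.3],
                 [0.5, 0.0, 2.0]])
g_co = G_co.T @ G_co                    # g_ik = g_i . g_k
g_contra = np.linalg.inv(g_co)          # [g^{ik}] = [g_ik]^{-1}
G_contra = G_co @ g_contra              # columns: g^i = g^{ik} g_k
assert np.allclose(G_contra.T @ G_co, np.eye(3))   # g^i . g_k = delta^i_k

# For T = T^{ik} g_i (x) g_k the trace is tr T = T^{ik} g_ik = T^i_i.
T_contra = np.arange(9, dtype=float).reshape(3, 3)
tr1 = np.sum(T_contra * g_co)           # T^{ik} g_ik
tr2 = np.trace(T_contra @ g_co)         # T^i_i with T^i_k = T^{im} g_mk
assert np.isclose(tr1, tr2)
```

Both assertions hold for any invertible basis; they are exactly the duality and trace identities of the formulary.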
A.2. Formulary Tensor Analysis 233
A.2 Formulary Tensor Analysis

A.2.1 Derivatives of Vectors and Tensors
scalars, vectors and tensors as functions of a vector of position
\[
\phi = \phi \left( x \right) \;; \quad v = v \left( x \right) \;; \quad T = T \left( x \right)
\]
A vector field $v = v \left( x \right)$ is differentiable in $x$, if a linear mapping $L \left( x \right)$ exists, such that
\[
v \left( x + y \right) = v \left( x \right) + L \left( x \right) \cdot y + O \left( \left| y \right|^2 \right) , \quad \text{as} \quad \left| y \right| \to 0 \;.
\]
The mapping $L \left( x \right)$ is called the gradient or Frechet derivative $v' \left( x \right)$, also represented by the $\nabla$ operator,
\[
L \left( x \right) = \operatorname{grad} v \left( x \right) .
\]
Analogously, for a scalar-valued vector function $\phi \left( x \right)$,
\[
\phi \left( x + y \right) = \phi \left( x \right) + \operatorname{grad} \phi \left( x \right) \cdot y + O \left( \left| y \right|^2 \right) .
\]
rules
\[
\operatorname{grad} \left( \phi \psi \right) = \psi \operatorname{grad} \phi + \phi \operatorname{grad} \psi
\]
\[
\operatorname{grad} \left( v \cdot w \right) = \left( \operatorname{grad} v \right)^T \cdot w + \left( \operatorname{grad} w \right)^T \cdot v
\]
\[
\operatorname{grad} \left( \phi\, v \right) = v \otimes \operatorname{grad} \phi + \phi \operatorname{grad} v
\]
\[
\operatorname{grad} \left( v \circ w \right) = \left[ \left( \operatorname{grad} v \right) \circ w \right] \cdot \operatorname{grad} w
\quad \text{(chain rule for the composition } v \left( w \left( x \right) \right) \text{)}
\]
The gradient of a scalar-valued vector function is a vector-valued vector function. The gradient of a vector-valued vector function is, analogously, a tensor-valued vector function.
divergence of a vector
\[
\operatorname{div} v = \operatorname{tr} \left( \operatorname{grad} v \right) = \operatorname{grad} v : 1
\]
divergence of a tensor (defined via an arbitrary constant vector $a$)
\[
\left( \operatorname{div} T \right) \cdot a = \operatorname{div} \left( T^T \cdot a \right) = \operatorname{grad} \left( T^T \cdot a \right) : 1
\]
rules
\[
\operatorname{div} \left( \phi\, v \right) = v \cdot \operatorname{grad} \phi + \phi \operatorname{div} v
\]
\[
\operatorname{div} \left( T \cdot v \right) = v \cdot \left( \operatorname{div} T^T \right) + T^T : \operatorname{grad} v
\]
\[
\operatorname{div} \left( \operatorname{grad} v \right)^T = \operatorname{grad} \left( \operatorname{div} v \right)
\]

A.2.2 Derivatives of Base Vectors
Christoffel tensors
\[
\Gamma_{(k)} := \operatorname{grad} \left( g_k \right) \;; \quad
\Gamma_{(k)} = \Gamma^{i}_{\;km}\, g_i \otimes g^m
\]
components
\[
\Gamma^{i}_{\;km} = \Gamma^{i}_{\;(k)m} = g^i \cdot \Gamma_{(k)} \cdot g_m
\]
In Cartesian orthogonal coordinate systems the Christoffel tensors vanish.

A.2.3 Derivatives of Base Vectors in Components Notation
\[
\frac{\partial \left( \; \right)_k}{\partial \theta^i} = \left( \; \right)_{k,i} \;, \quad \text{with} \quad
\Gamma_{(k)} = \operatorname{grad} \left( g_k \right) = g_{k,i} \otimes g^i
\]
\[
g_{i,k} = \Gamma_{(i)} \cdot g_k
= \left( \Gamma^{s}_{\;il}\, g_s \otimes g^l \right) \cdot g_k
= \Gamma^{s}_{\;ik}\, g_s \;; \quad
g_{i,k} \cdot g^s = \Gamma^{s}_{\;ik} \;, \quad \text{etc.} \quad
g^{i}_{\;,k} = - \Gamma^{i}_{\;sk}\, g^s
\]
\[
\Gamma_{ikl} = g_{ls}\, \Gamma^{s}_{\;ik} \;, \quad \text{etc.} \quad
= \frac{1}{2} \left( g_{kl,i} + g_{il,k} - g_{ik,l} \right)
\]
\[
e_{i,k} = 0
\]

A.2.4 Components Notation of Vector Derivatives
\[
\operatorname{grad} v \left( x \right)
= \frac{\partial v}{\partial \theta^k} \otimes g^k
= \frac{\partial \left( v^i g_i \right)}{\partial \theta^k} \otimes g^k
= v^{i}_{\;,k}\, g_i \otimes g^k + v^i\, \underbrace{g_{i,k} \otimes g^k}_{\Gamma_{(i)}}
\]
\[
\operatorname{grad} v \left( x \right)
= \left( v^{i}_{\;,k} + v^s\, \Gamma^{i}_{\;sk} \right) g_i \otimes g^k
= v^i \big|_k\; g_i \otimes g^k
\]
\[
\operatorname{div} v \left( x \right) = \operatorname{tr} \left( \operatorname{grad} v \right)
= v^{i}_{\;,i} + v^s\, \Gamma^{i}_{\;si} = v^i \big|_i
\]

A.2.5 Components Notation of Tensor Derivatives
\[
\operatorname{div} T \left( x \right)
= \frac{\partial T}{\partial \theta^k} \cdot g^k
= \frac{\partial \left( T^{ij}\, g_i \otimes g_j \right)}{\partial \theta^k} \cdot g^k
\]
\[
= T^{ij}_{\;\;,k} \left( g_i \otimes g_j \right) \cdot g^k
+ T^{ij} \left( g_{i,k} \otimes g_j \right) \cdot g^k
+ T^{ij} \left( g_i \otimes g_{j,k} \right) \cdot g^k
\]
\[
= \left( T^{ik}_{\;\;,k} + T^{mk}\, \Gamma^{i}_{\;mk} + T^{ij}\, \Gamma^{k}_{\;jk} \right) g_i
= T^{ik} \big|_k\; g_i
\]
\[
= \left( T^{k}_{\;i,k} - T^{k}_{\;m}\, \Gamma^{m}_{\;ik} + T^{m}_{\;i}\, \Gamma^{k}_{\;km} \right) g^i
= T^{k}_{\;i} \big|_k\; g^i
\]

A.2.6 Integral Theorems, Divergence Theorems
\[
\int_A u \cdot n \; \mathrm{d}a = \int_V \operatorname{div} u \; \mathrm{d}V \;, \quad
\int_A u^i n_i \; \mathrm{d}a = \int_V u^i \big|_i \; \mathrm{d}V
\]
\[
\int_A T \cdot n \; \mathrm{d}a = \int_V \operatorname{div} T \; \mathrm{d}V \;, \quad
\int_A T^{k}_{\;i}\, n_k\, g^i \; \mathrm{d}a = \int_V T^{k}_{\;i} \big|_k\, g^i \; \mathrm{d}V
\]
with $n$ the normal vector of the surface element, $A$ the surface, and $V$ the volume.
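The Christoffel-symbol relations of A.2.2/A.2.3 can be illustrated numerically with plane polar coordinates $x(r,\varphi) = (r\cos\varphi,\, r\sin\varphi)$, where the nonzero symbols are known to be $\Gamma^1_{\;22} = -r$ and $\Gamma^2_{\;12} = \Gamma^2_{\;21} = 1/r$. The finite-difference scheme and step sizes below are an illustrative choice, not part of the formulary:

```python
import numpy as np

def x(t):
    r, p = t
    return np.array([r * np.cos(p), r * np.sin(p)])

def G_co(t, h=1e-5):
    # Columns: covariant base vectors g_i = dx/dtheta^i (central differences).
    return np.column_stack([(x(t + h * e) - x(t - h * e)) / (2 * h)
                            for e in np.eye(2)])

t0 = np.array([2.0, 0.7])
G = G_co(t0)
G_contra = np.linalg.inv(G.T @ G) @ G.T       # rows: contravariant g^s

# Gamma^s_ik = g_{i,k} . g^s, with g_{i,k} again by central differences.
h = 1e-4
Gamma = np.empty((2, 2, 2))
for k, ek in enumerate(np.eye(2)):
    dG = (G_co(t0 + h * ek) - G_co(t0 - h * ek)) / (2 * h)  # columns: g_{i,k}
    for i in range(2):
        Gamma[:, i, k] = G_contra @ dG[:, i]

r = t0[0]
assert np.isclose(Gamma[0, 1, 1], -r, atol=1e-5)      # Gamma^1_22 = -r
assert np.isclose(Gamma[1, 0, 1], 1 / r, atol=1e-5)   # Gamma^2_12 = 1/r
assert np.isclose(Gamma[1, 1, 0], 1 / r, atol=1e-5)   # Gamma^2_21 = 1/r
assert np.isclose(Gamma[0, 0, 0], 0.0, atol=1e-5)     # Cartesian-like direction
```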
Appendix B

Nomenclature

Notation - Description
\[
\alpha, \beta, \gamma, \ldots \;\; \text{scalar quantities in } \mathbb{R}
\]
\[
a, b, c, \ldots \;\; \text{column matrices or vectors in } \mathbb{R}^n \;; \quad
a^T, b^T, c^T, \ldots \;\; \text{row matrices or vectors in } \mathbb{R}^n
\]
\[
A, B, C, \ldots \;\; \text{matrices in } \mathbb{R}^n \otimes \mathbb{R}^n
\]
\[
a, b, c, \ldots \;\; \text{vectors or first order tensors in } \mathbb{E}^n \;; \quad
A, B, C, \ldots \;\; \text{second order tensors in } \mathbb{E}^n \otimes \mathbb{E}^n
\]
\[
{}^3 A, {}^3 B, {}^3 C, \ldots \;\; \text{third order tensors in } \mathbb{E}^n \otimes \mathbb{E}^n \otimes \mathbb{E}^n \;; \quad
\mathbb{A}, \mathbb{B}, \mathbb{C}, \ldots \;\; \text{fourth order tensors in } \mathbb{E}^n \otimes \mathbb{E}^n \otimes \mathbb{E}^n \otimes \mathbb{E}^n
\]

Notation - Description
tr - the trace operator of a tensor or a matrix
det - the determinant operator of a tensor or a matrix
sym - the symmetric part of a tensor or a matrix
skew - the antisymmetric or skew part of a tensor or a matrix
dev - the deviator part of a tensor or a matrix
grad = $\nabla$ - the gradient operator
div - the divergence operator
rot - the rotation operator
$\Delta$ - the Laplacian or the Laplace operator

Notation - Description
$\mathbb{R}$ - the set of the real numbers
$\mathbb{R}^3$ - the set of real-valued triples
$\mathbb{E}^3$ - the 3-dimensional Euclidean vector space
$\mathbb{E}^3 \otimes \mathbb{E}^3$ - the space of second order tensors over the Euclidean vector space
$e_1, e_2, e_3$ - 3-dimensional Cartesian basis
$g_1, g_2, g_3$ - 3-dimensional arbitrary covariant basis
$g^1, g^2, g^3$ - 3-dimensional arbitrary contravariant basis
$g_{ij}$ - covariant metric coefficients
$g^{ij}$ - contravariant metric coefficients
$g = g_{ij}\, g^i \otimes g^j$ - metric tensor
Bibliography
[1] Ralph Abraham, Jerrold E. Marsden, and Tudor Ratiu. Manifolds, Tensor Analysis and
Applications. Applied Mathematical Sciences. Springer-Verlag, Berlin, Heidelberg, New
York, second edition, 1988.
[2] Albrecht Beutelspacher. Lineare Algebra. Vieweg Verlag, Braunschweig, Wiesbaden,
1998.
[3] Reint de Boer. Vektor- und Tensorrechnung fr Ingenieure. Springer-Verlag, Berlin, Hei-
delberg, New York, 1982.
[4] Gerd Fischer. Lineare Algebra. Vieweg Verlag, Braunschweig, Wiesbaden, 1997.
[5] Jimmie Gilbert and Linda Gilbert. Linear Algebra and Matrix Theory. Academic Press,
San Diego, 1995.
[6] Paul R. Halmos. Finite-Dimensional Vector Spaces. Undergraduate Texts in Mathematics.
Springer-Verlag, Berlin, Heidelberg, New York, 1974.
[7] Hans Karl Iben. Tensorrechnung. Mathematik fr Ingenieure und Naturwissenschaftler.
Teubner-Verlag, Stuttgart, Leipzig, 1999.
[8] Klaus Jnich. Lineare Algebra. Springer-Verlag, Berlin, Heidelberg, New York, 1998.
[9] Wilhelm Klingenberg. Lineare Algebra und Geometrie. Springer-Verlag, Berlin, Heidel-
berg, New York, second edition, 1992.
[10] Allan D. Kraus. Matrices for Engineers. Springer-Verlag, Berlin, Heidelberg, New York,
1987.
[11] Paul C. Matthews. Vector Calculus. Undergraduate Mathematics Series. Springer-Verlag,
Berlin, Heidelberg, New York, 1998.
[12] James G. Simmonds. A Brief on Tensor Analysis. Undergraduate Texts in Mathematics.
Springer-Verlag, Berlin, Heidelberg, New York, second edition, 1994.
[13] Erwin Stein. Unterlagen zur Vorlesung Mathematik V fr konstr. Ingenieure Matrizen-
und Tensorrechnung SS 94. Institut fr Baumechanik und Numerische Mechanik, Univer-
sitt Hannover, 1994.
239
240 Bibliography
[14] Rudolf Zurmühl. Matrizen und ihre technischen Anwendungen. Springer-Verlag, Berlin, Heidelberg, New York, fourth edition, 1964.
Glossary English German
L1-norm - Integralnorm, L1-Norm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
L2-norm - L2-Norm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
l1-norm - Summennorm, l1-Norm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
l2-norm, Euclidean norm - l2-Norm, euklidische Norm . . . 19
n-tuple - n-Tupel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
p-norm - Maximumsnorm, p-Norm. . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
absolute norm - Gesamtnorm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
absolute value - Betrag . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
absolute value of a tensor - Betrag eines Tensors . . . . . . . . . . . . . . . . . . . . . . . . . . 111, 113
additive - additiv . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
additive identity - additionsneutrales Element . . . . . . . . . . . . . . . . . . . . . . . 10, 42
additive inverse - inverses Element der Addition . . . . . . . . . . . . . . . . . . . . 10, 42
affine vector - affiner Vektor . . . 28
affine vector space - affiner Vektorraum . . . 28, 54
antisymmetric - schiefsymmetrisch . . . 41
antisymmetric matrix - schiefsymmetrische Matrix . . . 41
antisymmetric part - antisymmetrischer Anteil . . . 44
antisymmetric part of a tensor - antisymmetrischer Anteil eines Tensors . . . 115
area vector - Flächenvektor . . . 152
associative - assoziativ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
associative rule - Assoziativgesetz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
associative under matrix addition - assoziativ bzgl. Matrizenaddition . . . 42
base vectors - Basisvektoren . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31, 87
basis - Basis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
basis of the vector space - Basis eines Vektorraums . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
bijective - bijektiv . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
bilinear - bilinear . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
bilinear form - Bilinearform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
binormal unit - Binormaleneinheitsvektor . . . . . . . . . . . . . . . . . . . . . . . . . . 137
binormal unit vector - Binormaleneinheitsvektor . . . . . . . . . . . . . . . . . . . . . . . . . . 137
Cartesian base vectors - kartesische Basisvektoren. . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
Cartesian basis - kartesische Basis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
Cartesian components of a permutation tensor - kartesische Komponenten des Permutationstensors . . . 87
Cartesian coordinates - kartesische Koordinaten . . . . . . . . . . . . . . . . . . . . . 78, 82, 144
Cauchy stress tensor - Cauchy-Spannungstensor . . . . . . . . . . . . . . . . . . . . . . . 96, 120
Cauchy's inequality - Schwarzsche oder Cauchy-Schwarzsche Ungleichung . . . 21
Cayley-Hamilton Theorem - Cayley-Hamilton Theorem. . . . . . . . . . . . . . . . . . . . . . . . . . . 71
characteristic equation - charakteristische Gleichung . . . . . . . . . . . . . . . . . . . . . . . . . . 65
characteristic matrix - charakteristische Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
characteristic polynomial - charakteristisches Polynom . . . . . . . . . . . . . . . . . . 56, 65, 123
Christoffel symbol - Christoffel-Symbol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
cofactor - Kofaktor, algebraisches Komplement . . . . . . . . . . . . . . . . . 51
column - Spalte . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
column index - Spaltenindex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
column matrix - Spaltenmatrix. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28, 40, 46
column vector - Spaltenvektor . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28, 40, 46, 59
combination - Kombination . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9, 34, 54
commutative - kommutativ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
commutative matrix - kommutative Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
commutative rule - Kommutativgesetz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
commutative under matrix addition - kommutativ bzgl. Matrizenaddition . . . 42
compatibility of vector and matrix norms - Verträglichkeit von Vektor- und Matrix-Norm . . . 22
compatible - verträglich . . . 22
complete fourth order tensor - vollständiger Tensor vierter Stufe . . . 129
complete second order tensor - vollständiger Tensor zweiter Stufe . . . 99
complete third order tensor - vollständiger Tensor dritter Stufe . . . 129
complex conjugate eigenvalues - konjugiert komplexe Eigenwerte . . . . . . . . . . . . . . . . . . . . 124
complex numbers - komplexe Zahlen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
components - Komponenten . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31, 65
components of the Christoffel symbol - Komponenten des Christoffel-Symbols . . . 142
components of the permutation tensor - Komponenten des Permutationstensors . . . 87
composition - Komposition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34, 54, 106
congruence transformation - Kongruenztransformation, kontragrediente Transformation . . . 56, 63
congruent - kongruent . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56, 63
continuum - Kontinuum. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
contravariant ε symbol - kontravariantes ε-Symbol . . . 93
contravariant base vectors - kontravariante Basisvektoren . . . . . . . . . . . . . . . . . . . . 81, 139
contravariant base vectors of the natural basis - kontravariante Basisvektoren der natürlichen Basis . . . 141
contravariant coordinates - kontravariante Koordinaten, Koeffizienten . . . 80, 84
contravariant metric coefficients - kontravariante Metrikkoeffizienten . . . 82, 83
coordinates - Koordinaten . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
covariant ε symbol - kovariantes ε-Symbol . . . 92
covariant base vectors - kovariante Basisvektoren. . . . . . . . . . . . . . . . . . . . . . . . 80, 138
covariant base vectors of the natural basis - kovariante Basisvektoren der natürlichen Basis . . . 140
covariant coordinates - kovariante Koordinaten, Koeffizienten . . . 81, 84
covariant derivative - kovariante Ableitung . . . . . . . . . . . . . . . . . . . . . . . . . . 146, 149
covariant metric coefficients - kovariante Metrikkoeffizienten . . . 80, 83
cross product - Kreuzprodukt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87, 90, 96
curl - Rotation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
curvature - Krümmung . . . 135
curvature of a curve - Krümmung einer Kurve . . . 136
curved surface - Raumfläche, gekrümmte Oberfläche . . . 138
curvilinear coordinate system - krummliniges Koordinatensystem . . . . . . . . . . . . . . . . . . . 139
curvilinear coordinates - krummlinige Koordinaten . . . . . . . . . . . . . . . . . . . . . . 139, 144
definite metric - definite Metrik . . . 16
definite norm - definite Norm . . . 18
deformation energy - Formänderungsenergie . . . 129
deformation gradient - Deformationsgradient . . . 118
derivative of a scalar - Ableitung einer skalaren Größe . . . 133
derivative of a tensor - Ableitung eines Tensors . . . 134
derivative of a vector - Ableitung eines Vektors . . . 133
derivative w.r.t. a scalar variable - Ableitung nach einer skalaren Größe . . . 133
derivatives - Ableitungen. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
derivatives of base vectors - Ableitungen von Basisvektoren . . . . . . . . . . . . . . . . . 141, 145
determinant - Determinante . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50, 65, 89
determinant expansion by minors - Determinantenentwicklungssatz mit Unterdeterminanten . . . 51
determinant of a tensor - Determinante eines Tensors . . . 112
determinant of the contravariant metric coefficients - Determinante der kontravarianten Metrikkoeffizienten . . . 83
determinant of the covariant metric coefficients - Determinante der kovarianten Metrikkoeffizienten . . . 83
determinant of the Jacobian matrix - Determinante der Jacobimatrix . . . 140
deviator matrix - Deviatormatrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
deviator part of a tensor - Deviator eines Tensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
diagonal matrix - Diagonalmatrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41, 43
differential element of area - differentielles Flächenelement . . . 139
dimension - Dimension. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13, 14
direct method - direkte Methode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
direct product - direktes Produkt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
directions of principal stress - Hauptspannungsrichtungen . . . . . . . . . . . . . . . . . . . . . . . . . 120
discrete metric - diskrete Metrik . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
distance - Abstand . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
distributive - distributiv . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
distributive law - Distributivgesetz. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
distributive w.r.t. addition - Distributivgesetz. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
divergence of a tensor field - Divergenz eines Tensorfeldes . . . 147
divergence of a vector field - Divergenz eines Vektorfeldes . . . 147
divergence theorem - Divergenztheorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
domain - Definitionsbereich . . . 8
dot product - Punktprodukt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
dual space - Dualraum. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36, 97
dual vector space - dualer Vektoraum, Dualraum. . . . . . . . . . . . . . . . . . . . . . . . . 36
dummy index - stummer Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
dyadic product - dyadisches Produkt . . . 94-96
eigenvalue - Eigenwert . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65, 120
eigenvalue problem - Eigenwertproblem. . . . . . . . . . . . . . . . . . . . . . 22, 65, 122, 123
eigenvalues - Eigenwerte . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22, 56
eigenvector - Eigenvektor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65, 120
eigenvector matrix - Eigenvektormatrix, Modalmatrix . . . . . . . . . . . . . . . . . . . . . 70
elastic - elastisch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
elasticity tensor - Elastizitätstensor . . . 129
elasticity theory - Elastizitätstheorie . . . 129
elements - Elemente. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
empty set - leere Menge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
equilibrium conditions - Gleichgewichtsbedingungen . . . . . . . . . . . . . . . . . . . . . . . . . 96
equilibrium condition of moments - Momentengleichgewichtsbedingung . . . 120
equilibrium system of external forces - Gleichgewicht der äußeren Kräfte . . . 96
equilibrium system of forces - Kräftegleichgewicht . . . 96
Euclidean norm - euklidische Norm . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22, 30, 85
Euclidean space - Euklidischer Raum. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
Euclidean vector - euklidische Vektoren . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
Euclidean vector space - euklidischer Vektorraum. . . . . . . . . . . . . . . . . . . . . 26, 29, 143
Euclidean matrix norm - euklidische Matrixnorm . . . 60
even permutation - gerade Permutation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
exact differential - vollständiges Differential . . . 133, 135
field - Feld . . . 143
field - Körper . . . 10
finite - endlich . . . 13
finite element method - Finite-Element-Methode . . . 57
first order tensor - Tensor erster Stufe . . . 127
fourth order tensor - Tensor vierter Stufe . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
Fréchet derivative - Fréchet-Ableitung . . . 143
free indices - freier Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
function - Funktion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
fundamental tensor - Fundamentaltensor. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
Gauss transformation - Gaußsche Transformation . . . 59
Gauss's theorem - Gaußscher Integralsatz . . . 155, 158
general eigenvalue problem - allgemeines Eigenwertproblem. . . . . . . . . . . . . . . . . . . . . . . 69
general permutation symbol - allgemeines Permutationssymbol . . . . . . . . . . . . . . . . . . . . . 92
gradient - Gradient . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
gradient of a vector of position - Gradient eines Ortsvektors . . . . . . . . . . . . . . . . . . . . . . . . . . 144
higher order tensor - Tensor höherer Stufe . . . 127
homeomorphic - homöomorph . . . 30, 97
homeomorphism - Homöomorphismus . . . 30
homogeneous - homogen. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
homogeneous linear equation system - homogenes lineares Gleichungssystem . . . 65
homogeneous norm - homogene Norm. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
homomorphism - Homomorphismus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
Hooke's law - Hookesches Gesetz . . . 129
Hölder sum inequality - Höldersche Ungleichung . . . 21
identities for scalar products of tensors - Rechenregeln für Skalarprodukte von Tensoren . . . 110
identities for tensor products - Rechenregeln für Tensorprodukte . . . 106
identity element w.r.t. addition - neutrales Element der Addition. . . . . . . . . . . . . . . . . . . . . . . 10
identity element w.r.t. scalar multiplication - neutrales Element der Multiplikation . . . 10
identity matrix - Einheitsmatrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41, 45
identity matrix - Einheitsmatrix, Identität . . . 80
identity tensor - Einheitstensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
image set - Bildbereich. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
infinitesimal - infinitesimal . . . 96
infinitesimal tetrahedron - infinitesimaler Tetraeder . . . 96
injective - injektiv . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
inner product space - innerer Produktraum . . . 29
inner product - inneres Produkt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25, 85
inner product of tensors - inneres Produkt von Tensoren . . . . . . . . . . . . . . . . . . . . . . . 110
inner product space - innerer Produktraum. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26, 27
integers - ganze Zahlen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
integral theorem - Integralsatz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
intersection - Schnittmenge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
invariance - Invarianz. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
invariant - Invariante . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120, 123, 148
invariant - invariant . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
inverse - Inverse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8, 34
inverse of a matrix - inverse Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
inverse of a tensor - inverser Tensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
inverse relation - inverse Beziehung . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
inverse transformation - inverse Transformation . . . . . . . . . . . . . . . . . . . . . . . . 101, 103
inverse w.r.t. addition - inverses Element der Addition . . . . . . . . . . . . . . . . . . . . . . . 10
inverse w.r.t. multiplication - inverses Element der Multiplikation . . . . . . . . . . . . . . . . . . 10
inversion - Umkehrung . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
invertible - invertierbar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
isomorphic - isomorph . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29, 35
isomorphism - Isomorphismus . . . 35
isotropic - isotrop. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
isotropic tensor - isotroper Tensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
iterative process - Iterationsvorschrift . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
Jacobian - Jacobi-Determinante . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
Kronecker delta - Kronecker-Delta . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52, 79, 119
l-infinity-norm, maximum-norm - Maximumnorm, ∞-Norm . . . 19
Laplace operator - Laplace-Operator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
laplacian of a scalar field - Laplace-Operator eines Skalarfeldes . . . 150
laplacian of a tensor field - Laplace-Operator eines Tensorfeldes . . . 151
laplacian of a vector field - Laplace-Operator eines Vektorfeldes . . . 150
left-hand Cauchy strain tensor - linker Cauchy-Strecktensor . . . . . . . . . . . . . . . . . . . . . . . . . 117
line element - Linienelement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
linear - linear . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
linear algebra - lineare Algebra . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
linear combination - Linearkombination . . . . . . . . . . . . . . . . . . . . . . . . . . . 15, 49, 71
linear dependence - lineare Abhängigkeit . . . 23, 30, 62
linear equation system - lineares Gleichungssytem. . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
linear form - Linearform. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
linear independence - lineare Unabhängigkeit . . . 23, 30
linear manifold - lineare Mannigfaltigkeit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
linear mapping - lineare Abbildung. . . . . . . . . . . . . . . . . . . 32, 54, 97, 105, 121
linear operator - linearer Operator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
linear space - linearer Raum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
linear subspace - linearer Unterraum . . . 15
linear transformation - lineare Transformation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
linear vector space - linearer Vektorraum. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
linearity - Linearität . . . 32, 34
linearly dependent - linear abhängig . . . 15, 23, 49
linearly independent - linear unabhängig . . . 15, 23, 48, 59, 66
lowering an index - Senken eines Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
main diagonal - Hauptdiagonale . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41, 65
map - Abbildung . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
mapping - Abbildung . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
matrix - Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
matrix calculus - Matrizenalgebra . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
matrix multiplication - Matrizenmultiplikation. . . . . . . . . . . . . . . . . . . . . . . . . . . 42, 54
matrix norm - Matrix-Norm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21, 22
matrix transpose - transponierte Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
maximum absolute column sum norm - Spaltennorm . . . 22
maximum absolute row sum norm - Zeilennorm . . . 22
maximum-norm - Maximumsnorm, p-Norm. . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
mean value - Mittelwert . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
metric - Metrik . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
metric coefficients - Metrikkoeffizienten . . . 138
metric space - metrischer Raum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
metric tensor of covariant coefficients - Metriktensor mit kovarianten Koeffizienten . . . 102
mixed components - gemischte Komponenten . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
mixed formulation of a second order tensor - gemischte Formulierung eines Tensors zweiter Stufe . . . 99
moment equilibrium condition - Momentengleichgewichtsbedingung . . . 120
moving trihedron - begleitendes Dreibein . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
multiple roots - Mehrfachnullstellen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
multiplicative identity - multiplikationsneutrales Element . . . . . . . . . . . . . . . . . . . . . 45
multiplicative inverse - inverses Element der Multiplikation . . . . . . . . . . . . . . . . . . 10
n-tuple - n-Tupel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
nabla operator - Nabla-Operator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
natural basis - natürliche Basis . . . 140
natural numbers - natürliche Zahlen . . . 6
naturals - natürliche Zahlen . . . 7
negative denite - negativ denit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
Newton's relation - Vietasche Wurzelsätze . . . 66
non empty set - nicht leere Menge. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
non-commutative - nicht-kommutativ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
noncommutative - nicht kommutativ. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69, 106
nonsingular - regulär, nicht singulär . . . 48, 59, 66
nonsingular square matrix - reguläre quadratische Matrix . . . 55
nonsymmetric - unsymmetrisch, nicht symmetrisch . . . 69
nontrivial solution - nicht triviale Lösung . . . 65
norm - Norm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18, 65
norm of a tensor - Norm eines Tensors . . . 111
normal basis - normale Basis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
normal unit - Normaleneinheitsvektor . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
normal unit vector - Normaleneinheitsvektor . . . . . . . . . . . . . . . . . . . 121, 136, 138
normal vector - Normalenvektor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96, 135
normed space - normierter Raum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
null mapping - Nullabbildung . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
odd permutation - ungerade Permutation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
one - Einselement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
operation - Operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
operation addition - Additionsoperation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
operation multiplication - Multplikationsoperation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
order of a matrix - Ordnung einer Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
origin - Ursprung, Nullelement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
orthogonal - orthogonal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
orthogonal matrix - orthogonalen Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
orthogonal tensor - orthogonaler Tensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
orthogonal transformation - orthogonale Transformation. . . . . . . . . . . . . . . . . . . . . . . . . . 57
orthonormal basis - orthonormale Basis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
outer product - äußeres Produkt . . . 87
overlined basis - überstrichene Basis . . . 103
parallelepiped - Parallelepiped . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
partial derivatives - partielle Ableitungen. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
partial derivatives of base vectors - partielle Ableitungen von Basisvektoren . . . 145
permutation symbol - Permutationssymbol . . . . . . . . . . . . . . . . . . . . . . . 87, 112, 128
permutation tensor - Permutationstensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
permutations - Permutationen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
point of origin - Koordinatenursprung, -nullpunkt . . . . . . . . . . . . . . . . . . . . . 28
Poisson's ratio - Querkontraktionszahl . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
polar decomposition - polare Zerlegung . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
polynomial factorization - Polynomzerlegung . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
polynomial of n-th degree - Polynom n-ten Grades . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
position vector - Ortsvektor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135, 152
positive definite - positiv definit . . . . . . . . . . . . . . . . . . . . . . . . . 25, 61, 62, 111
positive metric - positive Metrik . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
positive norm - positive Norm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
post-multiplication - Nachmultiplikation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
potential character - Potentialeigenschaft . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
power series - Potenzreihe . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
pre-multiplication - Vormultiplikation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
principal axes - Hauptachsen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
principal axes problem - Hauptachsenproblem. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
principal axis - Hauptachse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
principal stress directions - Hauptspannungsrichtungen . . . . . . . . . . . . . . . . . . . . . . . . . 122
principal stresses - Hauptspannungen. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
product - Produkt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
proper orthogonal tensor - eigentlich orthogonaler Tensor . . . . . . . . . . . . . . . . . . . . . . 116
quadratic form - quadratische Form. . . . . . . . . . . . . . . . . . . . . . . 26, 57, 62, 124
quadratic value of the norm - Normquadrate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
raising an index - Heben eines Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
range - Bildbereich. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
range - Urbild . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
rank - Rang . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
rational numbers - rationale Zahlen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
Rayleigh quotient - Rayleigh-Quotient . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67, 68
real numbers - reelle Zahlen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
rectangular matrix - Rechteckmatrix. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
reduction of rank - Rangabfall . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
Riesz representation theorem - Riesz Abbildungssatz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
right-hand Cauchy strain tensor -
rechter Cauchy-Strecktensor . . . . . . . . . . . . . . . . . . . . . . . . 117
roots - Nullstellen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
rotated coordinate system - gedrehtes Koordinatensystem . . . . . . . . . . . . . . . . . . . . . . . 119
rotation matrix - Drehmatrix. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
rotation of a vector field - Rotation eines Vektorfeldes . . . . . . . . . . . . . . . . . . . . . . . . . 150
rotation transformation - Drehtransformation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
rotator - Rotor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
row - Zeile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
row index - Zeilenindex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
row matrix - Zeilenmatrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40, 45
row vector - Zeilenvektor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40, 45
scalar field - Skalarfeld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
scalar function - Skalarfunktion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
scalar invariant - skalare Invariante . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
scalar multiplication - skalare Multiplikation . . . . . . . . . . . . . . . . . . . . . . . . . 9, 12, 42
scalar multiplication identity - multiplikationsneutrales Element . . . . . . . . . . . . . . . . . . . . . 10
scalar product - Skalarprodukt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9, 25, 85, 96
scalar product of tensors - Skalarprodukt von Tensoren . . . . . . . . . . . . . . . . . . . . . . . . 110
scalar product of two dyads - Skalarprodukt zweier Dyadenprodukte . . . . . . . . . . . . . . . 111
scalar triple product - Spatprodukt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88, 90, 152
scalar-valued function of multiple variables -
skalarwertige Funktion mehrerer Veränderlicher . . . . . . 134
scalar-valued scalar function - skalarwertige Skalarfunktion. . . . . . . . . . . . . . . . . . . . . . . . 133
scalar-valued vector function - skalarwertige Vektorfunktion . . . . . . . . . . . . . . . . . . . . . . . 143
Schwarz inequality - Schwarzsche Ungleichung . . . . . . . . . . . . . . . . . . . . . . 26, 111
second derivative - zweite Ableitung . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
second order tensor - Tensor zweiter Stufe . . . . . . . . . . . . . . . . . . . . . . . . 96, 97, 127
second order tensor product - Produkt von Tensoren zweiter Stufe . . . . . . . . . . . . . . . . . 105
section surface - Schnittfläche . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
semidefinite - semidefinit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
Serret-Frenet equations - Frenetsche Formeln . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
set - Menge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
set theory - Mengenlehre . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
shear stresses - Schubspannungen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
similar - ähnlich, kogredient . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55, 69
similarity transformation - Ähnlichkeitstransformation, kogrediente Transformation . . . 55
simple fourth order tensor - einfacher Tensor vierter Stufe . . . . . . . . . . . . . . . . . . . . . . . 129
simple second order tensor - einfacher Tensor zweiter Stufe . . . . . . . . . . . . . . . . . . . . 94, 99
simple third order tensor - einfacher Tensor dritter Stufe . . . . . . . . . . . . . . . . . . . . . . . 129
skew part of a tensor - schief- oder antisymmetrischer Anteil eines Tensors . . . 115
symmetric tensor - symmetrischer Tensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
space - Raum. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
space curve - Raumkurve . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
space of continuous functions - Raum der stetigen Funktionen . . . . . . . . . . . . . . . . . . . . 14
space of square matrices - Raum der quadratischen Matrizen . . . . . . . . . . . . . . . . . . . . 14
span - Hülle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
special eigenvalue problem - spezielles Eigenwertproblem. . . . . . . . . . . . . . . . . . . . . . . . . 65
spectral norm - Spektralnorm, Hilbert-Norm . . . . . . . . . . . . . . . . . . . . . . . . . 22
square - quadratisch. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
square matrix - quadratische Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
Stokes' theorem - Stokesscher Integralsatz, Integralsatz für ein Kreuzprodukt . . . 157
strain tensor - Verzerrungstensor, Dehnungstensor . . . . . . . . . . . . . . . . . . 129
stress state - Spannungszustand . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
stress tensor - Spannungstensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96, 129
stress vector - Spannungsvektor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
subscript index - untenstehender Index. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
subset - Untermenge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
summation convention - Summenkonvention . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
superscript index - obenstehender Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
superset - Obermenge. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
supremum - obere Schranke . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
surface - Oberfläche . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
surface element - Oberflächenelement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
surface integral - Oberflächenintegral . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
surjective - surjektiv . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
symbols - Symbole . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
symmetric - symmetrisch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25, 41
symmetric matrix - symmetrische Matrix. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
symmetric metric - symmetrische Metrik. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
symmetric part - symmetrischer Anteil . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
symmetric part of a tensor - symmetrischer Anteil eines Tensors . . . . . . . . . . . . . . . . . 115
tangent unit - Tangenteneinheitsvektor . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
tangent unit vector - Tangenteneinheitsvektor . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
tangent vector - Tangentenvektor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
Taylor series - Taylor-Reihe. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133, 153
tensor - Tensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
tensor axioms - Axiome für Tensoren . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
tensor field - Tensorfeld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
tensor product - Tensorprodukt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105, 106
tensor product of two dyads - Tensorprodukt zweier Dyadenprodukte . . . . . . . . . . . . . . 106
tensor space - Tensorraum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
tensor with contravariant base vectors and covariant coordinates -
Tensor mit kontravarianten Basisvektoren und kovarianten Koeffizienten . . . . . . . . . . . 100
tensor with covariant base vectors and contravariant coordinates -
Tensor mit kovarianten Basisvektoren und kontravarianten Koeffizienten . . . . . . . . . . . . 99
tensor-valued function of multiple variables -
tensorwertige Funktion mehrerer Veränderlicher . . . . . . 134
tensor-valued scalar function - tensorwertige Skalarfunktion . . . . . . . . . . . . . . . . . . . . . . . 133
tensor-valued vector function - tensorwertige Vektorfunktion . . . . . . . . . . . . . . . . . . . . . . . 143
third order fundamental tensor - Fundamentaltensor dritter Stufe . . . . . . . . . . . . . . . . . . . . . 128
third order tensor - Tensor dritter Stufe . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
topology - Topologie . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
torsion of a curve - Torsion einer Kurve . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
total differential - vollstndiges Differential . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
trace of a matrix - Spur einer Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
trace of a tensor - Spur eines Tensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
transformation matrix - Transformationsmatrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
transformation of base vectors - Transformation der Basisvektoren . . . . . . . . . . . . . . . . . . . 101
transformation of the metric coefficients -
Transformation der Metrikkoeffizienten . . . . . . . . . . . . . . . 84
transformation relations - Transformationsformeln . . . . . . . . . . . . . . . . . . . . . . . . 103
transformation tensor - Transformationstensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
transformed contravariant base vector -
transformierte kontravarianter Basisvektor . . . . . . . . . . . 103
transformed covariant base vector -
transformierte kovarianter Basisvektor . . . . . . . . . . . . . . . 103
transpose of a matrix - transponierte Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
transpose of a matrix product - transponiertes Matrizenprodukt . . . . . . . . . . . . . . . . . . . . . . 44
transpose of a tensor - Transponierter Tensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
triangle inequality - Dreiecksungleichung . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16, 19
trivial solution - triviale Lösung . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
union - Vereinigungsmenge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
unit matrix - Einheitsmatrix. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
unitary space - unitärer Raum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
unitary vector space - unitärer Vektorraum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
usual scalar product - übliches Skalarprodukt . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
vector - Vektor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12, 28, 127
vector field - Vektorfeld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
vector function - Vektorfunktion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
vector norm - Vektor-Norm. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18, 22
vector of associated direction - Richtungsvektoren . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
vector of position - Ortsvektoren . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
vector product - Vektorprodukt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
vector space - Vektorraum. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12, 49
vector space of linear mappings - Vektorraum der linearen Abbildungen. . . . . . . . . . . . . . . . . 33
vector-valued function - vektorwertige Funktion. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
vector-valued function of multiple variables -
vektorwertige Funktion mehrerer Veränderlicher . . . . . . 134
vector-valued scalar function - vektorwertige Skalarfunktion . . . . . . . . . . . . . . . . . . . . . . . 133
vector-valued vector function - vektorwertige Vektorfunktion . . . . . . . . . . . . . . . . . . . . . . . 143
visual space - Anschauungsraum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
volume - Volumen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
volume element - Volumenelement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
volume integral - Volumenintegral . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
volumetric matrix - Kugelmatrix. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
volumetric part of a tensor - Kugelanteil eines Tensors. . . . . . . . . . . . . . . . . . . . . . . . . . . 113
von Mises iteration - von Mises Iteration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
whole numbers - natürliche Zahlen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
Young's modulus - Elastizitätsmodul . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
zero element - Nullelement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
zero vector - Nullvektor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
zeros - Nullstellen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
Glossary German English
L2-Norm - L2-norm. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
l2-Norm, euklidische Norm - l2-norm, Euclidean norm . . . . . . . . . . . . . . . . . . . . . . . . 19
n-Tupel - n-tuple . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
Abbildung - map . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
Abbildung - mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
Ableitung einer skalaren Größe - derivative of a scalar . . . . . . . . . . . . . . . . . . . . . . . 133
Ableitung eines Tensors - derivative of a tensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
Ableitung eines Vektors - derivative of a vector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
Ableitung nach einer skalaren Größe -
derivative w.r.t. a scalar variable . . . . . . . . . . . . . . . . . . . . . 133
Ableitungen - derivatives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
Ableitungen von Basisvektoren - derivatives of base vectors . . . . . . . . . . . . . . . . . . . . . 141, 145
Abstand - distance. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
additionsneutrales Element - additive identity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10, 42
Additionsoperation - operation addition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
additiv - additive . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
ähnlich, kogredient - similar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55, 69
Ähnlichkeitstransformation, kogrediente Transformation -
similarity transformation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
äußeres Produkt - outer product . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
affiner Vektor - affine vector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
affiner Vektorraum - affine vector space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28, 54
allgemeines Eigenwertproblem - general eigenvalue problem. . . . . . . . . . . . . . . . . . . . . . . . . . 69
allgemeines Permutationssymbol -
general permutation symbol . . . . . . . . . . . . . . . . . . . . . . . . . . 92
Anschauungsraum - visual space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
antisymmetrischer Anteil - antisymmetric part . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
antisymmetrischer Anteil eines Tensors -
antisymmetric part of a tensor . . . . . . . . . . . . . . . . . . . . . . . 115
assoziativ - associative . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
assoziativ bzgl. Matrizenaddition -
associative under matrix addition . . . . . . . . . . . . . . . . . . . . . 42
Assoziativgesetz - associative rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
Axiome für Tensoren - tensor axioms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
Basis - basis. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Basis eines Vektorraums - basis of the vector space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
Basisvektoren - base vectors. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31, 87
begleitendes Dreibein - moving trihedron . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
Betrag - absolute value . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
Betrag eines Tensors - absolute value of a tensor . . . . . . . . . . . . . . . . . . . . . . 111, 113
bijektiv - bijective . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
Bildbereich - image set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
Bildbereich - range . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
bilinear - bilinear . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
Bilinearform - bilinear form . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
Binormaleneinheitsvektor - binormal unit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
Binormaleneinheitsvektor - binormal unit vector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
Cauchy-Spannungstensor - Cauchy stress tensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96, 120
Cayley-Hamilton Theorem - Cayley-Hamilton Theorem. . . . . . . . . . . . . . . . . . . . . . . . . . . 71
charakteristische Gleichung - characteristic equation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
charakteristische Matrix - characteristic matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
charakteristisches Polynom - characteristic polynomial . . . . . . . . . . . . . . . . . . . . 56, 65, 123
Christoffel-Symbol - Christoffel symbol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
definite Metrik - definite metric . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
definite Norm - definite norm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
Definitionsbereich - domain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
Deformationsgradient - deformation gradient . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
Determinante - determinant . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50, 65, 89
Determinante der Jacobimatrix - determinant of the Jacobian matrix . . . . . . . . . . . . . . . . . . 140
Determinante der kontravarianten Metrikkoeffizienten -
determinant of the contravariant metric coefficients . . . . . 83
Determinante der kovarianten Metrikkoeffizienten -
determinant of the covariant metric coefficients . . . . . 83
Determinante eines Tensors - determinant of a tensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
Determinantenentwicklungssatz mit Unterdeterminanten -
determinant expansion by minors . . . . . . . . . . . . . . . . . . . . . 51
Deviator eines Tensors - deviator part of a tensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
Deviatormatrix - deviator matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
Diagonalmatrix - diagonal matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41, 43
differentielles Flächenelement - differential element of area . . . . . . . . . . . . . . . . . . . . 139
Dimension - dimension . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13, 14
direkte Methode - direct method. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
direktes Produkt - direct product . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
diskrete Metrik - discrete metric. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
distributiv - distributive . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
Distributivgesetz - distributive law . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
Distributivgesetz - distributive w.r.t. addition . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
Divergenz eines Tensorfeldes - divergence of a tensor field . . . . . . . . . . . . . . . . . . . . 147
Divergenz eines Vektorfeldes - divergence of a vector field . . . . . . . . . . . . . . . . . . . . 147
Divergenztheorem - divergence theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
Drehmatrix - rotation matrix. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
Drehtransformation - rotation transformation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
Dreiecksungleichung - triangle inequality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16, 19
dualer Vektorraum, Dualraum - dual vector space . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
Dualraum - dual space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36, 97
dyadisches Produkt - dyadic product . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94–96
eigentlich orthogonaler Tensor - proper orthogonal tensor . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
Eigenvektor - eigenvector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65, 120
Eigenvektormatrix, Modalmatrix -
eigenvector matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
Eigenwert - eigenvalue . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65, 120
Eigenwerte - eigenvalues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22, 56
Eigenwertproblem - eigenvalue problem. . . . . . . . . . . . . . . . . . . . . 22, 65, 122, 123
einfacher Tensor dritter Stufe - simple third order tensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
einfacher Tensor vierter Stufe - simple fourth order tensor . . . . . . . . . . . . . . . . . . . . . . . . . . 129
einfacher Tensor zweiter Stufe - simple second order tensor . . . . . . . . . . . . . . . . . . . . . . . 94, 99
Einheitsmatrix - identity matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41, 45
Einheitsmatrix - unit matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
Einheitsmatrix, Identitt - identity matrix. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
Einheitstensor - identity tensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
Einselement - one . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
elastisch - elastic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
Elastizitätsmodul - Young's modulus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
Elastizitätstensor - elasticity tensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
Elastizitätstheorie - elasticity theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
Elemente - elements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
endlich - finite . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
Euklidische Matrixnorm - Euclidean matrix norm . . . . . . . . . . . . . . . . . . . . . . . . . . 60
euklidische Norm - Euclidean norm. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22, 30, 85
euklidische Vektoren - Euclidean vector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
Euklidischer Raum - Euclidean space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
euklidischer Vektorraum - Euclidean vector space . . . . . . . . . . . . . . . . . . . . . . 26, 29, 143
Feld - field . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
Finite-Element-Methode - finite element method . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
Flächenvektor - area vector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
Formänderungsenergie - deformation energy . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
Fréchet-Ableitung - Fréchet derivative . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
freier Index - free indices. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
Frenetsche Formeln - Serret-Frenet equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
Fundamentaltensor - fundamental tensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
Fundamentaltensor dritter Stufe -
third order fundamental tensor . . . . . . . . . . . . . . . . . . . . . . 128
Funktion - function. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
ganze Zahlen - integers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
Gaußscher Integralsatz - Gauss's theorem . . . . . . . . . . . . . . . . . . . . . . . . . . 155, 158
Gaußsche Transformation - Gauss transformation . . . . . . . . . . . . . . . . . . . . . . . . . . 59
gedrehtes Koordinatensystem - rotated coordinate system . . . . . . . . . . . . . . . . . . . . . 119
TU Braunschweig, CSE Vector and Tensor Calculus 22. Oktober 2003
Glossary German English 261
gemischte Formulierung eines Tensors zweiter Stufe -
mixed formulation of a second order tensor . . . . . . . . . . . . 99
gemischte Komponenten - mixed components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
gerade Permutation - even permutation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
Gesamtnorm - absolute norm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
Gleichgewicht der äußeren Kräfte -
equilibrium system of external forces . . . . . . . . . . . . . . . . . 96
Gleichgewichtsbedingungen - equilibrium conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
Gradient - gradient . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
Gradient eines Ortsvektors - gradient of a vector of position . . . . . . . . . . . . . . . . . . . . . . 144
Hauptachse - principal axis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
Hauptachsen - principal axes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
Hauptachsenproblem - principal axes problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
Hauptdiagonale - main diagonal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41, 65
Hauptspannungen - principal stresses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
Hauptspannungsrichtungen - directions of principal stress . . . . . . . . . . . . . . . . . . . . . . . . 120
Hauptspannungsrichtungen - principal stress directions . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
Heben eines Index - raising an index. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
homogen - homogeneous . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
homogene Norm - homogeneous norm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
homogenes lineares Gleichungssystem -
homogeneous linear equation system . . . . . . . . . . . . . . . . . 65
Homomorphismus - homomorphism. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
homöomorph - homeomorphic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30, 97
Homöomorphismus - homeomorphism . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
Hookesches Gesetz - Hooke's law . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
Höldersche Ungleichung - Hölder sum inequality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
Hülle - span . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
infinitesimal - infinitesimal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
infinitesimaler Tetraeder - infinitesimal tetrahedron . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
injektiv - injective . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
innerer Produktraum - inner product space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
innerer Produktraum - inner product space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26, 27
inneres Produkt - inner product . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25, 85
inneres Produkt von Tensoren - inner product of tensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
Integralnorm, L1-Norm - L1-norm. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
Integralsatz - integral theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
invariant - invariant . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
Invariante - invariant . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120, 123, 148
Invarianz - invariance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
Inverse - inverse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8, 34
inverse Beziehung - inverse relation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
inverse Matrix - inverse of a matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
inverse Transformation - inverse transformation . . . . . . . . . . . . . . . . . . . . . . . . . 101, 103
inverser Tensor - inverse of a tensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
inverses Element der Addition - additive inverse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10, 42
inverses Element der Addition - inverse w.r.t. addition. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
inverses Element der Multiplikation -
inverse w.r.t. multiplication . . . . . . . . . . . . . . . . . . . . . . . . . . 10
inverses Element der Multiplikation -
multiplicative inverse. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
invertierbar - invertible . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
isomorph - isomorphic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29, 35
Isomorphismus - isomorphism . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
isotrop - isotropic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
isotroper Tensor - isotropic tensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
Iterationsvorschrift - iterative process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
Jacobi-Determinante - Jacobian . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
kartesische Basis - Cartesian basis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
kartesische Basisvektoren - Cartesian base vectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
kartesische Komponenten des Permutationstensors -
Cartesian components of a permutation tensor . . . . . . . . . 87
kartesische Koordinaten - Cartesian coordinates . . . . . . . . . . . . . . . . . . . . . . . 78, 82, 144
Kofaktor, algebraisches Komplement -
cofactor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
Kombination - combination . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9, 34, 54
kommutativ - commutative . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
kommutativ bzgl. Matrizenaddition -
commutative under matrix addition . . . . . . . . . . . . . . . . . . . 42
kommutative Matrix - commutative matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
Kommutativgesetz - commutative rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
komplexe Zahlen - complex numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
Komponenten - components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31, 65
Komponenten des Christoffel-Symbols -
components of the Christoffel symbol . . . . . . . . . . . . . . . . 142
Komponenten des Permutationstensors -
components of the permutation tensor . . . . . . . . . . . . . . . . . 87
Komposition - composition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34, 54, 106
kongruent - congruent . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56, 63
Kongruenztransformation, kontragrediente Transformation -
congruence transformation . . . . . . . . . . . . . . . . . . . . . . . 56, 63
konjugiert komplexe Eigenwerte -
complex conjugate eigenvalues . . . . . . . . . . . . . . . . . . . . . . 124
Kontinuum - continuum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
kontravariante Basisvektoren - contravariant base vectors . . . . . . . . . . . . . . . . . . . . . . . 81, 139
kontravariante Basisvektoren der natürlichen Basis -
contravariant base vectors of the natural basis. . . . . . . . . 141
kontravariante Koordinaten, Koeffizienten -
contravariant coordinates . . . . . . . . . . . . . . . . . . . . . . . . . 80, 84
kontravariante Metrikkoeffizienten -
contravariant metric coefficients . . . . . . . . . . . . . . . . . . 82, 83
kontravariantes ε-Symbol - contravariant ε symbol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
Koordinaten - coordinates. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Koordinatenursprung, -nullpunkt -
point of origin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
kovariante Ableitung - covariant derivative. . . . . . . . . . . . . . . . . . . . . . . . . . . . 146, 149
kovariante Basisvektoren - covariant base vectors . . . . . . . . . . . . . . . . . . . . . . . . . . 80, 138
kovariante Basisvektoren der natürlichen Basis -
covariant base vectors of the natural basis . . . . . . . . . . . . 140
kovariante Koordinaten, Koeffizienten -
covariant coordinates . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81, 84
kovariante Metrikkoeffizienten - covariant metric coefficients . . . . . . . . . . . . . . . . . . . . . . 80, 83
kovariantes ε-Symbol - covariant ε symbol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
Kreuzprodukt - cross product . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87, 90, 96
Kronecker-Delta - Kronecker delta. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52, 79, 119
krummlinige Koordinaten - curvilinear coordinates. . . . . . . . . . . . . . . . . . . . . . . . . 139, 144
krummliniges Koordinatensystem -
curvilinear coordinate system . . . . . . . . . . . . . . . . . . . . . . . 139
Kräftegleichgewicht - equilibrium system of forces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
Krümmung - curvature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
Krümmung einer Kurve - curvature of a curve . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
Kugelanteil eines Tensors - volumetric part of a tensor . . . . . . . . . . . . . . . . . . . . . . . . . . 113
Kugelmatrix - volumetric matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
Körper - field . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
Laplace-Operator - Laplace operator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
Laplace-Operator eines Skalarfeldes -
laplacian of a scalar field . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
Laplace-Operator eines Tensorfeldes -
laplacian of a tensor field . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
Laplace-Operator eines Vektorfeldes -
laplacian of a vector field . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
leere Menge - empty set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
linear - linear . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
linear abhängig - linearly dependent . . . . . . . . . . . . . . . . . . . . . . . . . . . 15, 23, 49
linear unabhängig - linearly independent . . . . . . . . . . . . . . . . . . . 15, 23, 48, 59, 66
lineare Abbildung - linear mapping . . . . . . . . . . . . . . . . . . . . . 32, 54, 97, 105, 121
lineare Abhängigkeit - linear dependence . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23, 30, 62
lineare Algebra - linear algebra . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
lineare Mannigfaltigkeit - linear manifold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
lineare Transformation - linear transformation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
lineare Unabhängigkeit - linear independence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23, 30
linearer Operator - linear operator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
linearer Raum - linear space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
linearer Vektorraum - linear vector space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
linearer Unterraum - linear subspace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
lineares Gleichungssystem - linear equation system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
Linearform - linear form. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
Linearitt - linearity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32, 34
Linearkombination - linear combination . . . . . . . . . . . . . . . . . . . . . . . . . . . 15, 49, 71
Linienelement - line element . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
linker Cauchy-Strecktensor - left-hand Cauchy strain tensor . . . . . . . . . . . . . . . . . . . . . . 117
Matrix - matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
Matrix-Norm - matrix norm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21, 22
Matrizenalgebra - matrix calculus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
Matrizenmultiplikation - matrix multiplication . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42, 54
Maximumnorm, ∞-Norm - l-infinity-norm, maximum-norm . . . . . . . . . . . . . . . . . . . . . . 19
Maximumsnorm, p-Norm - p-norm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
Maximumsnorm, p-Norm - maximum-norm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
Mehrfachnullstellen - multiple roots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
Menge - set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
Mengenlehre - set theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
Metrik - metric . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
Metrikkoeffizienten - metric coefficients . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
Metriktensor mit kovarianten Koeffizienten -
metric tensor of covariant coefficients . . . . . . . . . . . . . . . . 102
metrischer Raum - metric space. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
von Mises Iteration - von Mises iteration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
Mittelwert - mean value. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
Momentengleichgewichtsbedingung -
equilibrium condition of moments . . . . . . . . . . . . . . . . . . . 120
Momentengleichgewichtsbedingung -
moment equilibrium condition . . . . . . . . . . . . . . . . . . . . . . . 120
multiplikationsneutrales Element -
multiplicative identity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
multiplikationsneutrales Element -
scalar multiplication identity . . . . . . . . . . . . . . . . . . . . . . . . . 10
Multiplikationsoperation - operation multiplication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
n-Tupel - n-tuple. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
Nabla-Operator - nabla operator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
Nachmultiplikation - post-multiplication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
natürliche Basis - natural basis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
natürliche Zahlen - natural numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
natürliche Zahlen - naturals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
natürliche Zahlen - whole numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
negativ definit - negative definite . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
neutrales Element der Addition - identity element w.r.t. addition . . . . . . . . . . . . . . . . . . . . . . . 10
neutrales Element der Multiplikation -
identity element w.r.t. scalar multiplication . . . . . . . . . . . . 10
nicht kommutativ - noncommutative . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69, 106
nicht leere Menge - non-empty set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
nicht triviale Lösung - nontrivial solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
nicht-kommutativ - non-commutative . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
Norm - norm. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18, 65
Norm eines Tensors - norm of a tensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
normale Basis - normal basis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
Normaleneinheitsvektor - normal unit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
Normaleneinheitsvektor - normal unit vector . . . . . . . . . . . . . . . . . . . . . . . . 121, 136, 138
Normalenvektor - normal vector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96, 135
normierter Raum - normed space. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
Normquadrate - quadratic value of the norm . . . . . . . . . . . . . . . . . . . . . . . . . . 60
Nullabbildung - null mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Nullelement - zero element . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
Nullstellen - roots. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
Nullstellen - zeros . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
Nullvektor - zero vector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
obenstehender Index - superscript index. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
obere Schranke - supremum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
Oberfläche - surface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
Oberflächenelement - surface element . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
Oberflächenintegral - surface integral . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
Obermenge - superset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
Operation - operation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
Ordnung einer Matrix - order of a matrix. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
orthogonal - orthogonal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
orthogonale Transformation - orthogonal transformation . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
orthogonale Matrix - orthogonal matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
orthogonaler Tensor - orthogonal tensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
orthonormale Basis - orthonormal basis. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
Ortsvektor - position vector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135, 152
Ortsvektoren - vector of position . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
Parallelepiped - parallelepiped . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
partielle Ableitungen - partial derivatives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
partielle Ableitungen von Basisvektoren -
partial derivatives of base vectors . . . . . . . . . . . . . . . . . . . . 145
Permutationen - permutations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
Permutationssymbol - permutation symbol . . . . . . . . . . . . . . . . . . . . . . . . 87, 112, 128
Permutationstensor - permutation tensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
polare Zerlegung - polar decomposition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
Polynom n-ten Grades - polynomial of n-th degree . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
Polynomzerlegung - polynomial factorization. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
positiv definit - positive definite . . . . . . . . . . . . . . . . . . . . . . . 25, 61, 62, 111
positive Metrik - positive metric. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
positive Norm - positive norm. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
Potentialeigenschaft - potential character . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
Potenzreihe - power series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
Produkt - product . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
Produkt von Tensoren zweiter Stufe -
second order tensor product . . . . . . . . . . . . . . . . . . . . . . . . . 105
Punktprodukt - dot product . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
quadratisch - square . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
quadratische Form - quadratic form . . . . . . . . . . . . . . . . . . . . . . . . . . 26, 57, 62, 124
quadratische Matrix - square matrix. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
Querkontraktionszahl - Poisson's ratio . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
Rang - rank . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
Rangabfall - reduction of rank . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
rationale Zahlen - rational numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
Raum - space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
Raum der quadratischen Matrizen -
space of square matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
Raum der stetige Funktionen - space of continuous functions . . . . . . . . . . . . . . . . . . . . . . . . 14
Raumfläche, gekrümmte Oberfläche -
curved surface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
Raumkurve - space curve . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
Rayleigh-Quotient - Rayleigh quotient . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67, 68
Rechenregeln fr Skalarprodukte von Tensoren -
identities for scalar products of tensors . . . . . . . . . . . . . . . 110
Rechenregeln fr Tensorprodukte -
identities for tensor products . . . . . . . . . . . . . . . . . . . . . . . . 106
Rechteckmatrix - rectangular matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
rechter Cauchy-Strecktensor - right-hand Cauchy strain tensor . . . . . . . . . . . . . . . . . . . . . 117
reelle Zahlen - real numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
regulär, nicht singulär - nonsingular . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48, 59, 66
reguläre quadratische Matrix - nonsingular square matrix . . . . . . . . . . . . . . . . . . . . . . . . . . 55
Richtungsvektoren - vector of associated direction . . . . . . . . . . . . . . . . . . . . . . . 120
Riesz Abbildungssatz - Riesz representation theorem. . . . . . . . . . . . . . . . . . . . . . . . . 36
Rotation - curl . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
Rotation eines Vektorfeldes - rotation of a vector eld . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
Rotor - rotator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
schiefsymmetrische Matrix - antisymmetric matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
schief- oder antisymmetrischer Anteil eines Tensors -
skew part of a tensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
schiefsymmetrisch - antisymmetric . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
Schnittfläche - section surface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
Schnittmenge - intersection. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
Schubspannungen - shear stresses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
Schwarzsche oder Cauchy-Schwarzsche Ungleichung -
Cauchy's inequality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
Schwarzsche Ungleichung - Schwarz inequality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26, 111
semidefinit - semidefinite . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
Senken eines Index - lowering an index. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
skalare Invariante - scalar invariant . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
skalare Multiplikation - scalar multiplication . . . . . . . . . . . . . . . . . . . . . . . . . . . 9, 12, 42
Skalarfeld - scalar eld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
Skalarfunktion - scalar function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
Skalarprodukt - scalar product . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9, 25, 85, 96
Skalarprodukt von Tensoren - scalar product of tensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
Skalarprodukt zweier Dyadenprodukte -
scalar product of two dyads . . . . . . . . . . . . . . . . . . . . . . . . . 111
skalarwertige Funktion mehrerer Veränderlicher -
scalar-valued function of multiple variables . . . . . . . . . . 134
skalarwertige Skalarfunktion - scalar-valued scalar function . . . . . . . . . . . . . . . . . . . . . . . . 133
skalarwertige Vektorfunktion - scalar-valued vector function. . . . . . . . . . . . . . . . . . . . . . . . 143
Spalte - column . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
Spaltenindex - column index. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
Spaltenmatrix - column matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28, 40, 46
Spaltennorm - maximum absolute column sum norm. . . . . . . . . . . . . . . . . 22
Spaltenvektor - column vector . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28, 40, 46, 59
Spannungstensor - stress tensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96, 129
Spannungsvektor - stress vector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
Spannungszustand - stress state . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
Spatprodukt - scalar triple product . . . . . . . . . . . . . . . . . . . . . . . . . 88, 90, 152
Spektralnorm, Hilbert-Norm - spectral norm. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
spezielles Eigenwertproblem - special eigenvalue problem . . . . . . . . . . . . . . . . . . . . . . . . . . 65
Spur einer Matrix - trace of a matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
Spur eines Tensors - trace of a tensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
Stokesscher Integralsatz, Integralsatz für ein Kreuzprodukt -
Stokes' theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
stummer Index - dummy index. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
Summenkonvention - summation convention . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
Summennorm, l1-Norm - l1-norm. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
surjektiv - surjective . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
Symbole - symbols . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
symmetrisch - symmetric . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25, 41
symmetrische Matrix - symmetric matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
symmetrische Metrik - symmetric metric . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
symmetrischer Anteil - symmetric part . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
symmetrischer Anteil eines Tensors -
symmetric part of a tensor . . . . . . . . . . . . . . . . . . . . . . . . . . 115
symmetrischer Tensor - symmetric tensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
Tangenteneinheitsvektor - tangent unit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
Tangenteneinheitsvektor - tangent unit vector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
Tangentenvektor - tangent vector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
Taylor-Reihe - Taylor series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133, 153
Tensor - tensor. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
Tensor dritter Stufe - third order tensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
Tensor erster Stufe - rst order tensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
Tensor höherer Stufe - higher order tensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
Tensor mit kontravarianten Basisvektoren und kovarianten Koeffizienten -
tensor with contravariant base vectors and covariant coordinates . . . . . . . . . . . . . . . . . . . . . 100
Tensor mit kovarianten Basisvektoren und kontravarianten Koeffizienten - tensor with covariant base vectors and contravariant coordinates . . . . . . . . . . . . . . . . . . . 99
Tensor vierter Stufe - fourth order tensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
Tensor zweiter Stufe - second order tensor . . . . . . . . . . . . . . . . . . . . . . . . . 96, 97, 127
Tensorfeld - tensor field . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
Tensorprodukt - tensor product . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105, 106
Tensorprodukt zweier Dyadenprodukte - tensor product of two dyads . . . . . . . . . . . . . . . . . . . 106
Tensorraum - tensor space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
tensorwertige Funktion mehrerer Veränderlicher - tensor-valued function of multiple variables . . . . . . . . . . 134
tensorwertige Skalarfunktion - tensor-valued scalar function. . . . . . . . . . . . . . . . . . . . . . . . 133
tensorwertige Vektorfunktion - tensor-valued vector function . . . . . . . . . . . . . . . . . . . . . . . 143
Topologie - topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
Torsion einer Kurve - torsion of a curve . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
Transformationsformeln - transformation relations . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
Transformation der Basisvektoren - transformation of base vectors . . . . . . . . . . . . . . . . . . . . 101
Transformation der Metrikkoeffizienten - transformation of the metric coefficients . . . . . . . . . . . . 84
Transformationsmatrix - transformation matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
Transformationstensor - transformation tensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
transformierter kontravarianter Basisvektor - transformed contravariant base vector . . . . . . . . . . . 103
transformierter kovarianter Basisvektor - transformed covariant base vector . . . . . . . . . . . . . . . 103
transponierte Matrix - matrix transpose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
transponierte Matrix - transpose of a matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
Transponierter Tensor - transpose of a tensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
transponiertes Matrizenprodukt - transpose of a matrix product . . . . . . . . . . . . . . . . . . . . . . 44
triviale Lösung - trivial solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
überstrichene Basis - overlined basis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
übliches Skalarprodukt - usual scalar product . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
Umkehrung - inversion. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
ungerade Permutation - odd permutation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
unitärer Raum - unitary space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
unitärer Vektorraum - unitary vector space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
unsymmetrisch, nicht symmetrisch - nonsymmetric . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
untenstehender Index - subscript index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
Untermenge - subset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
Urbild - range . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
Ursprung, Nullelement - origin. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
Vektor - vector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12, 28, 127
Vektor-Norm - vector norm. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18, 22
Vektorfeld - vector field . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
Vektorfunktion - vector function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
Vektorprodukt - vector product . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
Vektorraum - vector space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12, 49
Vektorraum der linearen Abbildungen - vector space of linear mappings . . . . . . . . . . . . . . . . . . . 33
vektorwertige Funktion - vector-valued function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
vektorwertige Funktion mehrerer Veränderlicher - vector-valued function of multiple variables . . . . . . . 134
vektorwertige Skalarfunktion - vector-valued scalar function. . . . . . . . . . . . . . . . . . . . . . . . 133
vektorwertige Vektorfunktion - vector-valued vector function . . . . . . . . . . . . . . . . . . . . . . . 143
Vereinigungsmenge - union . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
verträglich - compatible . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
Verträglichkeit von Vektor- und Matrix-Norm - compatibility of vector and matrix norms . . . . . . . . . . . 22
Verzerrungstensor, Dehnungstensor - strain tensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
Vietasche Wurzelsätze - Newton's relation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
vollständiger Tensor zweiter Stufe - complete second order tensor . . . . . . . . . . . . . . . . . . . . . 99
vollständiger Tensor dritter Stufe - complete third order tensor . . . . . . . . . . . . . . . . . . . . . 129
vollständiger Tensor vierter Stufe - complete fourth order tensor . . . . . . . . . . . . . . . . . . . . 129
vollständiges Differential - exact differential . . . . . . . . . . . . . . . . . . . . . . . . . . 133, 135
vollständiges Differential - total differential . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
Volumen - volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
Volumenelement - volume element . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
Volumenintegral - volume integral . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
Vormultiplikation - pre-multiplication. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
Zeile - row. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
Zeilenindex - row index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
Zeilenmatrix - row matrix. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40, 45
Zeilennorm - maximum absolute row sum norm. . . . . . . . . . . . . . . . . . . . 22
Zeilenvektor - row vector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40, 45
zweite Ableitung - second derivative . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
Index
L1-norm, 19
L2-norm, 19
l1-norm, 19
l2-norm, Euclidean norm, 19
n-tuple, 28
p-norm, 19
absolute norm, 22
absolute value, 85
absolute value of a tensor, 111, 113
addition, 12, 13
additive, 32
additive identity, 10, 12, 42
additive inverse, 10, 12, 42
affine vector, 28
affine vector space, 28, 54
antisymmetric, 41
antisymmetric matrix, 41
antisymmetric part, 44
antisymmetric part of a tensor, 115
area vector, 152
associative, 42
associative rule, 10, 12
associative under matrix addition, 42
base vectors, 31, 87
basis, 31
basis of the vector space, 15
bijective, 8
bilinear, 25
bilinear form, 26
binormal unit, 137
binormal unit vector, 137
Cantor, 6
Cartesian base vectors, 88
Cartesian basis, 149
Cartesian components of a permutation tensor, 87
Cartesian coordinates, 78, 82, 144
Cauchy, 96
Cauchy stress tensor, 96, 120
Cauchy's inequality, 21
Cayley-Hamilton Theorem, 71
characteristic equation, 65
characteristic matrix, 70
characteristic polynomial, 56, 65, 123
Christoffel symbol, 142
cofactor, 51
column, 40
column index, 40
column matrix, 28, 40, 46
column vector, 28, 40, 46, 59
combination, 9, 34, 54
commutative, 43
commutative matrix, 43
commutative rule, 10, 12
commutative under matrix addition, 42
compatibility of vector and matrix norms, 22
compatible, 22
complete fourth order tensor, 129
complete second order tensor, 99
complete third order tensor, 129
complex conjugate eigenvalues, 124
complex numbers, 7
components, 31, 65
components of the Christoffel symbol, 142
components of the permutation tensor, 87
composition, 34, 54, 106
congruence transformation, 56, 63
congruent, 56, 63
continuum, 96
contravariant ε symbol, 93
contravariant base vectors, 81, 139
contravariant base vectors of the natural basis, 141
contravariant coordinates, 80, 84
contravariant metric coefficients, 82, 83
coordinates, 31
covariant ε symbol, 92
covariant base vectors, 80, 138
covariant base vectors of the natural basis, 140
covariant coordinates, 81, 84
covariant derivative, 146, 149
covariant metric coefficients, 80, 83
cross product, 87, 90, 96
curl, 150
curvature, 135
curvature of a curve, 136
curved surface, 138
curvilinear coordinate system, 139
curvilinear coordinates, 139, 144
definite metric, 16
definite norm, 18
deformation energy, 129
deformation gradient, 118
derivative of a scalar, 133
derivative of a tensor, 134
derivative of a vector, 133
derivative w.r.t. a scalar variable, 133
derivatives, 133
derivatives of base vectors, 141, 145
determinant, 50, 65, 89
determinant expansion by minors, 51
determinant of a tensor, 112
determinant of the contravariant metric coefficients, 83
determinant of the Jacobian matrix, 140
deviator matrix, 46
deviator part of a tensor, 113
diagonal matrix, 41, 43
differential element of area, 139
dimension, 13, 14
direct method, 68
direct product, 94
directions of principal stress, 120
discrete metric, 17
distance, 17
distributive, 42
distributive law, 10, 13
distributive w.r.t. addition, 10
distributive w.r.t. scalar addition, 13
distributive w.r.t. vector addition, 13
divergence of a tensor field, 147
divergence of a vector field, 147
divergence theorem, 156
domain, 8
dot product, 85
dual space, 36, 97
dual vector space, 36
dummy index, 78
dyadic product, 94-96
eigenvalue, 65, 120
eigenvalue problem, 22, 65, 122, 123
eigenvalues, 22, 56
eigenvector, 65, 120
eigenvector matrix, 70
Einstein, 78
elastic, 129
elasticity tensor, 129
elasticity theory, 129
elements, 6
empty set, 7
equilibrium conditions, 96
equilibrium condition of moments, 120
equilibrium system of external forces, 96
equilibrium system of forces, 96
Euclidean norm, 22, 30, 85
Euclidean space, 17
Euclidean vector, 143
Euclidean vector space, 26, 29, 143
Euclidean matrix norm, 60
even permutation, 50
exact differential, 133, 135
field, 10, 12, 143
finite, 13
finite element method, 57
first order tensor, 127
fourth order tensor, 129
Fréchet derivative, 143
free indices, 78
function, 8
fundamental tensor, 150
Gauss, 60
Gauss transformation, 59
Gauss's theorem, 155, 158
general eigenvalue problem, 69
general permutation symbol, 92
gradient, 143
gradient of a vector of position, 144
higher order tensor, 127
homeomorphic, 30, 97
homeomorphism, 30
homogeneous, 32
homogeneous linear equation system, 65
homogeneous norm, 18
homomorphism, 32
Hooke's law, 129
Hölder sum inequality, 21
identities for scalar products of tensors, 110
identities for tensor products, 106
identity, 12
identity element w.r.t. addition, 10
identity element w.r.t. scalar multiplication, 10, 12
identity matrix, 41, 45, 80
identity tensor, 112
image set, 8
infinitesimal, 96
infinitesimal tetrahedron, 96
injective, 8
inner product, 25, 85
inner product of tensors, 110
inner product space, 26, 27, 29
integers, 7
integral theorem, 156
intersection, 7
invariance, 57
invariant, 57, 120, 123, 148
inverse, 8, 34
inverse of a matrix, 48
inverse of a tensor, 115
inverse relation, 103
inverse transformation, 101, 103
inverse w.r.t. addition, 10, 12
inverse w.r.t. multiplication, 10
inversion, 48
invertible, 48
isomorphic, 29, 35
isomorphism, 35
isotropic, 129
isotropic tensor, 119
iterative process, 68
Jacobian, 140
Kronecker delta, 52, 79, 119
l-infinity-norm, maximum-norm, 19
Laplace operator, 150
laplacian of a scalar field, 150
laplacian of a tensor field, 151
laplacian of a vector field, 150
left-hand Cauchy strain tensor, 117
Leibniz, 50
line element, 139
linear, 32
linear algebra, 3
linear combination, 15, 49, 71
linear dependence, 23, 30, 62
linear equation system, 48
linear form, 36
linear independence, 23, 30
linear manifold, 15
linear mapping, 32, 54, 97, 105, 121
linear operator, 32
linear space, 12
linear subspace, 15
linear transformation, 32
linear vector space, 12
linearity, 32, 34
linearly dependent, 15, 23, 49
linearly independent, 15, 23, 48, 59, 66
lowering an index, 83
main diagonal, 41, 65
map, 8
mapping, 8
matrix, 40
matrix calculus, 28
matrix multiplication, 42, 54
matrix norm, 21, 22
matrix transpose, 41
maximum absolute column sum norm, 22
maximum absolute row sum norm, 22
maximum-norm, 20
mean value, 153
metric, 16
metric coefficients, 138
metric space, 17
metric tensor of covariant coefficients, 102
mixed components, 99
mixed formulation of a second order tensor, 99
moment equilibrium condition, 120
moving trihedron, 137
multiple roots, 70
multiplicative identity, 45
multiplicative inverse, 10
n-tuple, 35
nabla operator, 146
natural basis, 140
natural numbers, 6, 7
naturals, 7
negative definite, 62
Newton's relation, 66
non-empty set, 13
non-commutative, 45, 69, 106
nonsingular, 48, 59, 66
nonsingular square matrix, 55
nonsymmetric, 69
nontrivial solution, 65
norm, 18, 65
norm of a tensor, 111
normal basis, 103
normal unit, 136
normal unit vector, 121, 136, 138
normal vector, 96, 135
normed space, 18
null mapping, 33
odd permutation, 50
one, 10
operation, 9
operation addition, 10
operation multiplication, 10
order of a matrix, 40
origin, 12
orthogonal, 66
orthogonal matrix, 57
orthogonal tensor, 116
orthogonal transformation, 57
orthonormal basis, 144
outer product, 87
overlined basis, 103
parallelepiped, 88
partial derivatives, 134
partial derivatives of base vectors, 145
permutation symbol, 87, 112, 128
permutation tensor, 128
permutations, 50
point of origin, 28
Poisson's ratio, 129
polar decomposition, 117
polynomial factorization, 66
polynomial of n-th degree, 65
position vector, 135, 152
positive definite, 25, 61, 62, 111
positive metric, 16
positive norm, 18
post-multiplication, 45
potential character, 129
power series, 71
pre-multiplication, 45
principal axes, 120
principal axes problem, 65
principal axis, 65
principal stress directions, 122
principal stresses, 122
product, 10, 12
proper orthogonal tensor, 116
quadratic form, 26, 57, 62, 124
quadratic value of the norm, 60
raising an index, 83
range, 8
rank, 48
rational numbers, 7
Rayleigh quotient, 67, 68
real numbers, 7
rectangular matrix, 40
reduction of rank, 66
Riesz representation theorem, 36
right-hand Cauchy strain tensor, 117
roots, 65
rotated coordinate system, 119
rotation matrix, 58
rotation of a vector field, 150
rotation transformation, 58
rotator, 116
row, 40
row index, 40
row matrix, 40, 45
row vector, 40, 45
scalar field, 143
scalar function, 133
scalar invariant, 149
scalar multiplicative identity, 12
scalar multiplication, 9, 12, 13, 42
scalar multiplication identity, 10
scalar product, 9, 25, 85, 96
scalar product of tensors, 110
scalar product of two dyads, 111
scalar triple product, 88, 90, 152
scalar-valued function of multiple variables, 134
scalar-valued scalar function, 133
scalar-valued vector function, 143
Schwarz inequality, 26, 111
second derivative, 133
second order tensor, 96, 97, 127
second order tensor product, 105
section surface, 121
semidefinite, 62
Serret-Frenet equations, 137
set, 6
set theory, 6
shear stresses, 121
similar, 55, 69
similarity transformation, 55
simple fourth order tensor, 129
simple second order tensor, 94, 99
simple third order tensor, 129
skew part of a tensor, 115
symmetric tensor, 124
space, 12
space curve, 135
space of continuous functions, 14
space of square matrices, 14
span, 15
special eigenvalue problem, 65
spectral norm, 22
square, 40
square matrix, 40
Stokes' theorem, 157
strain tensor, 129
stress state, 96
stress tensor, 96, 129
stress vector, 96
subscript index, 78
subset, 7
summation convention, 78
superscript index, 78
superset, 7
supremum, 22
surface, 152
surface element, 152
surface integral, 152
surjective, 8
symbols, 6
symmetric, 25, 41
symmetric matrix, 41
symmetric metric, 16
symmetric part, 44
symmetric part of a tensor, 115
tangent unit, 135
tangent unit vector, 135
tangent vector, 135
Taylor series, 133, 153
tensor, 96
tensor axioms, 98
tensor field, 143
tensor product, 105, 106
tensor product of two dyads, 106
tensor space, 94
tensor with contravariant base vectors and covariant coordinates, 100
tensor with covariant base vectors and contravariant coordinates, 99
tensor-valued function of multiple variables, 134
tensor-valued scalar function, 133
tensor-valued vector function, 143
third order fundamental tensor, 128
third order tensor, 127
topology, 30
torsion of a curve, 137
total differential, 133
trace of a matrix, 43
trace of a tensor, 112
transformation matrix, 55
transformation of base vectors, 101
transformation of the metric coefficients, 84
transformation relations, 103
transformation tensor, 101
transformed contravariant base vector, 103
transformed covariant base vector, 103
transpose of a matrix, 41
transpose of a matrix product, 44
transpose of a tensor, 114
triangle inequality, 16, 18, 19
trivial solution, 65
union, 7
unit matrix, 80
unitary space, 27
unitary vector space, 29
usual scalar product, 25
vector, 12, 28, 127
vector field, 143
vector function, 135
vector norm, 18, 22
vector of associated direction, 120
vector of position, 143
vector product, 87
vector space, 12, 49
vector space of linear mappings, 33
vector-valued function, 138
vector-valued function of multiple variables, 134
vector-valued scalar function, 133
vector-valued vector function, 143
visual space, 97
volume, 152
volume element, 152
volume integral, 152
volumetric matrix, 46
volumetric part of a tensor, 113
von Mises, 68
von Mises iteration, 68
whole numbers, 7
Young's modulus, 129
zero element, 10
zero vector, 12
zeros, 65