
Technische Universität

Braunschweig

CSE – Computational Sciences in Engineering

An International, Interdisciplinary, and Bilingual Master of Science Programme

Introduction to
Continuum Mechanics

Vector and Tensor Calculus

Winter Semester 2002 / 2003

Franz-Joseph Barthold 1 Jörg Stieghan 2

22nd October 2003

1 Tel. ++49-(0)531-391-2240, Fax ++49-(0)531-391-2242, email fj.barthold@tu-bs.de


2 Tel. ++49-(0)531-391-2247, Fax ++49-(0)531-391-2242, email j.stieghan@tu-bs.de
Publisher
Prof. Dr.-Ing. Franz-Joseph Barthold, M.Sc.

Organisation and Administration
Dipl.-Ing. Jörg Stieghan, SFI
CSE – Computational Sciences in Engineering
Technische Universität Braunschweig
Bültenweg 17, 38 106 Braunschweig
Tel. ++49-(0)531-391-2247
Fax ++49-(0)531-391-2242
email j.stieghan@tu-bs.de

Abstract

Zusammenfassung

© 2000 Prof. Dr.-Ing. Franz-Joseph Barthold, M.Sc.
and Dipl.-Ing. Jörg Stieghan, SFI
CSE – Computational Sciences in Engineering
Technische Universität Braunschweig
Bültenweg 17, 38 106 Braunschweig

All rights reserved, in particular the right of translation into foreign languages. Without the permission of the authors it is not allowed to reproduce this booklet in whole or in part by photomechanical means (photocopy, microcopy) or to store it in electronic media.
Preface

Braunschweig, 22nd October 2003 Franz-Joseph Barthold and Jörg Stieghan


Contents

Contents . . . VII
List of Figures . . . IX
List of Tables . . . XI

1 Introduction . . . 1

2 Basics on Linear Algebra . . . 3
2.1 Sets . . . 6
2.2 Mappings . . . 8
2.3 Fields . . . 10
2.4 Linear Spaces . . . 12
2.5 Metric Spaces . . . 16
2.6 Normed Spaces . . . 18
2.7 Inner Product Spaces . . . 25
2.8 Affine Vector Space and the Euclidean Vector Space . . . 28
2.9 Linear Mappings and the Vector Space of Linear Mappings . . . 32
2.10 Linear Forms and Dual Vector Spaces . . . 36

3 Matrix Calculus . . . 37
3.1 Definitions . . . 40
3.2 Some Basic Identities of Matrix Calculus . . . 42
3.3 Inverse of a Square Matrix . . . 48
3.4 Linear Mappings of an Affine Vector Space . . . 54
3.5 Quadratic Forms . . . 62
3.6 Matrix Eigenvalue Problem . . . 65

4 Vector and Tensor Algebra . . . 75
4.1 Index Notation and Basis . . . 78
4.2 Products of Vectors . . . 85
4.3 Tensors . . . 96
4.4 Transformations and Products of Tensors . . . 101
4.5 Special Tensors and Operators . . . 112
4.6 The Principal Axes of a Tensor . . . 120
4.7 Higher Order Tensors . . . 127

5 Vector and Tensor Analysis . . . 131
5.1 Vector and Tensor Derivatives . . . 133
5.2 Derivatives and Operators of Fields . . . 143
5.3 Integral Theorems . . . 152

6 Exercises . . . 159
6.1 Application of Matrix Calculus on Bars and Plane Trusses . . . 162
6.2 Calculating a Structure with the Eigenvalue Problem . . . 174
6.3 Fundamentals of Tensors in Index Notation . . . 182
6.4 Various Products of Second Order Tensors . . . 190
6.5 Deformation Mappings . . . 194
6.6 The Moving Trihedron, Derivatives and Space Curves . . . 198
6.7 Tensors, Stresses and Cylindrical Coordinates . . . 210

A Formulary . . . 227
A.1 Formulary Tensor Algebra . . . 227
A.2 Formulary Tensor Analysis . . . 233

B Nomenclature . . . 237

References . . . 239

Glossary English – German . . . 241

Glossary German – English . . . 257

Index . . . 273

List of Figures

2.1 Triangle inequality . . . 16
2.2 Hölder sum inequality . . . 21
2.3 Vector space R^2 . . . 28
2.4 Affine vector space R^2_affine . . . 28
2.5 The scalar product in a 2-dimensional Euclidean vector space . . . 30
3.1 Matrix multiplication . . . 43
3.2 Matrix multiplication for a composition of matrices . . . 55
3.3 Orthogonal transformation . . . 58
4.1 Example of co- and contravariant base vectors in E^2 . . . 81
4.2 Special case of a Cartesian basis . . . 82
4.3 Projection of a vector v on the direction of the vector u . . . 86
4.4 Resulting stress vector . . . 96
4.5 Resulting stress vector . . . 97
4.6 The polar decomposition . . . 117
4.7 An example of the physical components of a second order tensor . . . 119
4.8 Principal axis problem with Cartesian coordinates . . . 120
5.1 The tangent vector at a point P on a space curve . . . 136
5.2 The moving trihedron . . . 137
5.3 The covariant base vectors of a curved surface . . . 138
5.4 Curvilinear coordinates in a Cartesian coordinate system . . . 140
5.5 The natural basis of a curvilinear coordinate system . . . 141
5.6 The volume element dV with the surface dA . . . 152
5.7 The volume, the surface and the subvolumes of a body . . . 154
6.1 A simple statically determinate plane truss . . . 162
6.2 Free-body diagram for node 2 . . . 162
6.3 Free-body diagrams for nodes 1 and 3 . . . 163
6.4 A simple statically indeterminate plane truss . . . 164
6.5 Free-body diagrams for nodes 2 and 4 . . . 165
6.6 An arbitrary bar and its local coordinate system x̃, ỹ . . . 166
6.7 An arbitrary bar in a global coordinate system . . . 167
6.8 The given structure of rigid bars . . . 174
6.9 The free-body diagrams of the subsystems left of node C, and right of node D, after the excursion . . . 175
6.10 The free-body diagram of the complete structure after the excursion . . . 176
6.11 Matrix multiplication . . . 182
6.12 Example of co- and contravariant base vectors in E^2 . . . 184
6.13 The given spiral staircase . . . 198
6.14 The winding up of the given spiral staircase . . . 199
6.15 An arbitrary line element with the forces and moments in its sectional areas . . . 204
6.16 The free-body diagram of the loaded spiral staircase . . . 207
6.17 The given cylindrical shell . . . 210

List of Tables

2.1 Compatibility of norms . . . 23

Chapter 1

Introduction

Chapter 2

Basics on Linear Algebra

On vector spaces see for example HALMOS [6], and ABRAHAM, MARSDEN, and RATIU [1], and in German DE BOER [3], and STEIN ET AL. [13]. German texts on linear algebra are JÄNICH [8], FISCHER [4], FISCHER [9], and BEUTELSPACHER [2].


Chapter Table of Contents

2.1 Sets . . . 6
2.1.1 Denotations and Symbols of Sets . . . 6
2.1.2 Subset, Superset, Union and Intersection . . . 7
2.1.3 Examples of Sets . . . 7
2.2 Mappings . . . 8
2.2.1 Definition of a Mapping . . . 8
2.2.2 Injective, Surjective and Bijective . . . 8
2.2.3 Definition of an Operation . . . 9
2.2.4 Examples of Operations . . . 9
2.2.5 Counter-Examples of Operations . . . 9
2.3 Fields . . . 10
2.3.1 Definition of a Field . . . 10
2.3.2 Examples of Fields . . . 11
2.3.3 Counter-Examples of Fields . . . 11
2.4 Linear Spaces . . . 12
2.4.1 Definition of a Linear Space . . . 12
2.4.2 Examples of Linear Spaces . . . 14
2.4.3 Linear Subspace and Linear Manifold . . . 15
2.4.4 Linear Combination and Span of a Subspace . . . 15
2.4.5 Linear Independence . . . 15
2.4.6 A Basis of a Vector Space . . . 15
2.5 Metric Spaces . . . 16
2.5.1 Definition of a Metric . . . 16
2.5.2 Examples of Metrics . . . 17
2.5.3 Definition of a Metric Space . . . 17
2.5.4 Examples of a Metric Space . . . 17
2.6 Normed Spaces . . . 18
2.6.1 Definition of a Norm . . . 18
2.6.2 Definition of a Normed Space . . . 18
2.6.3 Examples of Vector Norms and Normed Vector Spaces . . . 18
2.6.4 Hölder Sum Inequality and Cauchy's Inequality . . . 20
2.6.5 Matrix Norms . . . 21
2.6.6 Compatibility of Vector and Matrix Norms . . . 22
2.6.7 Vector and Matrix Norms in Eigenvalue Problems . . . 22
2.6.8 Linear Dependence and Independence . . . 23
2.7 Inner Product Spaces . . . 25
2.7.1 Definition of a Scalar Product . . . 25
2.7.2 Examples of Scalar Products . . . 25
2.7.3 Definition of an Inner Product Space . . . 26
2.7.4 Examples of Inner Product Spaces . . . 26
2.7.5 Unitary Space . . . 27
2.8 Affine Vector Space and the Euclidean Vector Space . . . 28
2.8.1 Definition of an Affine Vector Space . . . 28
2.8.2 The Euclidean Vector Space . . . 29
2.8.3 Linear Independence, and a Basis of the Euclidean Vector Space . . . 30
2.9 Linear Mappings and the Vector Space of Linear Mappings . . . 32
2.9.1 Definition of a Linear Mapping . . . 32
2.9.2 The Vector Space of Linear Mappings . . . 32
2.9.3 The Basis of the Vector Space of Linear Mappings . . . 33
2.9.4 Definition of a Composition of Linear Mappings . . . 34
2.9.5 The Attributes of a Linear Mapping . . . 34
2.9.6 The Representation of a Linear Mapping by a Matrix . . . 35
2.9.7 The Isomorphism of Vector Spaces . . . 35
2.10 Linear Forms and Dual Vector Spaces . . . 36
2.10.1 Definition of Linear Forms and Dual Vector Spaces . . . 36
2.10.2 A Basis of the Dual Vector Space . . . 36
2.1 Sets

2.1.1 Denotations and Symbols of Sets

A set M is a finite or infinite collection of objects, so-called elements, in which order has no significance, and multiplicity is generally also ignored. Set theory was originally founded by Cantor¹. In advance the meanings of some often used symbols and denotations are given below.

• m_1 ∈ M : m_1 is an element of the set M.

• m_2 ∉ M : m_2 is not an element of the set M.

• {. . .} : The term(s) or element(s) included in this type of brackets describe a set.

• {. . . | . . .} : The terms on the left-hand side of the vertical bar are the elements of the given set, and the terms on the right-hand side of the bar describe the characteristics of the elements included in this set.

• ∨ : An "OR"-combination of two terms or elements.

• ∧ : An "AND"-combination of two terms or elements.

• ∀ : The following condition(s) should hold for all mentioned elements.

• =⇒ : This arrow means that the term on the left-hand side implies the term on the right-hand side.

Sets could be given by . . .

• an enumeration of its elements, e.g.

M_1 = {1, 2, 3} . (2.1.1)

The set M_1 consists of the elements 1, 2, 3;

N = {1, 2, 3, . . .} . (2.1.2)

The set N includes all integers larger than or equal to one, and it is also called the set of natural numbers.

• the description of the attributes of its elements, e.g.

M_2 = {m | (m ∈ M_1) ∨ (−m ∈ M_1)} = {1, 2, 3, −1, −2, −3} . (2.1.3)

The set M_2 includes all elements m with the attribute that m is an element of the set M_1, or that −m is an element of the set M_1. In this example these elements are just 1, 2, 3 and −1, −2, −3.

¹ Georg Cantor (1845-1918)

2.1.2 Subset, Superset, Union and Intersection

A set A is called a subset of B, if and only if² every element of A is also included in B,

A ⊆ B ⇐⇒ (∀a ∈ A ⇒ a ∈ B) . (2.1.4)

The set B is called the superset of A,

B ⊇ A . (2.1.5)

The union C of two sets A and B is the set of all elements that are an element of at least one of the sets A and B,

C = A ∪ B = {c | (c ∈ A) ∨ (c ∈ B)} . (2.1.6)

The intersection C of two sets A and B is the set of all elements common to the sets A and B,

C = A ∩ B = {c | (c ∈ A) ∧ (c ∈ B)} . (2.1.7)

² The expression "if and only if" is often abbreviated with "iff".

2.1.3 Examples of Sets

Example: The empty set. The empty set contains no elements and is denoted by

∅ = { } . (2.1.8)

Example: The set of natural numbers. The set of natural numbers, or just the naturals, N, sometimes also the whole numbers, is defined by

N = {1, 2, 3, . . .} . (2.1.9)

Unfortunately, zero "0" is sometimes also included in the list of natural numbers; then the set is given by

N_0 = {0, 1, 2, 3, . . .} . (2.1.10)

Example: The set of integers. The set of the integers Z is given by

Z = {z | (z = 0) ∨ (z ∈ N) ∨ (−z ∈ N)} . (2.1.11)

Example: The set of rational numbers. The set of rational numbers Q is described by

Q = {z/n | (z ∈ Z) ∧ (n ∈ N)} . (2.1.12)

Example: The set of real numbers. The set of real numbers is defined by

R = {. . .} . (2.1.13)

Example: The set of complex numbers. The set of complex numbers is given by

C = {α + β i | (α, β ∈ R) ∧ (i = √−1)} . (2.1.14)
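The set operations (2.1.4), (2.1.6) and (2.1.7) translate directly into Python's built-in set type; a minimal sketch, with M_1 and M_2 taken from the examples above:

    # Union (2.1.6), intersection (2.1.7) and subset test (2.1.4) with Python sets.
    M1 = {1, 2, 3}
    M2 = {m for m in range(-3, 4) if m in M1 or -m in M1}   # {1, 2, 3, -1, -2, -3}

    print(M1 | M2)    # union: {1, 2, 3, -1, -2, -3} (print order is arbitrary)
    print(M1 & M2)    # intersection: {1, 2, 3}
    print(M1 <= M2)   # subset test: True, M1 is a subset of M2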
2.2 Mappings

2.2.1 Definition of a Mapping

Let A and B be sets. Then a mapping, or just a map, of A on B is a function f that assigns to every a ∈ A one unique f(a) ∈ B,

f : A → B , a ↦ f(a) . (2.2.1)

The set A is called the domain of the function f and the set B the range of the function f.

2.2.2 Injective, Surjective and Bijective

Let V and W be non-empty sets. A mapping f between the two vector spaces V and W assigns to every x ∈ V a unique y ∈ W, which is also denoted by f(x) and is called the image of x (under f). The set V is the domain, and W is the range, also called the image set, of f. The usual notation of a mapping (represented by the three parts: the rule of assignment f, the domain V and the range W) is given by

f : V → W , x ↦ f(x) . (2.2.2)

For every mapping f : V → W with the subsets A ⊂ V, and B ⊂ W, the following definitions hold:

f(A) := {f(x) ∈ W : x ∈ A} , the image of A, and (2.2.3)
f⁻¹(B) := {x ∈ V : f(x) ∈ B} , the preimage of B. (2.2.4)

With this the following identities hold:

f is called surjective, if and only if f(V) = W , (2.2.5)
f is called injective, iff f(x) = f(y) implies x = y , and (2.2.6)
f is called bijective, iff f is surjective and injective. (2.2.7)

For every injective mapping f : V → W there exists an inverse

f⁻¹ : f(V) → V , f(x) ↦ x , (2.2.8)

and the compositions of f and its inverse are defined by

f⁻¹ ◦ f = idV ; f ◦ f⁻¹ = idW . (2.2.9)

The mappings idV : V → V and idW : W → W are the identity mappings in V, and W, i.e.

idV(x) = x ∀x ∈ V ; idW(y) = y ∀y ∈ W . (2.2.10)

Furthermore f must be surjective, in order to expand the existence of this mapping f⁻¹ from f(V) ⊂ W to the whole set W. Then f : V → W is bijective, if and only if there exists a mapping g : W → V with g ◦ f = idV and f ◦ g = idW. In this case g = f⁻¹ is the inverse.

2.2.3 Definition of an Operation

An operation or a combination, symbolized by ¦, over a set M is a mapping that maps two arbitrary elements of M onto one element of M,

¦ : M × M → M , (m, n) ↦ m ¦ n . (2.2.11)

2.2.4 Examples of Operations

Example: The addition of natural numbers. The addition over the natural numbers N is an operation, because for every m ∈ N and every n ∈ N the sum (m + n) ∈ N is again a natural number.

Example: The subtraction of integers. The subtraction over the integers Z is an operation, because for every a ∈ Z and every b ∈ Z the difference (a − b) ∈ Z is again an integer.

Example: The addition of continuous functions. Let C^k be the set of the k-times continuously differentiable functions. The addition over C^k is an operation, because for every function f(x) ∈ C^k and every function g(x) ∈ C^k the sum (f + g)(x) = f(x) + g(x) is again a k-times continuously differentiable function.

2.2.5 Counter-Examples of Operations

Counter-Example: The subtraction of natural numbers. The subtraction over the natural numbers N is not an operation, because there exist numbers a ∈ N and b ∈ N with a difference (a − b) ∉ N, e.g. the difference 3 − 7 = −4 ∉ N.

Counter-Example: The scalar multiplication of an n-tuple. The scalar multiplication of an n-tuple of real numbers in R^n with a scalar quantity a ∈ R is not an operation, because it does not map two elements of R^n onto another element of the same space, but one element of R and one element of R^n.

Counter-Example: The scalar product of two n-tuples. The scalar product of two n-tuples in R^n is not an operation, because it does not map an element of R^n onto an element of R^n, but onto an element of R.
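The closure property required by (2.2.11) can be spot-checked over a finite sample of a set; a minimal sketch of this idea (the helper is_closed is hypothetical, not part of any library), reproducing the first example and the first counter-example above:

    # Spot-check the closure m ¦ n ∈ M of a combination over a finite sample of M.
    def is_closed(op, sample, member):
        return all(member(op(m, n)) for m in sample for n in sample)

    naturals = range(1, 20)
    in_N = lambda x: isinstance(x, int) and x >= 1

    print(is_closed(lambda m, n: m + n, naturals, in_N))  # True: addition over N
    print(is_closed(lambda m, n: m - n, naturals, in_N))  # False: e.g. 3 - 7 = -4 is not in N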
2.3 Fields

2.3.1 Definition of a Field

A field F is defined as a set with an operation addition a + b and an operation multiplication ab for all a, b ∈ F. To every pair, a and b, of scalars there corresponds a scalar a + b, called the sum, in such a way that:

1. Axiom of Fields. The addition is associative,
a + (b + c) = (a + b) + c ∀a, b, c ∈ F . (F1)

2. Axiom of Fields. The addition is commutative,
a + b = b + a ∀a, b ∈ F . (F2)

3. Axiom of Fields. There exists a unique scalar 0 ∈ F, called zero or the identity element with respect to³ the addition of the field F, such that the additive identity is given by
a + 0 = a = 0 + a ∀a ∈ F . (F3)

4. Axiom of Fields. To every scalar a ∈ F there corresponds a unique scalar −a, called the inverse w.r.t. the addition or additive inverse, such that
a + (−a) = 0 ∀a ∈ F . (F4)

To every pair, a and b, of scalars there corresponds a scalar ab, called the product of a and b, in such a way that:

5. Axiom of Fields. The multiplication is associative,
a (bc) = (ab) c ∀a, b, c ∈ F . (F5)

6. Axiom of Fields. The multiplication is commutative,
ab = ba ∀a, b ∈ F . (F6)

7. Axiom of Fields. There exists a unique non-zero scalar 1 ∈ F, called one or the identity element w.r.t. the multiplication of the field F, such that the multiplicative identity is given by
a1 = a = 1a ∀a ∈ F . (F7)

8. Axiom of Fields. To every non-zero scalar a ∈ F there corresponds a unique scalar a⁻¹ or 1/a, called the inverse w.r.t. the multiplication or the multiplicative inverse, such that
a a⁻¹ = 1 = (1/a) a ∀a ∈ F , a ≠ 0 . (F8)

9. Axiom of Fields. The multiplication is distributive w.r.t. the addition, such that the distributive law is given by
(a + b) c = ac + bc ∀a, b, c ∈ F . (F9)

³ The expression "with respect to" is often abbreviated with "w.r.t.".

2.3.2 Examples of Fields

Example: The rational numbers. The set Q of the rational numbers together with the operations addition "+" and multiplication "·" describes a field.

Example: The real numbers. The set R of the real numbers together with the operations addition "+" and multiplication "·" describes a field.

Example: The complex numbers. The set C of the complex numbers together with the operations addition "+" and multiplication "·" describes a field.

2.3.3 Counter-Examples of Fields

Counter-Example: The natural numbers. The set N of the natural numbers together with the operations addition "+" and multiplication "·" does not describe a field! One reason for this is that there exists no inverse w.r.t. the addition in N.

Counter-Example: The integers. The set Z of the integers together with the operations addition "+" and multiplication "·" does not describe a field! For example there exists no inverse w.r.t. the multiplication in Z, except for the elements 1 and −1.
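The difference between Q and Z w.r.t. axiom (F8) can be illustrated with Python's exact rational type from the standard library; a minimal sketch:

    from fractions import Fraction

    # In Q every non-zero scalar has a multiplicative inverse (axiom F8) ...
    a = Fraction(3, 7)
    print(a * a**-1 == 1)   # True: (3/7) * (7/3) = 1, and 7/3 is again in Q

    # ... in Z it does not: the inverse of 3 exists only outside of Z.
    print(Fraction(1, 3))   # 1/3, which is not an integer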
2.4 Linear Spaces

2.4.1 Definition of a Linear Space

Let F be a field. A linear space, vector space or linear vector space V over the field F is a set, with an addition defined by

+ : V × V → V , (x, y) ↦ x + y ∀x, y ∈ V , (2.4.1)

a scalar multiplication given by

· : F × V → V , (α, x) ↦ αx ∀α ∈ F ; ∀x ∈ V , (2.4.2)

and satisfying the following axioms. The elements x, y etc. of V are called vectors. To every pair, x and y, of vectors in the space V there corresponds a vector x + y, called the sum of x and y, in such a way that:

1. Axiom of Linear Spaces. The addition is associative,
x + (y + z) = (x + y) + z ∀x, y, z ∈ V . (S1)

2. Axiom of Linear Spaces. The addition is commutative,
x + y = y + x ∀x, y ∈ V . (S2)

3. Axiom of Linear Spaces. There exists a unique vector 0 ∈ V, called zero vector or the origin of the space V, such that
x + 0 = x = 0 + x ∀x ∈ V . (S3)

4. Axiom of Linear Spaces. To every vector x ∈ V there corresponds a unique vector −x, called the additive inverse, such that
x + (−x) = 0 ∀x ∈ V . (S4)

To every pair, α and x, where α is a scalar quantity and x a vector in V, there corresponds a vector αx, called the product of α and x, in such a way that:

5. Axiom of Linear Spaces. The multiplication by scalar quantities is associative,
α (βx) = (αβ) x ∀α, β ∈ F ; ∀x ∈ V . (S5)

6. Axiom of Linear Spaces. There exists a unique non-zero scalar 1 ∈ F, called identity or identity element w.r.t. the scalar multiplication on the space V, such that the scalar multiplicative identity is given by
x1 = x = 1x ∀x ∈ V . (S6)

7. Axiom of Linear Spaces. The scalar multiplication is distributive w.r.t. the vector addition, such that the distributive law is given by
α (x + y) = αx + αy ∀α ∈ F ; ∀x, y ∈ V . (S7)

8. Axiom of Linear Spaces. The multiplication by a vector is distributive w.r.t. the scalar addition, such that the distributive law is given by
(α + β) x = αx + βx ∀α, β ∈ F ; ∀x ∈ V . (S8)

Some simple conclusions are given by

0 · x = 0 ∀x ∈ V ; 0 ∈ F , (2.4.3)
(−1) x = −x ∀x ∈ V ; −1 ∈ F , (2.4.4)
α · 0 = 0 ∀α ∈ F , (2.4.5)

and if

αx = 0 , then α = 0 , or x = 0 . (2.4.6)

Remarks:

• Starting with the usual 3-dimensional vector space, these axioms describe a generalized definition of a vector space as a set of arbitrary elements x ∈ V. The classic example is the usual 3-dimensional Euclidean vector space E^3 with the vectors x, y.

• The definition says nothing about the character of the elements x ∈ V of the vector space.

• The definition implies only the existence of an addition of two elements of V and the existence of a scalar multiplication, which both do not lead to results outside of the vector space V, and that the axioms of a vector space (S1)-(S8) hold.

• The definition only implies that the vector space V is a non-empty set, but nothing about "how large" it is.

• Here F = R, i.e. only vector spaces over the field of real numbers R are examined; no attention is paid to vector spaces over the field of complex numbers C.

• The dimension dim V of the vector space V should be finite, i.e. dim V = n for an arbitrary n ∈ N, the set of natural numbers.
2.4.2 Examples of Linear Spaces

Example: The space of n-tuples. The space R^n of dimension n with the usual addition

x + y = [x_1 + y_1 , . . . , x_n + y_n] ,

and the usual scalar multiplication

αx = [αx_1 , . . . , αx_n] ,

is a linear space over the field R, denoted by

R^n = { x | x = (x_1 , x_2 , . . . , x_n)^T , ∀x_1 , x_2 , . . . , x_n ∈ R } , (2.4.7)

with the elements x given by the column vectors x = [x_1 , x_2 , . . . , x_n]^T, and x_1 , x_2 , . . . , x_n ∈ R.

Example: The space of n × n-matrices. The space of square matrices R^{n×n} over the field R with the usual matrix addition and the usual multiplication of a matrix with a scalar quantity is a linear space over the field R, with the elements

A = [a_ij]_{n×n} , ∀a_ij ∈ R , 1 ≤ i ≤ n , 1 ≤ j ≤ n , and i, j ∈ N . (2.4.8)

Example: The field. Every field F, with the definition of an addition of scalar quantities in the field and a multiplication of the scalar quantities, i.e. a scalar product, in the field, is a linear space over the field itself.

Example: The space of continuous functions. The space of continuous functions C(a, b) is given by the open interval (a, b) or the closed interval [a, b] and the complex-valued functions f(x) defined on this interval,

C(a, b) = {f(x) | f is complex-valued and continuous in [a, b]} , (2.4.9)

with the addition and scalar multiplication given by

(f + g)(x) = f(x) + g(x) ,
(αf)(x) = αf(x) .

2.4.3 Linear Subspace and Linear Manifold

Let V be a linear space over the field F. A subset W ⊆ V is called a linear subspace or a linear manifold of V, if the set is not empty, W ≠ ∅, and every linear combination is again a vector of the linear subspace,

ax + by ∈ W ∀x, y ∈ W ; ∀a, b ∈ F . (2.4.10)

2.4.4 Linear Combination and Span of a Subspace

Let V be a linear space over the field F with the vectors x_1 , x_2 , . . . , x_m ∈ V. A vector v ∈ V could be represented by a so-called linear combination of the x_1 , x_2 , . . . , x_m and some scalar quantities a_1 , a_2 , . . . , a_m ∈ F,

v = a_1 x_1 + a_2 x_2 + . . . + a_m x_m . (2.4.11)

Furthermore let M = {x_1 , x_2 , . . . , x_m} be a set of vectors. Then the set of all linear combinations of the vectors x_1 , x_2 , . . . , x_m is called the span span(M) of the subspace M and is defined by

span(M) = {a_1 x_1 + a_2 x_2 + . . . + a_m x_m | a_1 , a_2 , . . . , a_m ∈ F} . (2.4.12)

2.4.5 Linear Independence

Let V be a linear space over the field F. The vectors x_1 , x_2 , . . . , x_n ∈ V are called linearly independent, if and only if

Σ_{i=1}^{n} a_i x_i = 0 =⇒ a_1 = a_2 = . . . = a_n = 0 . (2.4.13)

In every other case the vectors are called linearly dependent.

2.4.6 A Basis of a Vector Space

A subset M = {x_1 , x_2 , . . . , x_m} of a linear space or a vector space V over the field F is called a basis of the vector space V, if the vectors x_1 , x_2 , . . . , x_m are linearly independent and the span equals the vector space,

span(M) = V . (2.4.14)

Then every vector x ∈ V has a representation in terms of the basis vectors,

x = Σ_{i=1}^{n} v^i e_i . (2.4.15)
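Computing the scalar quantities a_1 , . . . , a_m of the linear combination (2.4.11) for a given basis amounts to solving a linear system; a minimal numpy sketch in R^3, with the basis vectors chosen arbitrarily for illustration:

    import numpy as np

    # Columns of B are the basis vectors x1, x2, x3 of R^3 (chosen for illustration).
    B = np.array([[1.0, 1.0, 0.0],
                  [0.0, 1.0, 1.0],
                  [0.0, 0.0, 1.0]])
    v = np.array([3.0, 5.0, 3.0])

    # Coefficients a with v = a1 x1 + a2 x2 + a3 x3, i.e. B a = v, see (2.4.11).
    a = np.linalg.solve(B, v)
    print(a)                       # [1. 2. 3.]
    print(np.allclose(B @ a, v))   # True: the linear combination reproduces v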
2.5 Metric Spaces

2.5.1 Definition of a Metric

A metric ρ in a linear space V over the field F is a mapping describing a "distance" between two neighbouring points for a given set,

ρ : V × V → F , (x, y) ↦ ρ(x, y) . (2.5.1)

The metric satisfies the following relations for all vectors x, y, z ∈ V:

1. Axiom of Metrics. The metric is positive,
ρ(x, y) ≥ 0 ∀x, y ∈ V . (M1)

2. Axiom of Metrics. The metric is definite,
ρ(x, y) = 0 ⇐⇒ x = y ∀x, y ∈ V . (M2)

3. Axiom of Metrics. The metric is symmetric,
ρ(x, y) = ρ(y, x) ∀x, y ∈ V . (M3)

4. Axiom of Metrics. The metric satisfies the triangle inequality,
ρ(x, z) ≤ ρ(x, y) + ρ(y, z) ∀x, y, z ∈ V . (M4)

[Figure 2.1: Triangle inequality.]

2.5.2 Examples of Metrics

Example: The distance in the Euclidean space. For two vectors x = (x_1 , x_2)^T and y = (y_1 , y_2)^T in the 2-dimensional Euclidean space E^2, the distance ρ between these two vectors, given by

ρ(x, y) = √( (x_1 − y_1)^2 + (x_2 − y_2)^2 ) , (2.5.2)

is a metric.

Example: Discrete metric. The mapping, called the discrete metric,

ρ(x, y) = 0 , if x = y , and ρ(x, y) = 1 , else , (2.5.3)

is a metric in every linear space.

Example: The metric.

ρ(x, y) = x^T A y . (2.5.4)

Example: The metric tensor.

2.5.3 Definition of a Metric Space

A vector space V with a metric ρ is called a metric space.

2.5.4 Examples of a Metric Space

Example: The field. The field of the complex numbers C is a metric space.

Example: The vector space. The vector space R^n is a metric space, too.
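Both example metrics are easy to state in code, and the triangle inequality (M4) can be spot-checked numerically; a minimal numpy sketch with arbitrarily chosen vectors:

    import numpy as np

    def rho_euclid(x, y):
        # Euclidean distance (2.5.2)
        return np.sqrt(np.sum((x - y) ** 2))

    def rho_discrete(x, y):
        # discrete metric (2.5.3)
        return 0.0 if np.array_equal(x, y) else 1.0

    x, y, z = np.array([0.0, 0.0]), np.array([3.0, 4.0]), np.array([6.0, 0.0])

    # Triangle inequality (M4): rho(x, z) <= rho(x, y) + rho(y, z).
    print(rho_euclid(x, z) <= rho_euclid(x, y) + rho_euclid(y, z))        # True: 6 <= 5 + 5
    print(rho_discrete(x, z) <= rho_discrete(x, y) + rho_discrete(y, z))  # True: 1 <= 1 + 1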
2.6 Normed Spaces

2.6.1 Definition of a Norm

A norm ‖·‖ in a linear space V over the field F is a mapping

‖·‖ : V → F , x ↦ ‖x‖ . (2.6.1)

The norm satisfies the following relations for all vectors x, y, z ∈ V and every α ∈ F:

1. Axiom of Norms. The norm is positive,
‖x‖ ≥ 0 ∀x ∈ V . (N1)

2. Axiom of Norms. The norm is definite,
‖x‖ = 0 ⇐⇒ x = 0 ∀x ∈ V . (N2)

3. Axiom of Norms. The norm is homogeneous,
‖αx‖ = |α| ‖x‖ ∀α ∈ F ; ∀x ∈ V . (N3)

4. Axiom of Norms. The norm satisfies the triangle inequality,
‖x + y‖ ≤ ‖x‖ + ‖y‖ ∀x, y ∈ V . (N4)

Some simple conclusions are given by

‖−x‖ = ‖x‖ , (2.6.2)
‖x‖ − ‖y‖ ≤ ‖x − y‖ . (2.6.3)

2.6.2 Definition of a Normed Space

A linear space V with a norm ‖·‖ is called a normed space.

2.6.3 Examples of Vector Norms and Normed Vector Spaces

The norm of a vector x is written ‖x‖ and is called the vector norm. For a vector norm the following conditions hold, see also (N1)-(N4),

‖x‖ > 0 , with x ≠ 0 , (2.6.4)

with a scalar quantity α,

‖αx‖ = |α| ‖x‖ , ∀α ∈ R , (2.6.5)

and finally the triangle inequality,

‖x + y‖ ≤ ‖x‖ + ‖y‖ . (2.6.6)

A vector norm is given in the most general case by

‖x‖_p = ( Σ_{i=1}^{n} |x_i|^p )^{1/p} . (2.6.7)

Example: The normed vector space. For the linear vector space R^n, with the zero vector 0, there exists a large variety of norms, e.g. the l-infinity-norm, or maximum-norm,

‖x‖_∞ = max |x_i| , with 1 ≤ i ≤ n , (2.6.8)

the l1-norm,

‖x‖_1 = Σ_{i=1}^{n} |x_i| , (2.6.9)

the L1-norm,

‖x‖ = ∫_Ω |x| dΩ , (2.6.10)

the l2-norm, or Euclidean norm,

‖x‖_2 = ( Σ_{i=1}^{n} |x_i|^2 )^{1/2} , (2.6.11)

the L2-norm,

‖x‖ = ( ∫_Ω |x|^2 dΩ )^{1/2} , (2.6.12)

and the p-norm,

‖x‖_p = ( Σ_{i=1}^{n} |x_i|^p )^{1/p} , with 1 ≤ p < ∞ . (2.6.13)
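The norms (2.6.8), (2.6.9), (2.6.11) and (2.6.13) are all available through numpy; a minimal sketch, using the vector x^T = [−1, 3, −4] of the numerical example below:

    import numpy as np

    x = np.array([-1.0, 3.0, -4.0])

    print(np.linalg.norm(x, 1))       # l1-norm (2.6.9): 8.0
    print(np.linalg.norm(x, 2))       # l2-norm (2.6.11): sqrt(26) ≈ 5.099
    print(np.linalg.norm(x, np.inf))  # maximum-norm (2.6.8): 4.0
    print(np.linalg.norm(x, 3))       # p-norm (2.6.13) with p = 3: 92**(1/3) ≈ 4.51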
The maximum-norm is developed by determining the limit p → ∞. With

z := max |x_i| , with i = 1, . . . , n ,

it follows that

z^p ≤ Σ_{i=1}^{n} |x_i|^p ≤ n z^p ,

and finally the maximum-norm is bounded by

z ≤ ( Σ_{i=1}^{n} |x_i|^p )^{1/p} ≤ n^{1/p} z , (2.6.14)

i.e. for p → ∞ the p-norm tends to the maximum-norm z = ‖x‖_∞.

Example: Simple Example with Numbers. The various norms of a vector x differ in the most general cases. For example with the vector x^T = [−1, 3, −4]:

‖x‖_1 = 8 ,
‖x‖_2 = √26 ≈ 5.1 ,
‖x‖_∞ = 4 .

2.6.4 Hölder Sum Inequality and Cauchy's Inequality

Let p and q be two scalar quantities, and the relationship between them is defined by

1/p + 1/q = 1 , with p > 1 , q > 1 . (2.6.15)

In the first quadrant of a coordinate system the graph y = x^{p−1} and the straight lines x = ξ, and y = η with ξ > 0, and η > 0 are displayed. The area enclosed by these two straight lines, the curve and the axes of the coordinate system is at least the area of the rectangle given by ξη,

ξη ≤ ξ^p / p + η^q / q . (2.6.16)

[Figure 2.2: Hölder sum inequality.]

For the real or complex quantities x_j , and y_j , which are not all equal to zero, the ξ, and η could be described by

ξ = |x_j| / ( Σ_j |x_j|^p )^{1/p} , and η = |y_j| / ( Σ_j |y_j|^q )^{1/q} . (2.6.17)

Inserting the relations of equations (2.6.17) in (2.6.16), and summing the terms with the index j, implies

Σ_j |x_j y_j| / [ ( Σ_j |x_j|^p )^{1/p} ( Σ_j |y_j|^q )^{1/q} ] ≤ Σ_j |x_j|^p / ( p Σ_j |x_j|^p ) + Σ_j |y_j|^q / ( q Σ_j |y_j|^q ) = 1 . (2.6.18)

The result is the so-called Hölder sum inequality,

Σ_j |x_j y_j| ≤ ( Σ_j |x_j|^p )^{1/p} ( Σ_j |y_j|^q )^{1/q} . (2.6.19)

For the special case with p = q = 2 the Hölder sum inequality, see equation (2.6.19), transforms into Cauchy's inequality,

Σ_j |x_j y_j| ≤ ( Σ_j |x_j|^2 )^{1/2} ( Σ_j |y_j|^2 )^{1/2} . (2.6.20)

2.6.5 Matrix Norms

In the same way as the vector norm, the norm of a matrix A is introduced. This matrix norm is written ‖A‖. The characteristics of the matrix norm are given below, and start with the zero matrix 0, and the condition A ≠ 0,

‖A‖ > 0 , (2.6.21)

and with an arbitrary scalar quantity α,

‖αA‖ = |α| ‖A‖ , (2.6.22)
‖A + B‖ ≤ ‖A‖ + ‖B‖ , (2.6.23)
‖A B‖ ≤ ‖A‖ ‖B‖ . (2.6.24)

In addition, for matrix norms, and in contrast to vector norms, the last axiom (2.6.24) holds. If this condition holds, then the norm is called multiplicative. Some usual norms, which satisfy the conditions (2.6.21)-(2.6.24), are given below. With n being the number of rows of the matrix A, the absolute norm is given by

‖A‖_M = M(A) = n max |a_ik| . (2.6.25)

The maximum absolute row sum norm is given by

‖A‖_R = R(A) = max_i Σ_{k=1}^{n} |a_ik| . (2.6.26)

The maximum absolute column sum norm is given by

‖A‖_C = C(A) = max_k Σ_{i=1}^{n} |a_ik| . (2.6.27)

The Euclidean norm is given by

‖A‖_N = N(A) = ( tr(A^T A) )^{1/2} . (2.6.28)

The spectral norm is given by

‖A‖_H = H(A) = ( largest eigenvalue of A^T A )^{1/2} . (2.6.29)

2.6.6 Compatibility of Vector and Matrix Norms

Definition 2.1. A matrix norm ‖A‖ is called compatible with a given vector norm ‖x‖, iff for all matrices A and all vectors x the following inequality holds,

‖A x‖ ≤ ‖A‖ ‖x‖ . (2.6.30)

The norm of the transformed vector y = A x should be bounded, via the matrix norm associated to the vector norm, in terms of the vector norm ‖x‖ of the starting vector x. In table (2.1) the most common vector norms are compared with their compatible matrix norms.

Vector norm                           Compatible matrix norm          Description
‖x‖_∞ = max |x_i|                     ‖A‖_M = M(A)                    absolute norm
                                      ‖A‖_R = R(A) = sup(A)           maximum absolute row sum norm
‖x‖_1 = Σ |x_i|                       ‖A‖_M = M(A)                    absolute norm
                                      ‖A‖_C = C(A) = sup(A)           maximum absolute column sum norm
‖x‖_2 = ( Σ |x_i|^2 )^{1/2}           ‖A‖_M = M(A)                    absolute norm
                                      ‖A‖_N = N(A)                    Euclidean norm
                                      ‖A‖_H = H(A) = sup(A)           spectral norm

Table 2.1: Compatibility of norms.

2.6.7 Vector and Matrix Norms in Eigenvalue Problems

The eigenvalue problem A x = λx could be rewritten with the compatibility condition, ‖A x‖ ≤ ‖A‖ ‖x‖, like this,

‖A x‖ = |λ| ‖x‖ ≤ ‖A‖ ‖x‖ . (2.6.31)

This equation implies immediately that the matrix norm is an estimate of the eigenvalues. With this condition a compatible matrix norm associated to a vector norm is most valuable if, in the inequality ‖A x‖ ≤ ‖A‖ ‖x‖, see also (2.6.31), both sides are equal. In this case there cannot exist a value of the left-hand side which is less than the value of the right-hand side. This upper limit is called the supremum and is written sup(A).

Definition 2.2. The supremum sup(A) of a matrix A associated to the vector norm ‖x‖ is defined by the smallest scalar quantity α, such that

‖A x‖ ≤ α ‖x‖ (2.6.32)

for all vectors x,

sup(A) = min α , (2.6.33)

or

sup(A) = max_x ( ‖A x‖ / ‖x‖ ) . (2.6.34)

In table (2.1) above, all associated supremums are denoted.

2.6.8 Linear Dependence and Independence

The vectors a_1 , a_2 , . . . , a_i , . . . , a_n ∈ R^n are called linearly dependent, iff there exist scalar quantities α_1 , α_2 , . . . , α_i , . . . , α_n ∈ R, which are not all equal to zero, such that

Σ_{i=1}^{n} α_i a_i = 0 . (2.6.35)

In every other case the vectors are called linearly independent. For example, the following three vectors are linearly independent, because

α_1 [1, 0, 0]^T + α_2 [0, 1, 0]^T + α_3 [0, 0, 1]^T ≠ 0 , for all α_1 , α_2 , α_3 not all equal to zero. (2.6.36)
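In practice such an independence test is a rank computation: the vectors a_1 , . . . , a_n are linearly independent iff the matrix with these vectors as columns has full rank. A minimal numpy sketch with the unit vectors of (2.6.36):

    import numpy as np

    # Columns are the three vectors of (2.6.36), the Cartesian unit vectors of R^3.
    A = np.eye(3)
    print(np.linalg.matrix_rank(A) == 3)   # True: the columns are linearly independent

    # Appending any fourth vector of R^3 makes the set linearly dependent.
    B = np.column_stack([A, [1.0, 2.0, 3.0]])
    print(np.linalg.matrix_rank(B))        # still 3: the four columns are dependent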
The n linearly independent vectors a_i with i = 1, . . . , n span an n-dimensional vector space. This set of n linearly independent vectors could be used as a basis of this vector space, in order to describe another vector a_{n+1} in this space,

a_{n+1} = Σ_{k=1}^{n} β_k a_k , and a_{n+1} ∈ R^n . (2.6.37)

2.7 Inner Product Spaces

2.7.1 Definition of a Scalar Product

Let V be a linear space over the field of real numbers R. A scalar product⁴ or inner product is a mapping

⟨ , ⟩ : V × V → R , (x, y) ↦ ⟨x, y⟩ . (2.7.1)

The scalar product satisfies the following relations for all vectors x, y, z ∈ V and all scalar quantities α, β ∈ R:

1. Axiom of Inner Products. The scalar product is bilinear,
⟨αx + βy, z⟩ = α⟨x, z⟩ + β⟨y, z⟩ ∀α, β ∈ R ; ∀x, y, z ∈ V . (I1)

2. Axiom of Inner Products. The scalar product is symmetric,
⟨x, y⟩ = ⟨y, x⟩ ∀x, y ∈ V . (I2)

3. Axiom of Inner Products. The scalar product is positive definite,
⟨x, x⟩ ≥ 0 ∀x ∈ V , and (I3)
⟨x, x⟩ = 0 ⇐⇒ x = 0 ∀x ∈ V . (I4)

And for two varying vectors,

⟨x, y⟩ = 0 ⇐⇒ x = 0 , and an arbitrary vector y ∈ V ,
or y = 0 , and an arbitrary vector x ∈ V , (2.7.2)
or x ⊥ y , i.e. the vectors x and y ∈ V are orthogonal.

Theorem 2.1. The inner product induces a norm and with this a metric, too. The scalar product ‖x‖ = ⟨x, x⟩^{1/2} defines a scalar-valued function, which satisfies the axioms of a norm!

2.7.2 Examples of Scalar Products

Example: The usual scalar product in R^2. Let x = (x_1 , x_2)^T ∈ R^2 and y = (y_1 , y_2)^T ∈ R^2 be two vectors; then the mapping

⟨x, y⟩ = x_1 y_1 + x_2 y_2 (2.7.3)

is called the usual scalar product.

⁴ It is important to notice that the scalar product and the scalar multiplication are completely different mappings!
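The usual scalar product (2.7.3) and the induced norm of Theorem 2.1 take one line each in numpy; a minimal sketch, which also spot-checks the Schwarz inequality of the following subsection:

    import numpy as np

    x = np.array([1.0, 2.0])
    y = np.array([3.0, -1.0])

    sp = np.dot(x, y)               # usual scalar product (2.7.3): 1*3 + 2*(-1) = 1
    norm_x = np.sqrt(np.dot(x, x))  # induced norm ||x|| = <x, x>^(1/2), Theorem 2.1
    norm_y = np.sqrt(np.dot(y, y))

    print(sp)                       # 1.0
    print(sp <= norm_x * norm_y)    # True: Schwarz inequality <x, y> <= ||x|| ||y||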
2.7.3 Definition of an Inner Product Space

A vector space V with a scalar product ⟨ , ⟩ is called an inner product space or Euclidean vector space⁵. The axioms (N1), (N2), and (N3) hold, too; only the axiom (N4) has to be proved. It follows from the Schwarz inequality, given by

⟨x, y⟩ ≤ ‖x‖ ‖y‖ ,

which implies the triangle inequality,

‖x + y‖^2 = ⟨x + y, x + y⟩ = ⟨x + y, x⟩ + ⟨x + y, y⟩ ≤ ‖x + y‖ · ‖x‖ + ‖x + y‖ · ‖y‖ ,

and finally results in: the inner product space is a normed space,

‖x + y‖ ≤ ‖x‖ + ‖y‖ .

And finally the relations between the different subspaces of a linear vector space are described by the following scheme,

V_{inner product space} → V_{normed space} → V_{metric space} ,

where the arrow → describes a necessary condition, and the reverse direction is not necessary, but possible. Every true proposition in a metric space will be true in a normed space or in an inner product space, too. And a true proposition in a normed space is also true in an inner product space, but not necessarily vice versa!

2.7.4 Examples of Inner Product Spaces

Example: The scalar product in a linear vector space. The 3-dimensional linear vector space R^3 with the usual scalar product, which defines an inner product by

⟨u, v⟩ = u · v = α = |u| |v| cos(∠(u, v)) , (2.7.4)

is an inner product space.

Example: The inner product in a linear vector space. The R^n with an inner product given by the bilinear form

⟨u, v⟩ = u^T A v , (2.7.5)

with the quadratic form

⟨u, u⟩ = u^T A u , (2.7.6)

and in the special case A = 1 with the scalar product

⟨u, u⟩ = u^T u , (2.7.7)

is an inner product space.

2.7.5 Unitary Space

A vector space V over the field of real numbers R, with a scalar product ⟨ , ⟩, is called an inner product space; its complex analogue over the field of complex numbers C is called a unitary space.

⁵ In mathematical literature often the restriction is mentioned that the Euclidean vector space should be of finite dimension. Here no more attention is paid to this restriction, because in most cases finite-dimensional spaces are used.
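The bilinear form (2.7.5) and the quadratic form (2.7.6) from the examples above are one-liners in numpy; a minimal sketch, with a symmetric positive definite matrix A chosen for illustration, reducing to the scalar product (2.7.7) for A = 1:

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 3.0]])   # symmetric positive definite, chosen for illustration
    u = np.array([1.0, -1.0])
    v = np.array([2.0, 1.0])

    print(u @ A @ v)           # bilinear form <u, v> = u^T A v (2.7.5): 0.0
    print(u @ A @ u)           # quadratic form <u, u> = u^T A u (2.7.6): 3.0
    print(u @ np.eye(2) @ u)   # special case A = 1, the scalar product u^T u (2.7.7): 2.0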
2.8 Affine Vector Space and the Euclidean Vector Space

2.8.1 Definition of an Affine Vector Space

In matrix calculus an n-tuple a ∈ R^n over the field of real numbers R is studied, i.e.

a_i ∈ R , and i = 1, . . . , n . (2.8.1)

One of these n-tuples, represented by a column matrix, also called a column vector or just a vector, could describe an affine vector, if a point of origin in a geometric sense and a displacement of origin are established. A set W is called an affine vector space over the vector space V ⊂ R^n, if a mapping given by

W × W → V , (2.8.2)
R^n_{affine} → R^n , (2.8.3)

assigns to every pair of points P and Q ∈ W ⊂ R^n_{affine} a vector →PQ ∈ V. And the mapping also satisfies the following conditions:

• For every constant P the assignment

Π_P : Q → →PQ , (2.8.4)

is a bijective mapping, i.e. the inverse Π_P⁻¹ exists.

• Every P, Q and R ∈ W satisfy

→PQ + →QR = →PR , (2.8.5)

and

Π_P : W → V , with Π_P Q = →PQ , and Q ∈ W . (2.8.6)

[Figure 2.3: Vector space R^2.]

[Figure 2.4: Affine vector space R^2_affine.]

For all P, Q and R ∈ W ⊂ R^n_{affine} the axioms of a linear space (S1)-(S4) for the addition hold,

a + b = c ←→ a_i + b_i = c_i , with i = 1, . . . , n , (2.8.7)

and (S5)-(S8) for the scalar multiplication,

αa = a* ←→ αa_i = a_i* . (2.8.8)

And a vector space is a normed space, as shown in section (2.6).

2.8.2 The Euclidean Vector Space

An Euclidean vector space E^n is a unitary vector space or an inner product space. In addition to the normed spaces there is an inner product defined in an Euclidean vector space. The inner product assigns to every pair of vectors u and v a scalar quantity α,

⟨u, v⟩ ≡ u · v = v · u = α , with u, v ∈ E^n , and α ∈ R . (2.8.9)

For example in the 2-dimensional Euclidean vector space the angle ϕ between the vectors u and v is given by

u · v = |u| · |v| cos ϕ , and cos ϕ = (u · v) / (|u| · |v|) . (2.8.10)

The following identities hold:

• Two normed spaces V and W over the same field are isomorphic, if and only if there exists a linear mapping f from V to W, such that the following inequality holds for two constants m and M in every point x ∈ W,

m · ‖x‖ ≤ ‖f(x)‖ ≤ M · ‖x‖ . (2.8.11)
[Figure 2.5: The scalar product in a 2-dimensional Euclidean vector space.]

• Every two real n-dimensional normed spaces are isomorphic, for example two subspaces of the vector space R^n.

Below, in most cases the Euclidean norm with p = 2 is used to describe the relationships between the elements of the affine (normed) vector space x ∈ R^n_{affine} and the elements of the Euclidean vector space v ∈ E^n. With this condition the relations between a norm, like in section (2.6), and an inner product are given by

‖x‖^2 = x · x , (2.8.12)

and

‖x‖ = ‖x‖_2 = ( Σ x_i^2 )^{1/2} . (2.8.13)

In this case it is possible to define a bijective mapping between the n-dimensional affine vector space and the Euclidean vector space. In topology this bijectivity is called a homeomorphism, and the spaces are called homeomorphic. If two spaces are homeomorphic, then in both spaces the same axioms hold.

2.8.3 Linear Independence, and a Basis of the Euclidean Vector Space

The conditions for the linear dependence and the linear independence of vectors v_i in the n-dimensional Euclidean vector space E^n are given below. Furthermore a vector basis of the Euclidean vector space E^n is introduced, and the representation of an arbitrary vector with this basis is described.

• The set of vectors v_1 , v_2 , . . . , v_n is linearly dependent, if there exists a number of scalar quantities a_1 , a_2 , . . . , a_n, not all equal to zero, such that the following condition holds,

a_1 v_1 + a_2 v_2 + . . . + a_n v_n = 0 . (2.8.14)

In every other case the set of vectors v_1 , v_2 , . . . , v_n is called linearly independent. The left-hand side is called a linear combination of the vectors v_1 , v_2 , . . . , v_n.

• The set of all linear combinations of the vectors v_1 , v_2 , . . . , v_n spans a subspace. The dimension of this subspace is equal to the number of vectors n which span the largest linearly independent space. The dimension of this subspace is at most n.

• Every n + 1 vectors of the Euclidean vector space v ∈ E^n with the dimension n must be linearly dependent, i.e. the vector v = v_{n+1} could be described by a linear combination of the vectors v_1 , v_2 , . . . , v_n,

λv + a_1 v_1 + a_2 v_2 + . . . + a_n v_n = 0 , (2.8.15)
v = −(1/λ) ( a_1 v_1 + a_2 v_2 + . . . + a_n v_n ) . (2.8.16)

• The vectors z_i given by

z_i = −(a_i/λ) v_i , with i = 1, . . . , n , (2.8.17)

are called the components of the vector v in the Euclidean vector space E^n.

• Every n linearly independent vectors v_i of dimension n in the Euclidean vector space E^n form a basis of the Euclidean vector space E^n. The vectors g_i = v_i are called the base vectors of the Euclidean vector space E^n,

v = v^1 g_1 + v^2 g_2 + . . . + v^n g_n = Σ_{i=1}^{n} v^i g_i , with v^i = −a_i/λ . (2.8.18)

The v^i g_i are called the components, and the v^i are called the coordinates, of the vector v w.r.t. the basis g_i. Sometimes the scalar quantities v^i are called the components of the vector v w.r.t. the basis g_i, too.
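Both the angle formula (2.8.10) and the coordinates v^i w.r.t. a basis g_i from (2.8.18) reduce to a few lines of numpy; a minimal sketch in E^2, with an arbitrarily chosen non-Cartesian basis:

    import numpy as np

    u, v = np.array([1.0, 0.0]), np.array([1.0, 1.0])
    # Angle between u and v: cos(phi) = (u . v) / (|u| |v|), see (2.8.10).
    cos_phi = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    print(np.degrees(np.arccos(cos_phi)))   # 45.0

    # Base vectors g1, g2 as columns of G, and the coordinates w.r.t. them, see (2.8.18).
    G = np.array([[1.0, 1.0],
                  [0.0, 2.0]])
    w = np.array([3.0, 4.0])
    coords = np.linalg.solve(G, w)
    print(coords)                            # [1. 2.], i.e. w = 1 g1 + 2 g2
    print(np.allclose(coords[0] * G[:, 0] + coords[1] * G[:, 1], w))  # True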
2.9 Linear Mappings and the Vector Space of Linear Mappings

2.9.1 Definition of a Linear Mapping

Let V and W be two vector spaces over the field F. A mapping f : V → W from elements of the vector space V to the elements of the vector space W is linear and called a linear mapping, if for all x, y ∈ V and for all α ∈ R the following axioms hold:

1. Axiom of Linear Mappings (Additive w.r.t. the vector addition). The mapping f is additive w.r.t. the vector addition,

f(x + y) = f(x) + f(y) ∀x, y ∈ V . (L1)

2. Axiom of Linear Mappings (Homogeneity of linear mappings). The mapping f is homogeneous w.r.t. the scalar multiplication,

f(αx) = αf(x) ∀α ∈ F ; ∀x ∈ V . (L2)

Remarks:

• The linearity of the mapping f : V → W results from it being additive (L1), and homogeneous (L2).

• Because the action of the mapping f is only defined on elements of the vector space V, it is necessary that the sum vector x + y ∈ V (for every x, y ∈ V) and the scalar multiplied vector αx ∈ V (for every α ∈ R) are elements of the vector space V, too. And with this postulation the set V must be a vector space!

• With the same arguments for the images f(x), f(y), and f(x + y), and also for the images f(αx), and αf(x) in W, the set W must be a vector space!

• A linear mapping f : V → W is also called a linear transformation, a linear operator or a homomorphism.

2.9.2 The Vector Space of Linear Mappings

In the section before, the linear mappings f : V → W, which send elements of V to elements of W, were introduced. Because it is so convenient to work with vector spaces, it is interesting to check whether the linear mappings f : V → W form a vector space, too. In order to answer this question it is necessary to check the definitions and axioms of a linear vector space (S1)-(S8). If they hold, then the set of linear mappings is a vector space:

3. Axiom of Linear Mappings (Definition of the addition of linear mappings). In the definition of a vector space the existence of an addition "+" is claimed, such that the sum of two linear mappings f_1 : V → W and f_2 : V → W should be a linear mapping (f_1 + f_2) : V → W, too. For an arbitrary vector x ∈ V the pointwise addition is given by

(f_1 + f_2)(x) := f_1(x) + f_2(x) ∀x ∈ V , (L3)

for all linear mappings f_1 , f_2 from V to W. The sum f_1 + f_2 is linear, because both mappings f_1 and f_2 are linear, i.e. (f_1 + f_2) is a linear mapping, too.

4. Axiom of Linear Mappings (Definition of the scalar multiplication of linear mappings). Furthermore a product of a scalar quantity α ∈ R and a linear mapping f : V → W is defined by

(αf)(x) := αf(x) ∀α ∈ R ; ∀x ∈ V . (L4)

If the mapping f is linear, then it results immediately that the mapping (αf) is linear, too.

5. Axiom of Linear Mappings (Satisfaction of the axioms of a linear vector space). The definitions (L3) and (L4) satisfy all linear vector space axioms given by (S1)-(S8). This is easy to prove by computing the equations (S1)-(S8). If V and W are two vector spaces over the field F, then the set L of all linear mappings f : V → W from V to W,

L(V, W) , is a linear vector space. (L5)

The identity element w.r.t. the addition of the vector space L(V, W) is the null mapping 0, which sends every element from V to the zero vector 0 ∈ W.

2.9.3 The Basis of the Vector Space of Linear Mappings
2.9.4 Definition of a Composition of Linear Mappings

Till now only an addition of linear mappings and a multiplication with a scalar quantity are defined. The next step is to define a "multiplication" of two linear mappings; this combination of two functions to form a new single function is called a composition. Let f_2 : V → W be a linear mapping, and furthermore let f_1 : X → Y be linear, too. If the image set W of the linear mapping f_2 is also the domain X of the linear mapping f_1, i.e. W = X, then the composition f_1 ◦ f_2 : V → Y is defined by

(f_1 ◦ f_2)(x) = f_1(f_2(x)) ∀x ∈ V . (2.9.1)

Because of the linearity of the mappings f_1 and f_2 the composition f_1 ◦ f_2 is also linear.

Remarks:

• The composition f_1 ◦ f_2 is also written as f_1 f_2 and it is sometimes called the product of f_1 and f_2.

• If these products exist (i.e. the domains and image sets of the linear mappings match like in the definition), then the following identities hold:

f_1 (f_2 f_3) = (f_1 f_2) f_3 , (2.9.2)
f_1 (f_2 + f_3) = f_1 f_2 + f_1 f_3 , (2.9.3)
(f_1 + f_2) f_3 = f_1 f_3 + f_2 f_3 , (2.9.4)
α (f_1 f_2) = (αf_1) f_2 = f_1 (αf_2) . (2.9.5)

• If all sets are equal, V = W = X = Y, then these products exist, i.e. all the linear mappings map the vector space V onto itself,

f ∈ L(V, V) =: L(V) . (2.9.6)

In this case with f_1 ∈ L(V, V), and f_2 ∈ L(V, V) the composition f_1 ◦ f_2 ∈ L(V, V) is a linear mapping from the vector space V to itself, too.

2.9.5 The Attributes of a Linear Mapping

• Let V and W be vector spaces over the field F and L(V, W) the vector space of all linear mappings f : V → W. Because L(V, W) is a vector space, the addition and the multiplication with a scalar of elements of L, i.e. of linear mappings f : V → W, give again a linear mapping from V to W.

• An arbitrary composition of linear mappings, if it exists, is again a linear mapping from one vector space to another vector space. If the mappings map the vector space V into itself, then every composition of these mappings exists and is again linear, i.e. the composition is again an element of L(V, V).

• The existence of an inverse, i.e. a reverse linear mapping from W to V, denoted by f⁻¹ : W → V, is discussed in the following section.

2.9.6 The Representation of a Linear Mapping by a Matrix

Let x and y be two arbitrary elements of the linear vector space V given by

x = Σ_{i=1}^{n} x^i e_i , and y = Σ_{i=1}^{n} y^i e_i . (2.9.7)

Let L be a linear mapping from V in itself,

L = α_kl ϕ^kl . (2.9.8)

Then the image y of x under L is given by

y = L(x) , (2.9.9)
y^i e_i = (α_kl ϕ^kl)(x^j e_j) = α_kl x^j ϕ^kl(e_j) ,

i.e.

y = α_kl x^j ϕ^kl(e_j) . (2.9.10)

2.9.7 The Isomorphism of Vector Spaces

The term "bijectivity" and the attributes of a bijective linear mapping f : V → W imply the following definition: a bijective linear mapping f : V → W is also called an isomorphism of the vector spaces V and W. The spaces V and W are said to be isomorphic, i.e. every vector x ∈ V corresponds to a unique n-tuple,

x = x^i e_i ←→ x = [x_1 , . . . , x_n]^T , (2.9.11)

with x ∈ V, dim V = n, and x ∈ R^n, the space of all n-tuples.
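If linear mappings on R^n are represented by matrices, as in section (2.9.6), the axioms (L1) and (L2) and the composition (2.9.1) can be spot-checked numerically; a minimal numpy sketch with arbitrarily chosen matrices:

    import numpy as np

    F1 = np.array([[1.0, 2.0], [0.0, 3.0]])   # arbitrary matrices, taken as the linear
    F2 = np.array([[0.0, 1.0], [1.0, 0.0]])   # mappings f1(x) = F1 x and f2(x) = F2 x

    x, y, alpha = np.array([1.0, -2.0]), np.array([0.5, 4.0]), 2.5

    print(np.allclose(F1 @ (x + y), F1 @ x + F1 @ y))       # (L1): additivity
    print(np.allclose(F1 @ (alpha * x), alpha * (F1 @ x)))  # (L2): homogeneity
    # The composition (f1 ◦ f2)(x) = f1(f2(x)) corresponds to the matrix product F1 F2.
    print(np.allclose(F1 @ (F2 @ x), (F1 @ F2) @ x))        # True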
2.10 Linear Forms and Dual Vector Spaces

2.10.1 Definition of Linear Forms and Dual Vector Spaces

Let W ⊂ R^n be the vector space of column vectors x. In this vector space the scalar product ⟨ , ⟩ is defined in the usual way, i.e.

⟨ , ⟩ : R^n × R^n → R , and ⟨x, y⟩ = Σ_{i=1}^{n} x_i y_i . (2.10.1)

The relations between the continuous linear functionals f : R^n → R and the scalar products ⟨ , ⟩ defined in the R^n are given by the Riesz representation theorem, i.e.

Theorem 2.2 (Riesz representation theorem). Every continuous linear functional f : R^n → R could be represented by

f(x) = ⟨x, u⟩ ∀x ∈ R^n , (2.10.2)

and the vector u is uniquely defined by f(x).

2.10.2 A Basis of the Dual Vector Space

Chapter 3

Matrix Calculus

For example GILBERT [5], and KRAUS [10], and in German STEIN ET AL. [13], and ZURMÜHL [14].


Chapter Table of Contents

3.1 Definitions . . . 40
   3.1.1 Rectangular Matrix . . . 40
   3.1.2 Square Matrix . . . 40
   3.1.3 Column Matrix . . . 40
   3.1.4 Row Matrix . . . 40
   3.1.5 Diagonal Matrix . . . 41
   3.1.6 Identity Matrix . . . 41
   3.1.7 Transpose of a Matrix . . . 41
   3.1.8 Symmetric Matrix . . . 41
   3.1.9 Antisymmetric Matrix . . . 41
3.2 Some Basic Identities of Matrix Calculus . . . 42
   3.2.1 Addition of Same Order Matrices . . . 42
   3.2.2 Multiplication by a Scalar Quantity . . . 42
   3.2.3 Matrix Multiplication . . . 42
   3.2.4 The Trace of a Matrix . . . 43
   3.2.5 Symmetric and Antisymmetric Square Matrices . . . 44
   3.2.6 Transpose of a Matrix Product . . . 44
   3.2.7 Multiplication with the Identity Matrix . . . 45
   3.2.8 Multiplication with a Diagonal Matrix . . . 45
   3.2.9 Exchanging Columns and Rows of a Matrix . . . 46
   3.2.10 Volumetric and Deviator Part of a Matrix . . . 46
3.3 Inverse of a Square Matrix . . . 48
   3.3.1 Definition of the Inverse . . . 48
   3.3.2 Important Identities of Determinants . . . 48
   3.3.3 Derivation of the Elements of the Inverse of a Matrix . . . 49
   3.3.4 Computing the Elements of the Inverse with Determinants . . . 50
   3.3.5 Inversions of Matrix Products . . . 52
3.4 Linear Mappings of Affine Vector Spaces . . . 54
   3.4.1 Matrix Multiplication as a Linear Mapping of Vectors . . . 54
   3.4.2 Similarity Transformation of Vectors . . . 55
   3.4.3 Characteristics of the Similarity Transformation . . . 55
   3.4.4 Congruence Transformation of Vectors . . . 56
   3.4.5 Characteristics of the Congruence Transformation . . . 57
   3.4.6 Orthogonal Transformation . . . 57
   3.4.7 The Gauss Transformation . . . 59
3.5 Quadratic Forms . . . 62
   3.5.1 Representations and Characteristics . . . 62
   3.5.2 Congruence Transformation of a Matrix . . . 62
   3.5.3 Derivatives of a Quadratic Form . . . 63
3.6 Matrix Eigenvalue Problem . . . 65
   3.6.1 The Special Eigenvalue Problem . . . 65
   3.6.2 Rayleigh Quotient . . . 67
   3.6.3 The General Eigenvalue Problem . . . 69
   3.6.4 Similarity Transformation . . . 69
   3.6.5 Transformation into a Diagonal Matrix . . . 70
   3.6.6 Cayley-Hamilton Theorem . . . 71
   3.6.7 Proof of the Cayley-Hamilton Theorem . . . 71

3.1 De£nitions

A matrix is an array of m × n numbers,

A = [A_ik] = [A_11 A_12 ··· A_1n; A_21 A_22 ··· A_2n; ...; A_m1 ··· ··· A_mn]. (3.1.1)

The index i is the row index and k is the column index. This matrix is called an m × n-matrix. The order of a matrix is given by the number of rows and columns.

3.1.1 Rectangular Matrix

Something like in equation (3.1.1) is called a rectangular matrix.

3.1.2 Square Matrix

A matrix is said to be square, if the number of rows equals the number of columns. It is an n × n-matrix,

A = [A_ik] = [A_11 A_12 ··· A_1n; A_21 A_22 ··· A_2n; ...; A_n1 ··· ··· A_nn]. (3.1.2)

3.1.3 Column Matrix

An m × 1-matrix is called a column matrix or a column vector a given by

a = [a_1; a_2; ...; a_m] = [a_1 a_2 ··· a_m]^T. (3.1.3)

3.1.4 Row Matrix

A 1 × n-matrix is called a row matrix or a row vector a given by

a = [a_1 a_2 ··· a_n]. (3.1.4)

3.1.5 Diagonal Matrix

The elements of a diagonal matrix are all zero except the ones where the column index equals the row index,

D = [D_ik], and D_ik = 0, iff i ≠ k. (3.1.5)

Sometimes a diagonal matrix is written like this, because there are only elements on the main diagonal of the matrix,

D = ⌈D_11 ··· D_mm⌋. (3.1.6)

3.1.6 Identity Matrix

The identity matrix is a diagonal matrix given by

1 = [1 0 ··· 0; 0 1 ··· 0; ...; 0 0 ··· 1], with 1_ik = 0, iff i ≠ k, and 1_ik = 1, iff i = k. (3.1.7)

3.1.7 Transpose of a Matrix

The matrix transpose is the matrix obtained by exchanging the columns and rows of the matrix,

A = [a_ik], and A^T = [a_ki]. (3.1.8)

3.1.8 Symmetric Matrix

A square matrix is called symmetric, if the following equation is satisfied,

A^T = A. (3.1.9)

It is a kind of reflection at the main diagonal.

3.1.9 Antisymmetric Matrix

A square matrix is called antisymmetric, if the following equation is satisfied,

A^T = −A. (3.1.10)

For the elements of an antisymmetric matrix the following condition holds,

a_ik = −a_ki. (3.1.11)

For that reason an antisymmetric matrix must have zeros on its main diagonal.

3.2 Some Basic Identities of Matrix Calculus

3.2.1 Addition of Same Order Matrices

The matrices A, B and C are commutative under matrix addition,

A + B = B + A = C. (3.2.1)

And the same relation given in component notation,

A_ik + B_ik = C_ik. (3.2.2)

The matrices A, B and C are associative under matrix addition,

(A + B) + C = A + (B + C). (3.2.3)

And there exists an identity element w.r.t. matrix addition 0, called the additive identity, and defined by A + 0 = A. Furthermore there exists an inverse element w.r.t. matrix addition −A, called the additive inverse, and defined by A + X = 0 → X = −A.

3.2.2 Multiplication by a Scalar Quantity

The scalar multiplication of matrices is given by

αA = Aα = [αA_ik]; α ∈ R. (3.2.4)

3.2.3 Matrix Multiplication

The product of two matrices A and B is defined by the matrix multiplication

A_(l×m) B_(m×n) = C_(l×n), (3.2.5)

C_ik = Σ_{ν=1}^{m} A_iν B_νk. (3.2.6)

It is important to notice the condition, that the number of columns of the first matrix equals the number of rows of the second matrix, see index m in equation (3.2.5). Matrix multiplication is associative,

(A B) C = A (B C), (3.2.7)

and also matrix multiplication is distributive,

(A + B) C = A C + B C. (3.2.8)

Figure 3.1: Matrix multiplication (scheme with B_(m×n) on top, A_(l×m) on the left, and the product C_(l×n) with element c_ij at the crossing of the i-th row of A and the j-th column of B).

But in general matrix multiplication is not commutative,

A B ≠ B A. (3.2.9)

There is an exception, the so-called commutative matrices, which are diagonal matrices of the same order.

3.2.4 The Trace of a Matrix

The trace of a matrix is defined as the sum of the diagonal elements,

tr A = tr [A_ik]_(n×n) = Σ_{i=1}^{n} A_ii. (3.2.10)

It is possible to split the trace of a sum of matrices,

tr (A + B) = tr A + tr B. (3.2.11)

Computing the trace of a matrix product is commutative,

tr (A B) = tr (B A), (3.2.12)

but still the matrix multiplication in general is not commutative, see equation (3.2.9),

A B ≠ B A. (3.2.13)

The trace of an identity matrix of dimension n is given by

tr 1_(n×n) = n. (3.2.14)
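A small numerical check of the identities above. This Python/NumPy sketch is an addition to the notes; the matrices are arbitrarily chosen random examples.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

# tr(A + B) = tr A + tr B, equation (3.2.11)
assert np.isclose(np.trace(A + B), np.trace(A) + np.trace(B))
# tr(A B) = tr(B A), equation (3.2.12) ...
assert np.isclose(np.trace(A @ B), np.trace(B @ A))
# ... although the product itself is not commutative, equation (3.2.9);
# this holds for almost every pair of random matrices.
assert not np.allclose(A @ B, B @ A)
```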

3.2.5 Symmetric and Antisymmetric Square Matrices

Every square matrix M could be described as the sum of a symmetric part S and an antisymmetric part A,

M_(n×n) = S_(n×n) + A_(n×n). (3.2.15)

The symmetric part is defined like in equation (3.1.9),

S = S^T, i.e. S_ik = S_ki. (3.2.16)

The antisymmetric part is defined like in equation (3.1.10),

A = −A^T, i.e. A_ik = −A_ki, and A_ii = 0. (3.2.17)

For example an antisymmetric matrix looks like this,

A = [0 1 5; −1 0 −2; −5 2 0].

The symmetric and antisymmetric part of a square matrix are given by

M = ½ (M + M^T) + ½ (M − M^T) = S + A. (3.2.18)

The transposes of the symmetric and the antisymmetric part of a square matrix are given by

S^T = ½ (M + M^T)^T = ½ (M^T + M) = S, and (3.2.19)
A^T = ½ (M − M^T)^T = ½ (M^T − M) = −A. (3.2.20)

3.2.6 Transpose of a Matrix Product

The transpose of a matrix product of two matrices is defined by

(A B)^T = B^T A^T, and (3.2.21)
⇒ (A^T B^T)^T = B A. (3.2.22)

For more than two matrices,

(A B C)^T = C^T B^T A^T, etc. (3.2.23)

The proof starts with the l × n-matrix C, which is given by the two matrices A and B,

C_(l×n) = A_(l×m) B_(m×n); C_ik = Σ_{ν=1}^{m} A_iν B_νk.

The transpose of the matrix C is given by

C^T = [C_ki], and C_ki = Σ_{ν=1}^{m} A_kν B_νi = Σ_{ν=1}^{m} (B^T)_iν (A^T)_νk,

and finally in symbol notation

C^T = (A B)^T = B^T A^T.
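The split into symmetric and antisymmetric parts and the transpose rules can be verified directly. A minimal Python/NumPy sketch (an addition to the notes, with arbitrary random matrices):

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((3, 3))

S = 0.5 * (M + M.T)    # symmetric part, equation (3.2.18)
A = 0.5 * (M - M.T)    # antisymmetric part, equation (3.2.18)
assert np.allclose(M, S + A)
assert np.allclose(S, S.T) and np.allclose(A, -A.T)
assert np.allclose(np.diag(A), 0.0)   # zero main diagonal, cf. (3.2.17)

# Transpose of a product, equations (3.2.21) and (3.2.23)
B, C = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
assert np.allclose((M @ B).T, B.T @ M.T)
assert np.allclose((M @ B @ C).T, C.T @ B.T @ M.T)
```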

3.2.7 Multiplication with the Identity Matrix

The identity matrix is the multiplicative identity w.r.t. the matrix multiplication,

A 1 = 1 A = A. (3.2.24)

3.2.8 Multiplication with a Diagonal Matrix

A diagonal matrix D is given by

D = [D_ik] = [D_11 0 ··· 0; 0 D_22 ··· 0; ...; 0 0 ··· D_nn]_(n×n). (3.2.25)

Because the matrix multiplication is non-commutative, there exist two possibilities to compute the product of two matrices. The first possibility is the multiplication with the diagonal matrix from the left-hand side, this is called the pre-multiplication,

D A = [D_11 a^1; D_22 a^2; ...; D_nn a^n]; A = [a^1; a^2; ...; a^n]. (3.2.26)

Each row of the matrix A, described by a so-called row vector a^i or a row matrix

a^i = [a_i1 a_i2 ··· a_in], (3.2.27)

is multiplied with the matching diagonal element D_ii. The result is the matrix D A in equation (3.2.26). The second possibility is the multiplication with the diagonal matrix from the right-hand side, this is called the post-multiplication,

A D = [a_1 D_11 a_2 D_22 ··· a_n D_nn]; A = [a_1, a_2, ..., a_n]. (3.2.28)

Each column of the matrix A, described by a so-called column vector a_i or a column matrix

a_i = [a_1i; a_2i; ...; a_ni], (3.2.29)

is multiplied with the matching diagonal element D_ii. The result is the matrix A D in equation (3.2.28).

3.2.9 Exchanging Columns and Rows of a Matrix

Exchanging the i-th and the j-th row of the matrix A is realized by the pre-multiplication with the matrix T,

T_(n×n) A_(n×n) = Â_(n×n), (3.2.30)

where T equals the identity matrix except for the rows i and j: the diagonal elements T_ii and T_jj are zero, and the elements T_ij and T_ji are equal to one. The pre-multiplication T A then exchanges the rows a^i and a^j of A. The matrix T is the same as its inverse, T = T⁻¹. And with another matrix T̃, which equals the identity matrix except for the elements T̃_ii = 0, T̃_ij = −1, T̃_ji = 1 and T̃_jj = 0, and which fulfills (T̃)⁻¹ = T̃^T, the i-th and the j-th row are exchanged, too; furthermore the old j-th row is multiplied by −1. Finally a post-multiplication with such a matrix T̃ exchanges the columns i and j of a matrix.

3.2.10 Volumetric and Deviator Part of a Matrix

It is possible to split up every symmetric matrix S in a diagonal matrix with equal entries (volumetric matrix) V and a remaining symmetric matrix (deviator matrix) R,

S_(n×n) = V_(n×n) + R_(n×n). (3.2.31)

The volumetric part (also called the spherical or ball part) is given by

V_ii = (1/n) Σ_{i=1}^{n} S_ii = (1/n) tr S, or V = ((1/n) tr S) 1. (3.2.32)

The deviator part is the difference between the matrix S and the volumetric part,

R_ii = S_ii − V_ii, R = S − ((1/n) tr S) 1; (3.2.33)

the non-diagonal elements of the deviator are the elements of the former matrix S,

R_ik = S_ik, i ≠ k, and R = R^T. (3.2.34)

The diagonal elements of the volumetric part are all equal,

V = [V δ_ik] = ⌈V V ··· V⌋. (3.2.35)
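The volumetric/deviator split is used constantly in continuum mechanics; a minimal numerical sketch (an addition to the notes, with an arbitrary symmetric example matrix):

```python
import numpy as np

S = np.array([[4.0, 1.0, 2.0],
              [1.0, 6.0, 0.0],
              [2.0, 0.0, 2.0]])    # an arbitrary symmetric example matrix
n = S.shape[0]

V = (np.trace(S) / n) * np.eye(n)  # volumetric part, cf. (3.2.32)
R = S - V                          # deviator part, cf. (3.2.33)

assert np.allclose(S, V + R)
assert np.isclose(np.trace(R), 0.0)   # the deviator is trace-free
assert np.allclose(R, R.T)            # and symmetric, cf. (3.2.34)
off = ~np.eye(n, dtype=bool)          # off-diagonal elements are unchanged
assert np.allclose(R[off], S[off])
```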

3.3 Inverse of a Square Matrix

3.3.1 De£nition of the Inverse

A linear equation system is given by

A_(n×n) x_(n×1) = y_(n×1); A = [A_ik]. (3.3.1)

The inversion of this system of equations introduces the inverse of a matrix A⁻¹,

x = A⁻¹ y; A⁻¹ := X = [X_ik]. (3.3.2)

The pre-multiplication of A x = y with the inverse A⁻¹ implies

A⁻¹ A x = A⁻¹ y → x = A⁻¹ y, and A⁻¹ A = 1. (3.3.3)

Finally the inverse of a matrix is defined by the following relations between a matrix A and its inverse A⁻¹,

(A⁻¹)⁻¹ = A, (3.3.4)
A⁻¹ A = A A⁻¹, (3.3.5)
[A_ik] [X_ki] = 1. (3.3.6)

The solution of the linear equation system exists, if and only if the inverse A⁻¹ exists. The inverse A⁻¹ of a square matrix A exists, if the matrix is nonsingular (invertible), i.e. det A ≠ 0; equivalently the difference between the number of columns resp. rows and the rank, d = n − r, of the matrix A must be equal to zero, i.e. the rank r of the matrix A_(n×n) must be equal to the number n of columns or rows (r = n). The rank of a rectangular matrix A_(m×n) is defined by the largest number of linearly independent rows (at most the number of rows m) or columns (at most the number of columns n); the smaller value of m and n bounds the rank.

3.3.2 Important Identities of Determinants

1. The determinant stays the same, if a row (or a column) is added to another row (or column).

2. The determinant equals zero, if the expanded row (or column) is exchanged by another row (or column). In this case two rows (or columns) are the same, i.e. these rows (or columns) are linearly dependent.

3. This is the generalization of the first and second rule. The determinant equals zero, if the rows (or columns) of the matrix are linearly dependent. In this case it is possible to produce a row (or column) with all elements equal to zero, and if the determinant is expanded about this row (or column) the determinant itself equals zero.

4. By exchanging two rows (or columns) the sign of the determinant changes.

5. Multiplication with a scalar quantity is defined by

det (λ A_(n×n)) = λⁿ det A_(n×n); λ ∈ R. (3.3.7)

6. The determinant of a product of two matrices is given by

det (A B) = det (B A) = det A det B. (3.3.8)

3.3.3 Derivation of the Elements of the Inverse of a Matrix

The n column vectors (n-tuples) a_k (k = 1, ..., n) of the matrix A, with a_k ∈ Rⁿ, are linearly independent, i.e. the sum Σ_{ν=1}^{n} α_ν a_ν equals zero only if all coefficients α_ν equal zero,

A_(n×n) = [a_1 a_2 ··· a_k ··· a_n]; a_k = [A_1k; A_2k; ...; A_nk]. (3.3.9)

The a_k span an n-dimensional vector space. Then every other vector, e.g. the (n + 1)-th vector a_{n+1} = r ∈ Rⁿ, could be described by a unique linear combination of the former vectors a_k, i.e. the vector r ∈ Rⁿ is linearly dependent on the n vectors a_k ∈ Rⁿ. For that reason the linear equation system

A_(n×n) x_(n×1) = r_(n×1); r ≠ 0; r ∈ Rⁿ; x ∈ Rⁿ (3.3.10)

has a unique solution,

A⁻¹ := X, A X = 1. (3.3.11)

To compute the inverse X from the equation A X = 1 it is necessary to solve the linear equation system n times, with the unit vector 1_j (j = 1, ..., n) on the right-hand side. Then the j-th equation system is given by

A [X_1j; X_2j; ...; X_kj; ...; X_nj] = [0; 0; ...; 1; ...; 0], i.e. A X_j = 1_j, (3.3.12)

with the inverse represented by its column vectors,

X = A⁻¹ = [X_1 X_2 ··· X_j ··· X_n], (3.3.13)

and the identity matrix also represented by its column vectors,

1 = [1_1 1_2 ··· 1_j ··· 1_n], (3.3.14)

and finally the identity matrix itself,

1 = [1 0 ··· 0; 0 1 ··· 0; ...; 0 0 ··· 1]. (3.3.15)

The solutions represented by the vectors X_j could be computed with determinants.

3.3.4 Computing the Elements of the Inverse with Determinants

The determinant det A_(n×n) of a square matrix, A_ik ∈ R, is a real number, defined by Leibniz like this,

det A_(n×n) = Σ (−1)^I A_1j A_2k A_3l ···, (3.3.16)

where the column indices j, k, l, ... are rearranged in all permutations of the numbers 1, 2, ..., n and I is the total number of inversions of the respective permutation. The determinant det A is established as the sum of all n! such terms. In every case there exists the same number of positive and negative terms. For example the determinant det A of a 3 × 3-matrix is computed for

A_(3×3) = A = [A_11 A_12 A_13; A_21 A_22 A_23; A_31 A_32 A_33]. (3.3.17)

An even permutation of the numbers 1, 2, 3 is a sequence like this,

1 → 2 → 3, or 2 → 3 → 1, or 3 → 1 → 2, (3.3.18)

and an odd permutation is a sequence like this,

3 → 2 → 1, or 2 → 1 → 3, or 1 → 3 → 2. (3.3.19)

For this example with n = 3 equation (3.3.16) becomes

det A = A_11 A_22 A_33 + A_12 A_23 A_31 + A_13 A_21 A_32 − A_31 A_22 A_13 − A_32 A_23 A_11 − A_33 A_21 A_12
= A_11 (A_22 A_33 − A_32 A_23) − A_12 (A_21 A_33 − A_31 A_23) + A_13 (A_21 A_32 − A_31 A_22). (3.3.20)

This result is the same as the result obtained by the determinant expansion by minors. In this example the determinant is expanded about its first row. In general the determinant is expanded about the i-th row like this,

det A_(n×n) = Σ_{j=1}^{n} A_ij det A*_ij (−1)^{i+j} = Σ_{j=1}^{n} A_ij Â_ij. (3.3.21)

A*_ij is the submatrix created by eliminating the i-th row and the j-th column of A. The factor Â_ij is the so-called cofactor of the element A_ij. For this factor again the determinant expansion is used.

Example: Simple 3 × 3-matrix. The matrix A,

A = [1 4 0; 2 1 1; −1 0 2],

is expanded about the first row,

det A = 1 · (−1)^{1+1} · det [1 1; 0 2] + 4 · (−1)^{1+2} · det [2 1; −1 2] + 0 · (−1)^{1+3} · det [2 1; −1 0],

and finally the result is

det A = 1 · 2 − 4 · 5 + 0 · 1 = −18.
In order to compute the inverse X of a matrix the determinant det A is calculated by expanding the i-th row of the matrix A. The columns of the matrix A_(n×n) are assumed to be linearly independent, i.e. det A ≠ 0. Equation (3.3.21) implies

Σ_{j=1}^{n} A_ij Â_ij = det A = 1 · det A. (3.3.22)

The second rule about determinants implies that exchanging the expanded row i by the row k leads to a linearly dependent matrix,

Σ_{j=1}^{n} A_kj Â_ij = 0 = 0 · det A, if i ≠ k, (3.3.23)

or

Σ_{j=1}^{n} A_ij Â_kj = 0 = 0 · det A, if i ≠ k. (3.3.24)

The definition of the Kronecker delta δ_ik is given by

δ_ik = 1, iff i = k, and δ_ik = 0, iff i ≠ k; [δ_ik] = [1 0 ··· 0; 0 1 ··· 0; ...; 0 0 ··· 1] = 1. (3.3.25)

Equation (3.3.22) is rewritten with the definition of the Kronecker delta,

Σ_{j=1}^{n} A_ij Â_kj = δ_ik det A. (3.3.26)

The elements X_jk of the inverse X = A⁻¹ are defined by

Σ_{j=1}^{n} A_ij X_jk = δ_ik; A X = 1, (3.3.27)

and comparing (3.3.26) with (3.3.27) implies

X_jk = Â_kj / det A; [X_jk] = A⁻¹. (3.3.28)

If the matrix is symmetric, i.e. A = A^T, the equations (3.3.26) and (3.3.27) imply

X_jk = Â_jk / det A; [X_jk] = A⁻¹, (3.3.29)

and finally

(A⁻¹)^T = A⁻¹. (3.3.30)

3.3.5 Inversions of Matrix Products

3.3.5.0.10 1. The inverse of a matrix product is given by

(A B)⁻¹ = B⁻¹ A⁻¹. (3.3.31)

Proof. Start with the assumption

(A B)⁻¹ (A B) = 1;

using equation (3.3.31) this implies

B⁻¹ A⁻¹ A B = 1,

and finally

B⁻¹ 1 B = 1.

3.3.5.0.11 2. The inverse of the triple matrix product is given by

(A B C)⁻¹ = C⁻¹ B⁻¹ A⁻¹. (3.3.32)

3.3.5.0.12 3. The order of inversion and transposition could be exchanged,

(A⁻¹)^T = (A^T)⁻¹. (3.3.33)

Proof. The inverse is defined by

(A A⁻¹)^T = 1^T = 1, i.e. (A⁻¹)^T A^T = 1,

and this finally implies

A^T (A^T)⁻¹ = 1 → (A^T)⁻¹ = (A⁻¹)^T.

3.3.5.0.13 4. If the matrix A is symmetric, then the inverse matrix A⁻¹ is symmetric, too,

A = A^T → A⁻¹ = (A⁻¹)^T. (3.3.34)

3.3.5.0.14 5. For a diagonal matrix D the following relations hold,

det D = ∏_{i=1}^{n} D_ii, (3.3.35)

D⁻¹ = [1/D_ii]. (3.3.36)
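These inversion rules can be verified numerically; the following Python/NumPy sketch is an addition to the notes (random matrices are nonsingular almost surely, so the inversions are assumed to exist):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
inv = np.linalg.inv

assert np.allclose(inv(A @ B), inv(B) @ inv(A))   # rule 1, (3.3.31)
assert np.allclose(inv(A).T, inv(A.T))            # rule 3, (3.3.33)

D = np.diag([2.0, 5.0, 10.0])
assert np.isclose(np.linalg.det(D), 2.0 * 5.0 * 10.0)   # rule 5, (3.3.35)
assert np.allclose(inv(D), np.diag(1.0 / np.diag(D)))   # rule 5, (3.3.36)
```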

3.4 Linear Mappings of Af£ne Vector Spaces

3.4.1 Matrix Multiplication as a Linear Mapping of Vectors

The linear mapping is defined by

y = A x, with x ∈ R^m, y ∈ R^l, and A ∈ R^{l×m}, (3.4.1)

and its components are

y_i = Σ_{j=1}^{m} A_ij x_j ⇔
y_1 = A_11 x_1 + ... + A_1j x_j + ... + A_1m x_m,
...
y_i = A_i1 x_1 + ... + A_ij x_j + ... + A_im x_m,
...
y_l = A_l1 x_1 + ... + A_lj x_j + ... + A_lm x_m. (3.4.2)

This linear function describes a mapping of the m-tuple (vector) x onto the l-tuple (vector) y with a matrix A. Furthermore the vector x ∈ R^m is described by a linear mapping with a matrix B and a vector z ∈ R^n,

x = B z, with x ∈ R^m, z ∈ R^n, and B ∈ R^{m×n}, (3.4.3)

with the components

x_j = Σ_{k=1}^{n} B_jk z_k. (3.4.4)

Inserting equation (3.4.4) in equation (3.4.2) implies

y_i = Σ_{j=1}^{m} A_ij x_j = Σ_{j=1}^{m} A_ij (Σ_{k=1}^{n} B_jk z_k) = Σ_{k=1}^{n} (Σ_{j=1}^{m} A_ij B_jk) z_k = Σ_{k=1}^{n} C_ik z_k. (3.4.5)

With this relation the matrix multiplication is defined by

A B = C, (3.4.6)

with the components C_ik given by

Σ_{j=1}^{m} A_ij B_jk = C_ik. (3.4.7)

The matrix multiplication is the combination or the composition of two linear mappings,

y = A x and x = B z ⇒ y = A (B z) = (A B) z = C z. (3.4.8)

Figure 3.2: Matrix multiplication for a composition of matrices (scheme with B_(m×n), A_(l×m) and the product C_(l×n) with element C_ik).

3.4.2 Similarity Transformation of Vectors

For a square matrix A a linear mapping is defined by

y = A x, with x, y ∈ Rⁿ, and A ∈ R^{n×n}. (3.4.9)

The two vectors x and y are described by the same linear mapping with the same nonsingular square matrix T and the vectors x̄ and ȳ. The vectors are called similar, because they are transformed in the same way,

x = T x̄, with x, x̄ ∈ Rⁿ, T ∈ R^{n×n}, and det T ≠ 0, (3.4.10)
y = T ȳ, with y, ȳ ∈ Rⁿ, T ∈ R^{n×n}, and det T ≠ 0. (3.4.11)

Inserting these relations in equation (3.4.9) and pre-multiplying with T⁻¹ implies

T ȳ = A T x̄, (3.4.12)
ȳ = T⁻¹ A T x̄, (3.4.13)

and finally

Ā = T⁻¹ A T. (3.4.14)

The matrix Ā = T⁻¹ A T is the result of the so-called similarity transformation of the matrix A with the nonsingular transformation matrix T. The matrices A and Ā are said to be similar matrices.

3.4.3 Characteristics of the Similarity Transformation

Similar matrices A and Ā have some typical characteristics:

• The determinants are equal,

det Ā = det (T⁻¹ A T) = det T⁻¹ det A det T, and det T⁻¹ = 1/det T, i.e.

det Ā = det A. (3.4.15)

• The traces are equal, since the trace of a matrix product is commutative,

tr (A B) = tr (B A), (3.4.16)

tr Ā = tr (T⁻¹ A T) = tr (A T T⁻¹), and finally

tr Ā = tr A. (3.4.17)

• Similar matrices have the same eigenvalues.

• Similar matrices have the same characteristic polynomial.

3.4.4 Congruence Transformation of Vectors

Let y = A x be a linear mapping,

y = A x, with x, y ∈ Rⁿ, and A ∈ R^{n×n}, (3.4.18)

with a square matrix A. The vectors x and y are transformed in an opposite way (contragredient) with the nonsingular square matrix T and the vectors x̄ and ȳ,

x = T x̄, with x, x̄ ∈ Rⁿ, T ∈ R^{n×n}, and det T ≠ 0. (3.4.19)

The vector ȳ is the result of the multiplication of the transpose of the matrix T and the vector y,

ȳ = T^T y, with y, ȳ ∈ Rⁿ, T ∈ R^{n×n}, and det T ≠ 0. (3.4.20)

Inserting equation (3.4.19) in equation (3.4.18) and pre-multiplying with T^T implies

T^T y = T^T A T x̄,

and comparing this with equation (3.4.20) finally implies

ȳ = Ā x̄, (3.4.21)
Ā = T^T A T. (3.4.22)

The matrix product Ā = T^T A T is called the congruence transformation of the matrix A. The matrices A and Ā are called congruent matrices.

3.4.5 Characteristics of the Congruence Transformation

Congruent matrices A and Ā have some typical characteristics:

• The congruence transformation keeps the matrix A symmetric,

condition: A = A^T, (3.4.23)
assumption: Ā = Ā^T, (3.4.24)
proof: Ā^T = (T^T A T)^T = T^T A^T T = T^T A T = Ā. (3.4.25)

• The product P = x^T y = x^T A x is an invariant scalar quantity,

assumption: P = x^T y = P̄ = x̄^T ȳ, (3.4.26)
proof: x = T x̄ ⇒ x̄ = T⁻¹ x, and ȳ = T^T y, with det T ≠ 0, (3.4.27)

x̄^T ȳ = (T⁻¹ x)^T T^T y = x^T (T⁻¹)^T T^T y = x^T (T^T)⁻¹ T^T y = x^T y.

The scalar product P = x^T y = x^T A x is also called the quadratic form of the vector x. The quantity P could describe a mechanical work, if the elements of the vector x describe a displacement and the components of the vector y describe the assigned forces of a static system. The invariance of this work under congruence transformations is important for numerical mechanics, e.g. for the finite element method.
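The invariance of the quadratic form under the congruence transformation can be checked numerically. A Python/NumPy sketch (an addition to the notes; random matrices are nonsingular almost surely):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 3
A = rng.standard_normal((n, n)); A = 0.5 * (A + A.T)  # symmetric matrix
T = rng.standard_normal((n, n))                       # transformation matrix
x = rng.standard_normal(n)

A_bar = T.T @ A @ T                   # congruence transformation, cf. (3.4.22)
assert np.allclose(A_bar, A_bar.T)    # symmetry is preserved, cf. (3.4.25)

x_bar = np.linalg.inv(T) @ x          # contragredient transformation, cf. (3.4.27)
y = A @ x
y_bar = T.T @ y
assert np.isclose(x @ y, x_bar @ y_bar)  # P = x^T y is invariant, cf. (3.4.26)
```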

3.4.6 Orthogonal Transformation

Let the square matrix A describe a linear mapping,

y = A x, with x, y ∈ Rⁿ, and A ∈ R^{n×n}. (3.4.28)

The vectors x and y will be transformed in the similar way and in the congruent way with the so-called orthogonal matrix T = Q, det Q ≠ 0,

x = Q x̄, y = Q ȳ ⇒ ȳ = Q⁻¹ y → similarity transformation, (3.4.29)
ȳ = Q^T y → congruence transformation. (3.4.30)

For the orthogonal transformation the transformation matrices are called orthogonal, if they fulfill the relations

Q⁻¹ = Q^T or Q Q^T = 1. (3.4.31)

For the orthogonal matrices the following identities hold.

• If a matrix is orthogonal, its inverse equals the transpose of the matrix.

• The determinant of an orthogonal matrix has the value +1 or −1,

det Q = ±1. (3.4.32)

• The product of orthogonal matrices is again orthogonal.

• An orthogonal matrix Q with det Q = +1 is called a rotation matrix.

Figure 3.3: Orthogonal transformation (rotation of the coordinate axes y_1, y_2 into the axes ȳ_1, ȳ_2 by the angle α).

The most important usage of these rotation matrices is the rotation transformation of coordinates. For example the rotation transformation in R² is given by

y = Q ȳ, (3.4.33)
[y_1; y_2] = [cos α −sin α; sin α cos α] [ȳ_1; ȳ_2] ⇒ y = Q ȳ, (3.4.34)

and

ȳ = Q^T y, (3.4.35)
[ȳ_1; ȳ_2] = [cos α sin α; −sin α cos α] [y_1; y_2] ⇒ ȳ = Q^T y. (3.4.36)

The inversion of equation (3.4.33) with the aid of determinants implies

ȳ = Q⁻¹ y; Q⁻¹ := X; X_ik = Q̂_ki / det Q. (3.4.37)

Solving these equations step by step, starting with computing the determinant,

det Q = cos² α + sin² α = 1, (3.4.38)

the general form of the equation to compute the elements of the inverse of the matrix,

Q̂_ki = (−1)^{k+i} det Q*_ki, (3.4.39)

the different elements,

X_11 = Q_22/1 = cos α, (3.4.40)
X_12 = (−1)³ Q_12 = (−1)³ (−sin α) = +sin α, (3.4.41)
X_21 = (−1)³ Q_21 = (−1)³ (sin α) = −sin α, (3.4.42)
X_22 = Q_11 = cos α, (3.4.43)

and finally

X = Q⁻¹ = [cos α sin α; −sin α cos α]. (3.4.44)

Comparing this result with equation (3.4.35) leads to

Q⁻¹ = Q^T. (3.4.45)
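The defining properties of the rotation matrix can be confirmed for an arbitrary angle. A Python/NumPy sketch (an addition to the notes; the angle is an arbitrary example value):

```python
import numpy as np

alpha = 0.3  # an arbitrary rotation angle in radians
Q = np.array([[np.cos(alpha), -np.sin(alpha)],
              [np.sin(alpha),  np.cos(alpha)]])

assert np.isclose(np.linalg.det(Q), 1.0)     # det Q = +1, a rotation matrix
assert np.allclose(np.linalg.inv(Q), Q.T)    # Q^-1 = Q^T, cf. (3.4.45)
assert np.allclose(Q @ Q.T, np.eye(2))       # Q Q^T = 1, cf. (3.4.31)
```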

3.4.7 The Gauss Transformation

Let A_(m×n) be a real-valued matrix, A_ik ∈ R, with m > n, and let the matrix A be nonsingular w.r.t. the columns, i.e. the column vectors are linearly independent. The Gauss transformation is defined by

B = A^T A, with B ∈ R^{n×n}, A^T ∈ R^{n×m}, and A ∈ R^{m×n}. (3.4.46)

The matrix B is symmetric, i.e.

B = B^T, (3.4.47)

because

B^T = (A^T A)^T = A^T A = B. (3.4.48)

If the columns of A are linearly independent, then the matrix B is nonsingular, i.e. the determinant is not equal to zero,

det B ≠ 0. (3.4.49)

This product was introduced by Gauss in order to compute the so-called normal equations. The matrix A is given by its column vectors,

A = [a_1 a_2 ··· a_n], a_i = i-th column vector of A, (3.4.50)

and the matrix B is computed by

B = A^T A (3.4.51)
= [a_1^T; a_2^T; ...; a_n^T] [a_1 a_2 ··· a_n] = [a_1^T a_1, a_1^T a_2, ···, a_1^T a_n; a_2^T a_1, a_2^T a_2, ···, a_2^T a_n; ...; a_n^T a_1, ···, a_n^T a_n]_(n×n). (3.4.52)

An element B_ik of the product matrix is the scalar product of the i-th column vector with the k-th column vector of A,

B_ik = a_i^T a_k. (3.4.53)

The diagonal elements are the squared values of the norms of the column vectors, and these values are always positive (a_i ≠ 0). Their sum, i.e. the trace of the product A^T A or equivalently the sum of all A_ik², is the squared value of a matrix norm, called the Euclidean matrix norm N(A),

N(A) = √(tr (A^T A)) = √(Σ_{i,k} A_ik²). (3.4.54)

For example,

A = [2 1; 0 3; −3 2]; A^T A = [13 −4; −4 14],

N(A) = √(2² + 1² + 0² + 3² + (−3)² + 2²) = 3√3 = √(Σ A_ik²) = √(13 + 14) = √(tr (A^T A)).

The matrix B = A^T A is positive definite.
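The worked example can be reproduced directly. A Python/NumPy sketch (an addition to the notes, using the example matrix from above):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0],
              [-3.0, 2.0]])   # the example matrix from above

B = A.T @ A                   # Gauss transformation, cf. (3.4.46)
assert np.allclose(B, np.array([[13.0, -4.0], [-4.0, 14.0]]))
assert np.allclose(B, B.T)    # symmetric, cf. (3.4.47)

# Euclidean (Frobenius) matrix norm, cf. (3.4.54): N(A) = 3 * sqrt(3)
N = np.sqrt(np.trace(B))
assert np.isclose(N, np.linalg.norm(A, 'fro'))
assert np.isclose(N, 3.0 * np.sqrt(3.0))

# B is positive definite: all eigenvalues are positive
assert np.all(np.linalg.eigvalsh(B) > 0)
```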

3.5 Quadratic Forms

3.5.1 Representations and Characteristics

Let y = A x be a linear mapping,

y = A x, with x ∈ Rⁿ, y ∈ Rⁿ, and A_ik ∈ R, (3.5.1)

with the nonsingular, square and symmetric matrix A, i.e.

det A ≠ 0, and A = A^T. (3.5.2)

Then the product

α := x^T y = x^T A x (3.5.3)

is a real number, α ∈ R, and is called the quadratic form of A. The following conditions hold,

α = α^T, scalar quantities are invariant w.r.t. transposition, and (3.5.4)
α = x^T A x = α^T = x^T A^T x, because A = A^T, (3.5.5)

i.e. the matrix A must be symmetric. The scalar quantity α, and then the matrix A, too, are called positive definite (or negative definite), if the following conditions hold,

α = x^T A x > 0 (< 0) for every x ≠ 0, and α = 0, iff x = 0. (3.5.6)

It is necessary, that the determinant does not equal zero, det A ≠ 0, i.e. the matrix A must be nonsingular. If there exists a vector x ≠ 0, such that α = 0, then the form α = x^T A x is called semidefinite. In this case the matrix A is singular, i.e. det A = 0, and the homogeneous system of equations,

A x = 0 → x^T A x = 0, iff x ≠ 0, and det A = 0, (3.5.7)

or resp.

a_1 x_1 + a_2 x_2 + ... + a_n x_n = 0, (3.5.8)

has nontrivial solutions, because of the linear dependence of the columns of the matrix A. The condition x^T A x = 0 could only hold for a vector x ≠ 0, iff the determinant of the matrix A equals zero, det A = 0.

3.5.2 Congruence Transformation of a Matrix

Let α = x^T A x be a quadratic form,

α = x^T A x, and A^T = A. (3.5.9)

The vector x is treated by the nonsingular transformation T,

x = T y, (3.5.10)
⇒ α = y^T T^T A T y = y^T B y. (3.5.11)

The matrix A transforms like

B = T^T A T, (3.5.12)

where T is a real nonsingular matrix. Then the matrices B and A are called congruent to each other,

A ~c B. (3.5.13)

The congruence transformation preserves the symmetry of the matrix A, because the following equation holds,

B = B^T. (3.5.14)

3.5.3 Derivatives of a Quadratic Form

The quadratic form

α = x^T A x, and A^T = A, (3.5.15)

should be partially derived w.r.t. the components of the vector x. The result forms the column matrix ∂α/∂x. The derivative of the vector x w.r.t. its i-th component is

∂x/∂x_i = [0; 0; ...; 1; ...; 0] = e_i, the i-th unit vector, (3.5.16)

and

∂x^T/∂x_i = [0 0 ··· 1 ··· 0] = e_i^T. (3.5.17)

With equations (3.5.16) and (3.5.17) the derivative of the quadratic form is given by

∂α/∂x_i = e_i^T A x + x^T A e_i. (3.5.18)

With the symmetry of the matrix A,

A = A^T, (3.5.19)

the second part of equation (3.5.18) is rewritten as

(x^T A e_i)^T = e_i^T A^T x = e_i^T A x, (3.5.20)

and finally

∂α/∂x_i = 2 e_i^T A x. (3.5.21)

The quantity e_i^T A x is the i-th component of the vector A x. Furthermore the n derivatives ∂α/∂x_i are combined as a column matrix,

∂α/∂x = [∂α/∂x_1; ∂α/∂x_2; ...; ∂α/∂x_n] = 2 · 1 · A x, i.e.

∂α/∂x = 2 A x. (3.5.22)
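The gradient formula (3.5.22) can be confirmed against a finite-difference approximation. A Python/NumPy sketch (an addition to the notes, with an arbitrary symmetric random matrix):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4
A = rng.standard_normal((n, n)); A = 0.5 * (A + A.T)  # symmetric matrix
x = rng.standard_normal(n)

alpha = lambda v: v @ A @ v        # quadratic form, cf. (3.5.15)

# central finite differences for d(alpha)/dx_i, compared with 2 A x, cf. (3.5.22)
h = 1e-6
grad_fd = np.array([(alpha(x + h * e) - alpha(x - h * e)) / (2 * h)
                    for e in np.eye(n)])
assert np.allclose(grad_fd, 2 * A @ x, atol=1e-5)
```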

3.6 Matrix Eigenvalue Problem

3.6.1 The Special Eigenvalue Problem

For a given linear mapping

y_(n×1) = A_(n×n) x_(n×1), with x, y ∈ Rⁿ, x_i, y_i, A_ik ∈ F, and det A ≠ 0, (3.6.1)

find the vectors x_0 which are mapped onto vectors y_0 of the same direction,

y_0 = λ x_0, and λ ∈ F. (3.6.2)

The direction associated to the eigenvector x_0 is called a principal axis. The whole task is described as the principal axes problem of the matrix A. The scalar quantity λ is called the eigenvalue; because of this definition the whole problem is also called the eigenvalue problem. Equation (3.6.2) could be rewritten like this,

y_0 = λ 1 x_0, (3.6.3)

and inserting this in equation (3.6.1) implies

y_0 = A x_0 = λ 1 x_0, (3.6.4)

and finally the special eigenvalue problem,

(A − λ1) x_0 = 0. (3.6.5)

The so-called special eigenvalue problem is characterized by the eigenvalues λ distributed only on the main diagonal. For the homogeneous linear equation system there exists the trivial solution x_0 = 0. A nontrivial solution x_0 exists only if this condition is fulfilled,

det (A − λ1) = 0. (3.6.6)

This equation is called the characteristic equation, and the left-hand side det (A − λ1) is called the characteristic polynomial. The components of the vector x_0 are yet unknown. The vector x_0 can only be determined up to its norm, because only the principal axes (directions) are searched. Solving the determinant implies for a matrix with n rows a polynomial of n-th degree. The roots, sometimes also called the zeros, of this equation or polynomial are the eigenvalues,

p (λ) = det (λ1 − A) = λⁿ + a_{n−1} λ^{n−1} + ... + a_1 λ + a_0. (3.6.7)

The first and the last coefficient of the polynomial are given by

a_{n−1} = − tr A, and (3.6.8)
a_0 = (−1)ⁿ det A. (3.6.9)
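A small numerical illustration of the special eigenvalue problem and the coefficient formulas (3.6.8) and (3.6.9). This Python/NumPy sketch is an addition to the notes; the matrix is an arbitrary example.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])           # an arbitrary example matrix

lam, X = np.linalg.eig(A)            # eigenvalues and eigenvectors
for i in range(2):
    # the special eigenvalue problem (3.6.5): (A - lambda*1) x_0 = 0
    assert np.allclose(A @ X[:, i], lam[i] * X[:, i])

# coefficients of p(lambda) = det(lambda*1 - A), highest power first
coeffs = np.poly(A)                  # here: [1, -5, 6]
n = A.shape[0]
assert np.isclose(coeffs[1], -np.trace(A))                    # a_{n-1} = -tr A
assert np.isclose(coeffs[-1], (-1) ** n * np.linalg.det(A))   # a_0 = (-1)^n det A
```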

With the polynomial factorization the equation p (λ) or the polynomial (3.6.7) could be described by

p (λ) = (λ − λ_1) (λ − λ_2) · ... · (λ − λ_n). (3.6.10)

Comparing this with Newton's relation for a symmetric polynomial, the equations (3.6.8) and (3.6.9) could be rewritten like this,

tr A = λ_1 + λ_2 + ... + λ_n, and (3.6.11)
det A = λ_1 · λ_2 · ... · λ_n. (3.6.12)

Because the eigenvector associated to an eigenvalue could only be determined up to a scalar factor, the eigenvectors are normalized. The eigenvalue equations are

y_0i = A x_0i = λ_i x_0i, with i = 1, 2, 3, ..., n, (3.6.13)

with the eigenvectors x_0i. If for example the matrix (A − λ_i 1) has the reduction of rank d = 1 and the vector x_0i^(1) is an arbitrary solution of the eigenvalue problem, then the equation

x_0i = c x_0i^(1), (3.6.14)

with the parameter c, represents the general solution of the eigenvalue problem. If the reduction of rank of the matrix is larger than 1, then there exist d > 1 linearly independent eigenvectors associated to this eigenvalue. As a rule of thumb,

eigenvectors of different eigenvalues are linearly independent.

For a symmetric matrix A the following identities hold.

3.6.1.0.15 1st Rule. The eigenvectors x_0i of a nonsingular and symmetric matrix A are orthogonal to each other.

3.6.1.0.16 Proof. Let the vectors x_0i = x_i, here x_1 and x_2, be eigenvectors associated to different eigenvalues λ_1 ≠ λ_2,

(A − λ_1 1) x_1 = 0, (3.6.15)

multiplied with the vector x_2 from the left-hand side,

x_2^T (A − λ_1 1) x_1 = 0, (3.6.16)

and also

x_1^T (A − λ_2 1) x_2 = 0, (3.6.17)

and finally equation (3.6.16) subtracted from equation (3.6.17),

−x_2^T A x_1 + λ_1 x_2^T x_1 + x_1^T A x_2 − λ_2 x_1^T x_2 = 0. (3.6.18)

With A being symmetric,

α = x_1^T A x_2 = (x_1^T A x_2)^T = x_2^T A x_1, and A = A^T, (3.6.19)

this results in

(λ_1 − λ_2) x_1^T x_2 = 0, (3.6.20)

and because (λ_1 − λ_2) ≠ 0, the scalar product x_1^T x_2 = 0 must equal zero; this means that the vectors are orthogonal,

x_1 ⊥ x_2, iff λ_1 ≠ λ_2. (3.6.21)

Furthermore the following rule holds.

3.6.1.0.17 2nd Rule. A real, nonsingular and symmetric square matrix with n rows has exactly n real eigenvalues λ_i, being the roots of its characteristic equation.

3.6.1.0.18 Proof. Let a pair of eigenvalues be complex conjugate numbers, given by

λ_1 = β + iγ, and λ_2 = β − iγ, (3.6.22)

then the associated eigenvectors are complex conjugate, too,

x_1 = b + ic, and x_2 = b − ic. (3.6.23)

Inserting these relations in the above orthogonality condition,

(λ_1 − λ_2) x_1^T x_2 = 0, (3.6.24)

implies

(λ_1 − λ_2) (b^T + ic^T) (b − ic) = 0, (3.6.25)

and finally

2iγ (b^T b + c^T c) = 0. (3.6.26)

This equation implies γ = 0, because the term b^T b + c^T c ≠ 0 is nonzero, i.e. the eigenvalues are real numbers.
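Both rules for symmetric matrices can be observed numerically. A Python/NumPy sketch (an addition to the notes, with an arbitrary symmetric example matrix):

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])   # an arbitrary symmetric example matrix

lam, X = np.linalg.eigh(A)        # eigh: real eigenvalues of a symmetric matrix

# 1st rule: the eigenvectors are orthogonal (here even orthonormal)
assert np.allclose(X.T @ X, np.eye(3))
# 2nd rule: the n eigenvalues are real (eigh returns them as real numbers)
assert lam.dtype == np.float64
# trace and determinant, cf. (3.6.11) and (3.6.12)
assert np.isclose(lam.sum(), np.trace(A))
assert np.isclose(lam.prod(), np.linalg.det(A))
```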

3.6.2 Rayleigh Quotient

The largest eigenvalue λ_1 of a symmetric matrix could be estimated with the Rayleigh quotient. The special eigenvalue problem y = A x = λ x, or

(A − λ1) x = 0, with A = A^T, det A ≠ 0, A_ij ∈ R, (3.6.27)

has the n eigenvalues, in order of magnitude,

|λ_1| ≥ |λ_2| ≥ |λ_3| ≥ ... ≥ |λ_n|, with λ ∈ R. (3.6.28)

For large matrices A the setting-up and the solution of the characteristic equation is very complicated. Furthermore for some problems it is sufficient to know only the largest and/or the smallest eigenvalue, e.g. for a stability problem only the smallest eigenvalue is of interest, because this is the critical load. Therefore the so-called direct method by von Mises to compute the approximated eigenvalue λ_1 is interesting. For using this method to determine the smallest, critical load case it is necessary to compute the inverse before starting with the actual iteration. This so-called von Mises iteration is given by

z_ν = A z_{ν−1} = A^ν z_0. (3.6.29)

In this iterative process the vector z_ν converges to x_1, i.e. the vector converges to the eigenvector associated with the eigenvalue λ_1 with the largest absolute value. The starting vector z_0 is represented by the linearly independent eigenvectors x_i,

z_0 = C_1 x_1 + C_2 x_2 + ... + C_n x_n ≠ 0, (3.6.30)

and an arbitrary iterated vector is given by

z_ν = λ_1^ν C_1 x_1 + λ_2^ν C_2 x_2 + ... + λ_n^ν C_n x_n ≠ 0. (3.6.31)

If the condition |λ_1| ≥ |λ_2| ≥ |λ_3| ≥ ... ≥ |λ_n| holds, then with the raising value of ν the vector z_ν converges to the eigenvector x_1 multiplied with a constant c_1,

z_ν → λ_1^ν c_1 x_1, (3.6.32)
z_{ν+1} → λ_1 z_ν. (3.6.33)

A componentwise ratio is given by

q_i^(ν) = z_i^(ν) / z_i^(ν−1) → λ_1. (3.6.34)

The convergence will be better, if the ratio |λ_1| / |λ_2| increases. A very good approximated value Λ_1 for the dominant (largest) eigenvalue λ_1 is established with the so-called Rayleigh quotient,

Λ_1 = R [z_ν] = (z_ν^T z_{ν+1}) / (z_ν^T z_ν) = (z_ν^T A z_ν) / (z_ν^T z_ν), with Λ_1 ≤ λ_1. (3.6.35)

The numerator and the denominator of the Rayleigh quotient include scalar products of the approximated vectors. For this reason the information of all components q_i^(ν) is used in this approximation.

3.6.3 The General Eigenvalue Problem

The general eigenvalue problem is defined by

A x = λ B x, (3.6.36)
(A − λ B) x = 0, (3.6.37)

with the matrices A and B being nonsingular. The eigenvalues λ are multiplied with an arbitrary matrix B and not with the identity matrix 1. This problem is reduced to the special eigenvalue problem by multiplication with the inverse of matrix B from the left-hand side,

B⁻¹ A x − λ 1 x = 0, (3.6.38)
(B⁻¹ A − λ1) x = 0. (3.6.39)

Even if the matrices A and B are symmetric, the matrix C = B⁻¹ A is in general a nonsymmetric matrix, because the matrix multiplication is noncommutative.

3.6.4 Similarity Transformation

In the special eigenvalue problem

A x = y = λ x = λ 1 x, (3.6.40)

the vectors are transformed like in a similarity transformation,

x = T x̃, and y = T ỹ, with ỹ = λ 1 x̃. (3.6.41)

The transformation matrix T is nonsingular, i.e. det T ≠ 0, and T_ik ∈ R. This implies

A T x̃ = λ T x̃,
T⁻¹ A T x̃ = λ x̃,
(T⁻¹ A T − λ1) x̃ = 0, and (3.6.42)
(Ã − λ1) x̃ = 0. (3.6.43)

The determinant of the inverse of matrix T is given by

det T⁻¹ = 1 / det T, (3.6.44)

and the determinant of the product is split in the product of determinants,

det (T⁻¹ A T) = det T⁻¹ det A det T, (3.6.45)

i.e. det Ã = det A.

3.6.4.0.19 Rule. The eigenvalues of the matrix A do not change if the matrix is transformed into the similar matrix Ã,

det (T⁻¹ A T − λ1) = det (Ã − λ1) = det (A − λ1) = 0. (3.6.46)
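The following Python/NumPy sketch (an addition to the notes; the matrices are arbitrary examples) illustrates the von Mises iteration with the Rayleigh quotient of section 3.6.2 and the invariance rule (3.6.46):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])               # symmetric example matrix

# von Mises iteration, cf. (3.6.29), with the Rayleigh quotient (3.6.35)
z = np.array([1.0, 1.0])                 # starting vector with C_1 != 0
for _ in range(50):
    z = A @ z
    z /= np.linalg.norm(z)               # normalization avoids overflow
Lambda1 = (z @ A @ z) / (z @ z)
assert np.isclose(Lambda1, np.max(np.abs(np.linalg.eigvalsh(A))))

# a similarity transformation does not change the eigenvalues, cf. (3.6.46)
T = np.array([[1.0, 2.0], [0.0, 1.0]])   # a nonsingular transformation matrix
A_tilde = np.linalg.inv(T) @ A @ T
assert np.allclose(np.sort(np.linalg.eigvals(A_tilde).real),
                   np.sort(np.linalg.eigvalsh(A)))
```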

3.6.5 Transformation into a Diagonal Matrix

The nonsingular symmetric matrix A with n rows contains n linearly independent eigenvectors x_i, if and only if for any multiple eigenvalue λ_σ (i.e. any multiple root of the characteristic polynomial) of multiplicity p_σ the reduction of rank d_σ of the characteristic matrix (A − λ_σ 1) equals the multiplicity of the multiple eigenvalue, d_σ = p_σ (with σ = 1, 2, ..., s). The quantity s describes the number of different eigenvalues. The n linearly independent normed eigenvectors x_i of the matrix A are combined as column vectors to form the nonsingular eigenvector matrix,

X = [x_1, x_2, ..., x_n], with det X ≠ 0. (3.6.47)

The equation of eigenvalues is given by

A x_i = λ_i x_i, (3.6.48)
A X = [A x_1, A x_2, ..., A x_n] = [λ_1 x_1, λ_2 x_2, ..., λ_n x_n], (3.6.49)
[λ_1 x_1, ..., λ_n x_n] = [x_1, x_2, ..., x_n] ⌈λ_1 λ_2 ··· λ_n⌋, (3.6.50)
[λ_1 x_1, ..., λ_n x_n] = X Λ. (3.6.51)

Combining the results implies

A X = X Λ, (3.6.52)

and finally

X⁻¹ A X = Λ, with det X ≠ 0. (3.6.53)

Therefore the diagonal matrix of eigenvalues Λ could be computed by the similarity transformation of the matrix A with the eigenvector matrix X. In the opposite direction a transformation matrix T must fulfill some conditions, in order to transform a matrix A by a similarity transformation into a diagonal matrix, i.e.

T⁻¹ A T = D = ⌈D_ii⌋, (3.6.54)

or

A T = T D, with T = [t_1, ..., t_n], (3.6.55)

and finally

A t_i = D_ii t_i. (3.6.56)

The column vectors t_i of the transformation matrix T are the n linearly independent eigenvectors of the matrix A with the associated eigenvalues λ_i = D_ii.

3.6.6 Cayley-Hamilton Theorem

The Cayley-Hamilton theorem says, that an arbitrary square matrix A satisfies its own characteristic equation. If the characteristic polynomial for the matrix A is

p (λ) = det (λ1 − A) (3.6.57)
= λⁿ + a_{n−1} λ^{n−1} + ... + a_1 λ + a_0, (3.6.58)

then the matrix A solves the Cayley-Hamilton equation

p (A) = Aⁿ + a_{n−1} A^{n−1} + ... + a_1 A + a_0 1 = 0, (3.6.59)

where the matrix A to the power n is given by

Aⁿ = A A ··· A (n factors). (3.6.60)

The matrix with the exponent n, written like Aⁿ, could be described by a linear combination of the matrices with the exponents n − 1 down to 0, resp. A^{n−1} till A⁰ = 1. If the matrix A is nonsingular, then also negative exponents are allowed; e.g. multiplying p (A) = 0 with A⁻¹ yields

A^{n−1} + a_{n−1} A^{n−2} + ... + a_1 1 + a_0 A⁻¹ = 0, (3.6.61)
A⁻¹ = −(1/a_0) (A^{n−1} + a_{n−1} A^{n−2} + ... + a_1 1), a_0 ≠ 0. (3.6.62)

Furthermore the power series P (A) of a matrix A, with the eigenvalues λ_σ appearing μ_σ-times in the minimal polynomial, converges, if and only if the usual power series converges for all eigenvalues λ_σ of the matrix A. For example,

e^A = 1 + A + (1/2!) A² + (1/3!) A³ + ..., (3.6.63)
cos (A) = 1 − (1/2!) A² + (1/4!) A⁴ − + ..., (3.6.64)
sin (A) = A − (1/3!) A³ + (1/5!) A⁵ − + .... (3.6.65)

3.6.7 Proof of the Cayley-Hamilton Theorem

A vector z ∈ Rⁿ is represented by a combination of the n linearly independent eigenvectors x_i of the matrix A, which is similar to a diagonal matrix with n rows,

z = c_1 x_1 + c_2 x_2 + ... + c_n x_n, (3.6.66)

with the c_i called the evaluation coefficients. Introducing some basic vectors and matrices, in order to establish the evaluation theorem,

X = [x_1, x_2, ..., x_n], (3.6.67)
c = [c_1, c_2, ..., c_n]^T, (3.6.68)
X c = z, and (3.6.69)
c = X⁻¹ z. (3.6.70)
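Both the diagonalization (3.6.53) and the Cayley-Hamilton equation (3.6.59) can be checked numerically. A Python/NumPy sketch (an addition to the notes, with an arbitrary symmetric example matrix):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])         # symmetric example matrix

lam, X = np.linalg.eigh(A)         # eigenvector matrix X, cf. (3.6.47)
Lambda = np.linalg.inv(X) @ A @ X  # similarity transformation, cf. (3.6.53)
assert np.allclose(Lambda, np.diag(lam))

# Cayley-Hamilton: p(A) = 0 with p(lambda) = det(lambda*1 - A), cf. (3.6.59)
coeffs = np.poly(A)                # [1, a_{n-1}, ..., a_0], highest power first
n = A.shape[0]
pA = sum(c * np.linalg.matrix_power(A, n - k) for k, c in enumerate(coeffs))
assert np.allclose(pA, np.zeros_like(A))
```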

Let z_0 be an arbitrary real vector to start with, and establish some iterated vectors,

z_1 = A z_0, (3.6.71)
z_2 = A z_1 = A² z_0, (3.6.72)
...
z_n = A z_{n−1} = Aⁿ z_0. (3.6.73)

The n + 1 vectors z_0 till z_n are linearly dependent, because every n + 1 vectors in Rⁿ must be linearly dependent. The characteristic polynomial of the matrix A is given by

p (λ) = det (λ1 − A) (3.6.74)
= a_0 + a_1 λ + ... + a_{n−1} λ^{n−1} + λⁿ. (3.6.75)

The relation between the starting vector z_0 and the first n iterated vectors z_i is given by the following equations, the evaluation theorem,

z_0 = c_1 x_1 + c_2 x_2 + ... + c_n x_n, (3.6.76)

and the eigenvalue problem,

A x_i = λ_i x_i, and p (λ) = det (λ1 − A) = 0. (3.6.77)

The vectors z_0 till z_n are iterated by

z_0 = z_0, (3.6.78)

and

z_1 = A z_0 = c_1 A x_1 + c_2 A x_2 + ... + c_n A x_n = λ_1 c_1 x_1 + λ_2 c_2 x_2 + ... + λ_n c_n x_n, (3.6.79)
...
z_n = λ_1ⁿ c_1 x_1 + λ_2ⁿ c_2 x_2 + ... + λ_nⁿ c_n x_n, (3.6.80)

and finally summed like this, with a_n = 1,

z_0 = z_0 | · a_0
z_1 = λ_1 c_1 x_1 + λ_2 c_2 x_2 + ... + λ_n c_n x_n | · a_1
...
z_n = λ_1ⁿ c_1 x_1 + λ_2ⁿ c_2 x_2 + ... + λ_nⁿ c_n x_n | · 1
(3.6.81)

leads to the result

a_0 z_0 + a_1 z_1 + ... + z_n = (a_0 + a_1 λ_1 + ... + a_{n−1} λ_1^{n−1} + λ_1ⁿ) c_1 x_1
+ (a_0 + a_1 λ_2 + ... + a_{n−1} λ_2^{n−1} + λ_2ⁿ) c_2 x_2
...
+ (a_0 + a_1 λ_n + ... + a_{n−1} λ_n^{n−1} + λ_nⁿ) c_n x_n. (3.6.82)

With equations (3.6.71)-(3.6.73),

(a_0 1 + a_1 A + ... + Aⁿ) z_0 = p (λ_1) c_1 x_1 + p (λ_2) c_2 x_2 + ... + p (λ_n) c_n x_n,
p (A) z_0 = 0 · c_1 x_1 + 0 · c_2 x_2 + ... + 0 · c_n x_n,

and finally the evaluation theorem,

p (A) z_0 = a_0 z_0 + a_1 z_1 + ... + z_n = 0. (3.6.83)

Inserting the iterated vectors z_k = A^k z_0, see equations (3.6.71)-(3.6.73), in equation (3.6.83) leads to

p (A) z_0 = a_0 z_0 + a_1 A z_0 + ... + Aⁿ z_0 (3.6.84)
= (a_0 1 + a_1 A + ... + Aⁿ) z_0 = 0, (3.6.85)

and with an arbitrary vector z_0 the term in brackets must equal the zero matrix,

a_0 1 + a_1 A + ... + Aⁿ = 0. (3.6.86)

In other words, an arbitrary square matrix A solves its own characteristic equation. If the characteristic polynomial of the matrix A is given by equation (3.6.74), then the matrix A solves the so-called Cayley-Hamilton equation,

p (A) = a_0 1 + a_1 A + ... + Aⁿ = 0. (3.6.87)

The polynomial p (A) of the matrix A equals the zero matrix, and the a_i are the coefficients of the characteristic polynomial of matrix A,

p (λ) = det (λ1 − A) = λⁿ + a_{n−1} λ^{n−1} + ... + a_1 λ + a_0 = 0. (3.6.88)

Chapter 4

Vector and Tensor Algebra

For example SIMMONDS [12], HALMOS [6], MATTHEWS [11], and ABRAHAM, MARSDEN, and RATIU [1]. And in German DE BOER [3], STEIN ET AL. [13], and IBEN [7].


Chapter Table of Contents

4.1 Index Notation and Basis . . . 78
   4.1.1 The Summation Convention . . . 78
   4.1.2 The Kronecker delta . . . 79
   4.1.3 The Covariant Basis and Metric Coefficients . . . 80
   4.1.4 The Contravariant Basis and Metric Coefficients . . . 81
   4.1.5 Raising and Lowering of an Index . . . 82
   4.1.6 Relations between Co- and Contravariant Metric Coefficients . . . 83
   4.1.7 Co- and Contravariant Coordinates of a Vector . . . 84
4.2 Products of Vectors . . . 85
   4.2.1 The Scalar Product or Inner Product of Vectors . . . 85
   4.2.2 Definition of the Cross Product of Base Vectors . . . 87
   4.2.3 The Permutation Symbol in Cartesian Coordinates . . . 87
   4.2.4 Definition of the Scalar Triple Product of Base Vectors . . . 88
   4.2.5 Introduction of the Determinant with the Permutation Symbol . . . 89
   4.2.6 Cross Product and Scalar Triple Product of Arbitrary Vectors . . . 90
   4.2.7 The General Components of the Permutation Symbol . . . 92
   4.2.8 Relations between the Permutation Symbols . . . 93
   4.2.9 The Dyadic Product or the Direct Product of Vectors . . . 94
4.3 Tensors . . . 96
   4.3.1 Introduction of a Second Order Tensor . . . 96
   4.3.2 The Definition of a Second Order Tensor . . . 97
   4.3.3 The Complete Second Order Tensor . . . 99
4.4 Transformations and Products of Tensors . . . 101
   4.4.1 The Transformation of Base Vectors . . . 101
   4.4.2 Collection of Transformations of Basis . . . 103
   4.4.3 The Tensor Product of Second Order Tensors . . . 105
   4.4.4 The Scalar Product or Inner Product of Tensors . . . 110
4.5 Special Tensors and Operators . . . 112
   4.5.1 The Determinant of a Tensor in Cartesian Coordinates . . . 112
   4.5.2 The Trace of a Tensor . . . 112
   4.5.3 The Volumetric and Deviator Tensor . . . 113
   4.5.4 The Transpose of a Tensor . . . 114
   4.5.5 The Symmetric and Antisymmetric (Skew) Tensor . . . 115
   4.5.6 The Inverse of a Tensor . . . 115
   4.5.7 The Orthogonal Tensor . . . 116
   4.5.8 The Polar Decomposition of a Tensor . . . 117
   4.5.9 The Physical Components of a Tensor . . . 118
   4.5.10 The Isotropic Tensor . . . 119
4.6 The Principal Axes of a Tensor . . . 120
   4.6.1 Introduction to the Problem . . . 120
   4.6.2 Components in a Cartesian Basis . . . 122
   4.6.3 Components in a General Basis . . . 122
   4.6.4 Characteristic Polynomial and Invariants . . . 123
   4.6.5 Principal Axes and Eigenvalues of Symmetric Tensors . . . 124
   4.6.6 Real Eigenvalues of a Symmetric Tensor . . . 124
   4.6.7 Example . . . 125
   4.6.8 The Eigenvalue Problem in a General Basis . . . 125
4.7 Higher Order Tensors . . . 127
   4.7.1 Review on Second Order Tensor . . . 127
   4.7.2 Introduction of a Third Order Tensor . . . 127
   4.7.3 The Complete Permutation Tensor . . . 128
   4.7.4 Introduction of a Fourth Order Tensor . . . 128
   4.7.5 Tensors of Various Orders . . . 129
78 Chapter 4. Vector and Tensor Algebra 4.1. Index Notation and Basis 79

4.1 Index Notation and Basis

4.1.1 The Summation Convention

For a product the summation convention, invented by Einstein, holds if one index of summation is a superscript index and the other one is a subscript index. This repeated index implies that the term is to be summed from i = 1 to i = n in general,

Σ_{i=1}^{n} a_i b^i = a_1 b^1 + a_2 b^2 + ... + a_n b^n = a_i b^i , (4.1.1)

and for the special case of n = 3 like this,

Σ_{j=1}^{3} v^j g_j = v^1 g_1 + v^2 g_2 + v^3 g_3 = v^j g_j , (4.1.2)

or even for two suffices,

Σ_{i=1}^{3} Σ_{k=1}^{3} g_{ik} u^i v^k = g_{11} u^1 v^1 + g_{12} u^1 v^2 + g_{13} u^1 v^3
                                      + g_{21} u^2 v^1 + g_{22} u^2 v^2 + g_{23} u^2 v^3
                                      + g_{31} u^3 v^1 + g_{32} u^3 v^2 + g_{33} u^3 v^3 = g_{ik} u^i v^k . (4.1.3)

The repeated index of summation is also called the dummy index: changing the index i to j or k or any other symbol does not affect the value of the sum. But it is important to notice that it is not allowed to repeat an index more than twice! Another important thing to note about index notation is the use of the free indices. The free indices in every term and on both sides of an equation must match. For that reason the addition of two vectors could be written in different ways, where a, b and c are vectors in the vector space V with the dimension n, and the a_i, b_i and c_i are their components,

a + b = c ⇔ a_i + b_i = c_i ⇔ { a_1 + b_1 = c_1 ; a_2 + b_2 = c_2 ; ... ; a_n + b_n = c_n } , ∀ a, b, c ∈ V. (4.1.4)

For the special case of Cartesian coordinates another important convention holds. In this case it is allowed to sum repeated subscript or superscript indices; in general for a Cartesian coordinate system the subscript index is preferred,

Σ_{i=1}^{3} x_i e_i = x_i e_i . (4.1.5)

If the indices of two terms are written in brackets, it is forbidden to sum these terms,

v^{(m)} g_{(m)} ≠ Σ_{m=1}^{3} v^m g_m . (4.1.6)

4.1.2 The Kronecker Delta

The Kronecker delta is defined by

δ^{ij} = δ_{ij} = δ^i_j = δ_j^i = { 1 if i = j ; 0 if i ≠ j }. (4.1.7)

An index i, for example in a 3-dimensional space, is substituted with another index j by multiplication with the Kronecker delta,

δ_{ij} v_j = Σ_{j=1}^{3} δ_{ij} v_j = δ_{i1} v_1 + δ_{i2} v_2 + δ_{i3} v_3 = v_i , (4.1.8)

or with a summation over two indices,

δ_{ij} v^i u^j = Σ_{i=1}^{3} Σ_{j=1}^{3} δ_{ij} v^i u^j
             = δ_{11} v^1 u^1 + δ_{12} v^1 u^2 + δ_{13} v^1 u^3
             + δ_{21} v^2 u^1 + δ_{22} v^2 u^2 + δ_{23} v^2 u^3
             + δ_{31} v^3 u^1 + δ_{32} v^3 u^2 + δ_{33} v^3 u^3
             = v^1 u^1 + v^2 u^2 + v^3 u^3 = v^i u_i ⇔ v · u , (4.1.9)

or just for a Kronecker delta with two equal indices,

δ_{jj} = Σ_{j=1}^{3} δ_{jj} = δ_{11} + δ_{22} + δ_{33} = 3 , (4.1.10)

and for the scalar product of two Kronecker deltas,

δ_{ij} δ_{jk} = Σ_{j=1}^{3} δ_{ij} δ_{jk} = δ_{i1} δ_{1k} + δ_{i2} δ_{2k} + δ_{i3} δ_{3k} = δ_{ik} . (4.1.11)

For the special case of Cartesian coordinates the Kronecker delta is identified with the unit or identity matrix,

[δ_{ij}] = [ 1 0 0 ; 0 1 0 ; 0 0 1 ] . (4.1.12)
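The summation convention and the Kronecker delta map directly onto numerical code. The following sketch (an illustration added here, not part of the original script) uses NumPy, whose einsum routine implements exactly this index convention; all concrete numbers are arbitrary.

import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

# a_i b^i, summed over the repeated index i as in eq. (4.1.1)
alpha = np.einsum('i,i->', a, b)      # 1*4 + 2*5 + 3*6 = 32

# the Kronecker delta as the identity matrix, eq. (4.1.12)
delta = np.eye(3)

# delta_ij v_j = v_i, eq. (4.1.8): the delta merely renames the index
v = np.array([7.0, 8.0, 9.0])
assert np.allclose(np.einsum('ij,j->i', delta, v), v)

# delta_jj = 3, eq. (4.1.10), and delta_ij delta_jk = delta_ik, eq. (4.1.11)
assert np.einsum('jj->', delta) == 3.0
assert np.allclose(np.einsum('ij,jk->ik', delta, delta), delta)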


4.1.3 The Covariant Basis and Metric Coefficients

In an n-dimensional affine vector space R^n_aff ↔ E^n = V a vector v is given by

v = v^i g_i , with v, g_i ∈ V , and i = 1, 2, 3. (4.1.13)

The vectors g_i are chosen as linear independent, i.e. they are a basis. If the index i is a subscript index, the definitions

g_i — covariant base vectors, (4.1.14)

and

v^i — contravariant coordinates, (4.1.15)

of v with respect to the g_i hold. The v^1 g_1, v^2 g_2, v^3 g_3 are called the components of v. The scalar product of the base vectors g_i and g_k is defined by

g_i · g_k = g_{ik} = g_k · g_i = g_{ki} , i.e. g_{ik} = g_{ki} , (4.1.16)-(4.1.18)

and these coefficients g_{ik} = g_{ki} are called the covariant metric coefficients. The metric coefficients are symmetric because of the commutativity of the scalar product g_i · g_k = g_k · g_i. The determinant of the matrix of the covariant metric coefficients g_{ik},

g = det [g_{ik}] , (4.1.19)

is nonzero, if and only if the g_i form a basis. For the Cartesian basis the metric coefficients vanish except the ones for i = k, and the coefficient matrix becomes the identity matrix or the Kronecker delta,

e_i · e_k = δ_{ik} = { 1 if i = k ; 0 if i ≠ k }. (4.1.20)

4.1.4 The Contravariant Basis and Metric Coefficients

Assume a new basis reciprocal to the covariant base vectors g_i by introducing the contravariant base vectors g^k in the same space as the covariant base vectors. These contravariant base vectors are defined by

g_i · g^k = δ_i^k = { 1 if i = k ; 0 if i ≠ k }, (4.1.21)

and with the covariant coordinates v_i the vector v is given by

v = v_i g^i , with v, g^i ∈ V , and i = 1, ..., n. (4.1.22)

For example in the 2-dimensional vector space E^2 (see figure 4.1),

[Figure 4.1: Example of co- and contravariant base vectors in E^2.]

g_1 · g^2 = 0 ⇒ g_1 ⊥ g^2 , (4.1.23)
g_2 · g^1 = 0 ⇒ g_2 ⊥ g^1 , (4.1.24)
g_1 · g^1 = 1 , (4.1.25)
g_2 · g^2 = 1 . (4.1.26)

The scalar product of the contravariant base vectors g^i,

g^i · g^k = g^{ik} = g^k · g^i = g^{ki} , i.e. g^{ik} = g^{ki} , (4.1.27)-(4.1.29)


defines the contravariant metric coefficients g^{ik} = g^{ki}.

For the special case of Cartesian coordinates and an orthonormal basis e_i the co- and contravariant base vectors are equal (see figure 4.2). For that reason it is not necessary to differentiate between subscript and superscript indices. From now on Cartesian base vectors and Cartesian coordinates get only subscript indices,

[Figure 4.2: Special case of a Cartesian basis, e_1 = e^1, e_2 = e^2, e_3 = e^3.]

u = u_i e_i , or u = u_j e_j . (4.1.30)

4.1.5 Raising and Lowering of an Index

If the vectors g^i, g_m and g^k are in the same space V, it must be possible to describe g^k by a product of g_m and some coefficient like A^{km},

g^k = A^{km} g_m . (4.1.31)

Both sides of the equation are multiplied with g^i,

g^k · g^i = A^{km} g_m · g^i , (4.1.32)

and with the definition of the Kronecker delta,

g^{ki} = A^{km} δ_m^i , (4.1.33)
g^{ki} = A^{ki} . (4.1.34)

The result is the following relation between co- and contravariant base vectors,

g^k = g^{ki} g_i . (4.1.35)

Lemma 4.1. The covariant base vectors transform with the contravariant metric coefficients into the contravariant base vectors,

g^k = g^{ki} g_i — raising an index with the contravariant metric coefficients.

The same argumentation for the covariant metric coefficients starts with

g_k = A_{km} g^m , (4.1.36)
g_k · g_i = A_{km} δ_i^m , (4.1.37)
g_{ki} = A_{ki} , (4.1.38)

and finally implies

g_k = g_{ki} g^i . (4.1.39)

As a rule of thumb:

Lemma 4.2. The contravariant base vectors transform with the covariant metric coefficients into the covariant base vectors,

g_k = g_{ki} g^i — lowering an index with the covariant metric coefficients.

4.1.6 Relations between Co- and Contravariant Metric Coefficients

Both sides of the transformation formula

g^k = g^{km} g_m (4.1.40)

are multiplied with the vector g_i,

g^k · g_i = g^{km} g_m · g_i . (4.1.41)

Comparing this with the definitions of the Kronecker delta (4.1.7) and of the metric coefficients (4.1.16) and (4.1.27) leads to

δ_i^k = g^{km} g_{mi} . (4.1.42)

Like in the expression A^{-1} A = 1 the co- and contravariant metric coefficients are inverse to each other. In matrix notation equation (4.1.42) denotes

[1] = [g^{km}] [g_{mi}] , (4.1.43)
[g^{km}] = [g_{mi}]^{-1} , (4.1.44)

and for the determinants

det [g^{ik}] = 1 / det [g_{ik}] . (4.1.45)

With the definition of the determinant, equation (4.1.19), the determinant of the covariant metric coefficients gives

det [g_{ik}] = g , (4.1.46)

and the determinant of the contravariant metric coefficients

det [g^{ik}] = 1/g . (4.1.47)
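Sections 4.1.3 to 4.1.6 can be cross-checked numerically: pick an arbitrary (non-orthogonal) covariant basis, build the metric coefficients, and raise the index to obtain the contravariant basis. The basis in this added sketch is an arbitrary assumption, chosen only for illustration.

import numpy as np

# arbitrary linearly independent covariant base vectors g_1, g_2, g_3 (rows)
g_co = np.array([[1.0, 0.0, 0.0],
                 [1.0, 2.0, 0.0],
                 [0.0, 1.0, 3.0]])

# covariant metric coefficients g_ik = g_i . g_k, eq. (4.1.16)
g_lower = g_co @ g_co.T

# contravariant metric coefficients as the inverse matrix, eq. (4.1.44)
g_upper = np.linalg.inv(g_lower)

# raising the index: g^k = g^{ki} g_i, eq. (4.1.35)
g_contra = g_upper @ g_co

# duality g_i . g^k = delta_i^k, eq. (4.1.21)
assert np.allclose(g_co @ g_contra.T, np.eye(3))

# det[g_ik] det[g^ik] = 1, eq. (4.1.45)
assert np.isclose(np.linalg.det(g_lower) * np.linalg.det(g_upper), 1.0)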


4.1.7 Co- and Contravariant Coordinates of a Vector

The vector v ∈ V is represented by the two expressions v = v^i g_i and v = v_k g^k. Comparing these expressions with respect to equation (4.1.39), g_i = g_{ik} g^k, leads to

v^i g_{ik} g^k = v_k g^k ⇒ v_k = g_{ik} v^i . (4.1.48)

After changing the indices of the symmetric covariant metric coefficients, like in equation (4.1.18), the transformation from contravariant to covariant coordinates denotes like this,

v_k = g_{ki} v^i . (4.1.49)

In the same way comparing the contravariant vector g^k = g^{ik} g_i with the equations (4.1.35) and (4.1.18) gives

v^i g_i = v_k g^{ki} g_i ⇒ v^i = g^{ki} v_k , (4.1.50)

and after changing the indices

v^i = g^{ik} v_k . (4.1.51)

Lemma 4.3. The covariant coordinates transform like the covariant base vectors with the contravariant metric coefficients, and vice versa. In index notation the transformation for the contravariant coordinates and the contravariant base vectors looks like this,

v^i = g^{ik} v_k , g^i = g^{ik} g_k , (4.1.52)

and for the covariant coordinates and the covariant base vectors,

v_k = g_{ki} v^i , g_k = g_{ki} g^i . (4.1.53)

4.2 Products of Vectors

4.2.1 The Scalar Product or Inner Product of Vectors

The scalar product of two vectors u and v ∈ V is denoted by

α = <u | v> ≡ u · v , α ∈ R , (4.2.1)

and is also called the inner product or dot product of vectors. The vectors u and v are represented by

u = u^i g_i and v = v^i g_i , (4.2.2)

with respect to the covariant base vectors g_i ∈ V, i = 1, ..., n, or by

u = u_i g^i and v = v_i g^i , (4.2.3)

w.r.t. the contravariant base vectors g^j, j = 1, ..., n. By combining these representations the scalar product of two vectors could be written in four variations,

α = u · v = u^i v^j g_i · g_j = u^i v^j g_{ij} = u^i v_i , (4.2.4)
          = u_i v_j g^i · g^j = u_i v_j g^{ij} = u_i v^i , (4.2.5)
          = u^i v_j g_i · g^j = u^i v_j δ_i^j = u^i v_i , (4.2.6)
          = u_i v^j g^i · g_j = u_i v^j δ^i_j = u_i v^i . (4.2.7)

The Euclidean norm connects elements of the same dimension in a vector space. The absolute values of the vectors u and v are represented by

|u| = ||u||_2 = √(u · u) , (4.2.8)
|v| = ||v||_2 = √(v · v) . (4.2.9)

The scalar product or inner product of two vectors in V is a bilinear mapping from two vectors onto a scalar α ∈ R.
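The four variants (4.2.4)-(4.2.7) of the scalar product can be verified with the basis of the previous sketch (again an added illustration with arbitrary numbers).

import numpy as np

g_co = np.array([[1.0, 0.0, 0.0],
                 [1.0, 2.0, 0.0],
                 [0.0, 1.0, 3.0]])
g_lower = g_co @ g_co.T                     # g_ij
g_upper = np.linalg.inv(g_lower)            # g^ij

u_contra = np.array([1.0, 2.0, 3.0])        # u^i
v_contra = np.array([-1.0, 0.5, 2.0])       # v^i

# lowering the coordinates: u_k = g_ki u^i, eq. (4.1.49)
u_co = g_lower @ u_contra
v_co = g_lower @ v_contra

# the four variants of alpha = u . v, eqs. (4.2.4)-(4.2.7)
a1 = np.einsum('i,j,ij->', u_contra, v_contra, g_lower)   # u^i v^j g_ij
a2 = np.einsum('i,j,ij->', u_co, v_co, g_upper)           # u_i v_j g^ij
a3 = np.einsum('i,i->', u_contra, v_co)                   # u^i v_i
a4 = np.einsum('i,i->', u_co, v_contra)                   # u_i v^i
assert np.allclose([a1, a2, a3], a4)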


Theorem 4.1. In the 3-dimensional Euclidean vector space E^3 one important application of the scalar product is the definition of the work as the force times the distance moved in the direction of the force,

work = force in direction of the distance · distance , or α = f · d . (4.2.10)

Theorem 4.2. The scalar product in the 3-dimensional Euclidean vector space E^3 is written u · v and is defined as the product of the absolute values of the two vectors and the cosine of the angle between them,

α = u · v := |u| |v| cos φ . (4.2.11)

The quantity |v| cos φ represents in the 3-dimensional Euclidean vector space E^3 the projection of the vector v onto the direction of the vector u (see figure 4.3).

[Figure 4.3: Projection of a vector v on the direction of the vector u.]

The unit vector in direction of the vector u is given by

e_u = u / |u| . (4.2.12)

Therefore the cosine of the angle is given by

cos φ = (u · v) / (|u| |v|) . (4.2.13)

The absolute value of a vector is its Euclidean norm and is computed by

|u| = √(u · u) , and |v| = √(v · v) . (4.2.14)

This formula rewritten with the base vectors g_i and g^i simplifies in index notation to

|u| = √(u^i g_i · u_k g^k) = √(u^i u_k δ_i^k) = √(u^i u_i) , (4.2.15)
|v| = √(v^i v_i) . (4.2.16)

The cosine between two vectors in the 3-dimensional Euclidean vector space E^3 is defined by

cos φ = (u^i v_i) / (√(u^j u_j) √(v^k v_k)) = (u_i v^i) / (√(u^j u_j) √(v^k v_k)) . (4.2.17)

For example the scalar product of two vectors w.r.t. the Cartesian basis g_i = e_i = e^i in the 3-dimensional Euclidean vector space E^3,

[6, 3, 7] · [1, 2, 3] = 6·1 + 3·2 + 7·3 = 33 ,

or in index notation with the metric coefficients,

u^i v^j g_{ij} = u^1 v^j g_{1j} + u^2 v^j g_{2j} + u^3 v^j g_{3j}
             = 6·1·1 + 6·2·0 + 6·3·0 + 3·1·0 + 3·2·1 + 3·3·0 + 7·1·0 + 7·2·0 + 7·3·1 = 33 .

4.2.2 Definition of the Cross Product of Base Vectors

The cross product, also called the vector product or the outer product, is only defined in the 3-dimensional Euclidean vector space E^3. The cross product of two arbitrary, linear independent covariant base vectors g_i, g_j ∈ E^3 implies another vector g^k ∈ E^3 and is introduced by

g_i × g_j = α g^k , (4.2.18)

with the conditions

i ≠ j ≠ k , and i, j, k = 1, 2, 3 or another even permutation of i, j, k.

4.2.3 The Permutation Symbol in Cartesian Coordinates

The cross products of the Cartesian base vectors e_i in the 3-dimensional Euclidean vector space E^3 are given by

e_1 × e_2 = e^3 = e_3 , (4.2.19)
e_2 × e_3 = e^1 = e_1 , (4.2.20)
e_3 × e_1 = e^2 = e_2 , (4.2.21)
e_2 × e_1 = −e^3 = −e_3 , (4.2.22)
e_3 × e_2 = −e^1 = −e_1 , (4.2.23)
e_1 × e_3 = −e^2 = −e_2 . (4.2.24)

The Cartesian components of a permutation tensor, or just the permutation symbols, are defined by

e_{ijk} = { +1 if (i, j, k) is an even permutation of (1, 2, 3) ;
           −1 if (i, j, k) is an odd permutation of (1, 2, 3) ;
            0 if two or more indices are equal }. (4.2.25)
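The permutation symbol (4.2.25) is built in a few lines; checking the cross products (4.2.19)-(4.2.24) against it anticipates the compact rule given next. This is an added sketch, not part of the original script.

import numpy as np

# permutation symbol e_ijk, eq. (4.2.25)
e = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    e[i, j, k] = 1.0          # even permutations
    e[i, k, j] = -1.0         # odd permutations

# Cartesian basis e_i as rows of the identity matrix
basis = np.eye(3)

# the components of e_i x e_j are exactly e_ijk, eqs. (4.2.19)-(4.2.24)
for i in range(3):
    for j in range(3):
        lhs = np.cross(basis[i], basis[j])
        rhs = np.einsum('k,kl->l', e[i, j], basis)
        assert np.allclose(lhs, rhs)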


Thus, returning to equations (4.2.19)-(4.2.24), the cross products of the Cartesian base vectors could be described by the permutation symbols like this,

e_i × e_j = e_{ijk} e^k . (4.2.26)

For example

e_1 × e_2 = e_{121} e^1 + e_{122} e^2 + e_{123} e^3 = 0·e^1 + 0·e^2 + 1·e^3 = e^3 ,
e_1 × e_3 = e_{131} e^1 + e_{132} e^2 + e_{133} e^3 = 0·e^1 + (−1)·e^2 + 0·e^3 = −e^2 .

4.2.4 Definition of the Scalar Triple Product of Base Vectors

Starting again with the cross product of base vectors, see equation (4.2.18),

g_i × g_j = α g^k , (4.2.27)

with i ≠ j ≠ k and i, j, k = 1, 2, 3 or another even permutation of i, j, k. The g^k are the contravariant base vectors, and the scalar quantity α is computed by multiplication of equation (4.2.18) with the covariant base vector g_k and summation over the even permutations,

(g_i × g_j) · g_k = α g^k · g_k , (4.2.28)
3 [g_1, g_2, g_3] = α δ_k^k = 3α . (4.2.29)

This result is the so-called scalar triple product of the base vectors,

α = [g_1, g_2, g_3] . (4.2.30)

This scalar triple product α of the base vectors g_i, i = 1, 2, 3, represents the volume of the parallelepiped formed by the three vectors g_i. Comparing equations (4.2.28) and (4.2.29) implies for the contravariant base vectors

g^k = (g_i × g_j) / [g_1, g_2, g_3] , (4.2.31)

and for the covariant base vectors

g_k = (g^i × g^j) / [g^1, g^2, g^3] . (4.2.32)

Furthermore the product of the two scalar triple products of base vectors is given by

[g_1, g_2, g_3] · [g^1, g^2, g^3] = α · (1/α) = 1 . (4.2.33)

4.2.5 Introduction of the Determinant with the Permutation Symbol

The scalar quantity α in the section above could also be described by the square root of the determinant of the covariant metric coefficients,

α = (det g_{ij})^{1/2} = √g . (4.2.34)

The determinant of a 3 × 3 matrix could be represented by the permutation symbols e^{ijk},

α = det [a_{mn}] = | a_{11} a_{12} a_{13} ; a_{21} a_{22} a_{23} ; a_{31} a_{32} a_{33} | = a_{1i} a_{2j} a_{3k} e^{ijk} . (4.2.35)

Computing the determinant by expanding about the first row implies

α = a_{11} | a_{22} a_{23} ; a_{32} a_{33} | − a_{12} | a_{21} a_{23} ; a_{31} a_{33} | + a_{13} | a_{21} a_{22} ; a_{31} a_{32} | ,

and finally

α = a_{11} a_{22} a_{33} − a_{11} a_{32} a_{23}
  − a_{12} a_{21} a_{33} + a_{12} a_{31} a_{23}
  + a_{13} a_{21} a_{32} − a_{13} a_{31} a_{22} . (4.2.36)

The alternative way with the permutation symbol is given by

α = a_{11} a_{22} a_{33} e^{123} + a_{11} a_{23} a_{32} e^{132}
  + a_{12} a_{23} a_{31} e^{231} + a_{12} a_{21} a_{33} e^{213}
  + a_{13} a_{21} a_{32} e^{312} + a_{13} a_{22} a_{31} e^{321} ,

and after inserting the values of the various permutation symbols,

α = a_{11} a_{22} a_{33} − a_{11} a_{23} a_{32}
  + a_{12} a_{23} a_{31} − a_{12} a_{21} a_{33}
  + a_{13} a_{21} a_{32} − a_{13} a_{22} a_{31} , (4.2.37)

the result is equal to the first way of computing the determinant, see equation (4.2.36). Equation (4.2.35) can be written with contravariant elements, too,

α* = a^{1i} a^{2j} a^{3k} e_{ijk} . (4.2.38)
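Formula (4.2.35) for the determinant is easy to verify numerically (added sketch; the test matrix is arbitrary, and e is the symbol built above).

import numpy as np

e = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    e[i, j, k], e[i, k, j] = 1.0, -1.0

A = np.array([[2.0, 1.0, 0.0],
              [0.5, 3.0, 1.0],
              [1.0, 0.0, 4.0]])

# det A = a_1i a_2j a_3k e^ijk, eq. (4.2.35)
det_e = np.einsum('i,j,k,ijk->', A[0], A[1], A[2], e)
assert np.isclose(det_e, np.linalg.det(A))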


The matrix of the covariant metric coefficients is the inverse of the matrix of the contravariant metric coefficients and vice versa,

g_{ij} g^{jk} = δ_i^k , (4.2.39)
det ([g_{ij}] [g^{jk}]) = det [δ_i^k] = 1 . (4.2.40)

The product rule of determinants,

det ([g_{ij}] [g^{jk}]) = det [g_{ij}] · det [g^{jk}] , (4.2.41)

simplifies for this special case to

1 = g · (1/g) , (4.2.42)

and finally

det [g_{ij}] = g , and det [g^{ij}] = 1/g . (4.2.43)

For this reason the determinants of the matrices of the metric coefficients are represented with the permutation symbols, see equations (4.2.35) and (4.2.43), like this:

g = g_{1i} g_{2j} g_{3k} e^{ijk} = det [g_{ij}] , (4.2.44)
1/g = g^{1i} g^{2j} g^{3k} e_{ijk} = det [g^{ij}] . (4.2.45)

4.2.6 Cross Product and Scalar Triple Product of Arbitrary Vectors

The vectors a up to f are written in the 3-dimensional Euclidean vector space E^3 with the base vectors g_i and g^i,

a = a^i g_i ,  d = d_i g^i , (4.2.46)
b = b^i g_i ,  e = e_i g^i , (4.2.47)
c = c^i g_i ,  f = f_i g^i . (4.2.48)

The cross product (4.2.26) rewritten with the formulae for the scalar triple product (4.2.28)-(4.2.30),

a × b = a^i g_i × b^j g_j = a^i b^j e_{ijk} [g_1, g_2, g_3] g^k ,

i.e. in the symbolic determinant notation

a × b = [g_1, g_2, g_3] | a^1 a^2 a^3 ; b^1 b^2 b^3 ; g^1 g^2 g^3 | = [g_1, g_2, g_3] | g^1 g^2 g^3 ; a^1 a^2 a^3 ; b^1 b^2 b^3 | . (4.2.49)

Two scalar triple products are defined by

[a, b, c] = (a × b) · c , (4.2.50)

and

[d, e, f] = (d × e) · f . (4.2.51)

The first one of these scalar triple products is given by

(a × b) · c = [g_1, g_2, g_3] a^i b^j e_{ijk} g^k · c^r g_r = [g_1, g_2, g_3] a^i b^j e_{ijk} δ_r^k c^r
            = [g_1, g_2, g_3] a^i b^j c^k e_{ijk}
            = [g_1, g_2, g_3] | a^1 a^2 a^3 ; b^1 b^2 b^3 ; c^1 c^2 c^3 |
            = α | a^1 b^1 c^1 ; a^2 b^2 c^2 ; a^3 b^3 c^3 | . (4.2.52)

The same formula written with covariant components and contravariant base vectors,

(a × b) · c = [g^1, g^2, g^3] a_i b_j c_k e^{ijk} = (1/α) a_i b_j c_k e^{ijk}
            = [g^1, g^2, g^3] | a_1 b_1 c_1 ; a_2 b_2 c_2 ; a_3 b_3 c_3 | . (4.2.53)

The product

P = [a, b, c] [d, e, f] (4.2.54)

is therefore, with the equations (4.2.52), (4.2.53) and (4.2.46) up to (4.2.48),

P = [g_1, g_2, g_3] [g^1, g^2, g^3] | a^1 a^2 a^3 ; b^1 b^2 b^3 ; c^1 c^2 c^3 | · | d_1 e_1 f_1 ; d_2 e_2 f_2 ; d_3 e_3 f_3 |
  = α (1/α) |A| |B| = |A| |B| . (4.2.55)

The element (1, 1) of the product matrix A B, with respect to the product rule of determinants det A det B = det (A B), is given by

a^1 d_1 + a^2 d_2 + a^3 d_3 = a^i g_i · d_j g^j = a^i d_j δ_i^j = a^i d_i = a · d . (4.2.56)


Comparing this with the product P leads to

P = [a, b, c] [d, e, f] = | a·d a·e a·f ; b·d b·e b·f ; c·d c·e c·f | , (4.2.57)

and for the scalar triple product [a, b, c] to the power two,

[a, b, c]² = | a·a a·b a·c ; b·a b·b b·c ; c·a c·b c·c | . (4.2.58)

4.2.7 The General Components of the Permutation Symbol

The square value of the scalar triple product of the covariant base vectors, like in equation (4.2.58),

[g_1, g_2, g_3]² = | g_1·g_1 g_1·g_2 g_1·g_3 ; g_2·g_1 g_2·g_2 g_2·g_3 ; g_3·g_1 g_3·g_2 g_3·g_3 | = |g_{ij}| = det [g_{ij}] = g , (4.2.59)

reduces to

[g_1, g_2, g_3] = √g . (4.2.60)

The same relation for the scalar triple product of the contravariant base vectors leads to

[g^1, g^2, g^3]² = det [g^{ij}] = 1/g , (4.2.61)
[g^1, g^2, g^3] = 1/√g . (4.2.62)

Equation (4.2.60) could be rewritten analogous to equation (4.2.26),

g_i × g_j = [g_1, g_2, g_3] e_{ijk} g^k = √g e_{ijk} g^k , i.e. g_i × g_j = ε_{ijk} g^k , (4.2.63)

and for the corresponding contravariant base vectors,

g^i × g^j = [g^1, g^2, g^3] e^{ijk} g_k = (1/√g) e^{ijk} g_k , i.e. g^i × g^j = ε^{ijk} g_k . (4.2.64)

For example the general permutation symbol could be given by the covariant ε symbol,

ε_{ijk} = { +√g if (i, j, k) is an even permutation of (1, 2, 3) ;
           −√g if (i, j, k) is an odd permutation of (1, 2, 3) ;
            0 if two or more indices are equal }, (4.2.65)

or by the contravariant ε symbol,

ε^{ijk} = { +1/√g if (i, j, k) is an even permutation of (1, 2, 3) ;
           −1/√g if (i, j, k) is an odd permutation of (1, 2, 3) ;
            0 if two or more indices are equal }. (4.2.66)

4.2.8 Relations between the Permutation Symbols

Comparing equation (4.2.25) with (4.2.65) and (4.2.66) shows the relations

ε_{ijk} = √g e_{ijk} , and e_{ijk} = (1/√g) ε_{ijk} , (4.2.67)

and

ε^{ijk} = (1/√g) e^{ijk} , and e^{ijk} = √g ε^{ijk} . (4.2.68)

The comparison of equations (4.2.44) and (4.2.35) gives

g = |g_{ij}| = g_{1i} g_{2j} g_{3k} e^{ijk} , (4.2.69)
g e_{lmn} = g_{li} g_{mj} g_{nk} e^{ijk} , (4.2.70)
g (1/√g) ε_{lmn} = g_{li} g_{mj} g_{nk} √g ε^{ijk} , (4.2.71)
ε_{lmn} = g_{li} g_{mj} g_{nk} ε^{ijk} , (4.2.72)

and

ε^{lmn} = g^{li} g^{mj} g^{nk} ε_{ijk} . (4.2.73)

The covariant ε symbols are converted into the contravariant ε symbols with the contravariant metric coefficients and vice versa. This transformation is the same as the one for tensors; the conclusion is that the ε symbols are tensors! The relation between the e and ε symbols is written as follows,

e_{ijk} e_{lmn} = (1/g) ε_{ijk} ε_{lmn} , (4.2.74)
e^{ijk} e_{lmn} = ε^{ijk} ε_{lmn} . (4.2.75)

The relation between the permutation symbols and the Kronecker delta is given by

| δ_l^i δ_m^i δ_n^i ; δ_l^j δ_m^j δ_n^j ; δ_l^k δ_m^k δ_n^k | = ε^{ijk} ε_{lmn} = e^{ijk} e_{lmn} . (4.2.76)
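Before expanding (4.2.76) by hand, the identity can be verified numerically over all index combinations (an added sketch, with e built as before).

import numpy as np
from itertools import product

e = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    e[i, j, k], e[i, k, j] = 1.0, -1.0

delta = np.eye(3)

# e^ijk e_lmn equals the determinant of Kronecker deltas, eq. (4.2.76)
for i, j, k, l, m, n in product(range(3), repeat=6):
    D = np.array([[delta[i, l], delta[i, m], delta[i, n]],
                  [delta[j, l], delta[j, m], delta[j, n]],
                  [delta[k, l], delta[k, m], delta[k, n]]])
    assert np.isclose(e[i, j, k] * e[l, m, n], np.linalg.det(D))

# contracting one index pair gives d_il d_jm - d_im d_jl (the expansion used next)
lhs = np.einsum('ijk,lmk->ijlm', e, e)
rhs = np.einsum('il,jm->ijlm', delta, delta) - np.einsum('im,jl->ijlm', delta, delta)
assert np.allclose(lhs, rhs)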


After expanding the determinant and setting k = n,

ε^{ijk} ε_{lmk} = δ_l^i δ_m^j − δ_m^i δ_l^j , (4.2.77)

and with i = l and j = m,

ε^{ijk} ε_{ijn} = 2 δ_n^k , (4.2.78)

and if all three indices are equal,

ε^{ijk} ε_{ijk} = e^{ijk} e_{ijk} = 2 δ_k^k = 6 . (4.2.79)

4.2.9 The Dyadic Product or the Direct Product of Vectors

The dyadic product of two vectors a and b ∈ V defines a so-called simple second order tensor of rank one in the tensor space V ⊗ V* over the vector space V,

T = a ⊗ b , T ∈ V ⊗ V* . (4.2.80)

This tensor describes a linear mapping of the vector v ∈ V with the scalar product by

T · v = (a ⊗ b) · v = a (b · v) . (4.2.81)

The dyadic product a ⊗ b could be represented by a matrix, for example with a, b ∈ R³ and T ∈ R³ ⊗ R³,

T = a b^T = [ a_1 ; a_2 ; a_3 ] [ b_1 b_2 b_3 ] (4.2.82)
  = [ a_1 b_1 a_1 b_2 a_1 b_3 ; a_2 b_1 a_2 b_2 a_2 b_3 ; a_3 b_1 a_3 b_2 a_3 b_3 ] . (4.2.83)

The rank of this mapping is one, i.e. det T = 0 for the 3 × 3 matrix and every 2 × 2 subdeterminant det T_i = 0, i = 1, 2, 3, too. The mapping Tv denotes in matrix notation

[ a_1 b_1 a_1 b_2 a_1 b_3 ; a_2 b_1 a_2 b_2 a_2 b_3 ; a_3 b_1 a_3 b_2 a_3 b_3 ] [ v_1 ; v_2 ; v_3 ] = [ a_1 Σ_i b_i v_i ; a_2 Σ_i b_i v_i ; a_3 Σ_i b_i v_i ] , (4.2.84)

or

(a b^T) v = T v = a (b^T v) . (4.2.85)

The proof that the dyadic product is a tensor starts with the assumption, see equations (T4) and (T5),

(a ⊗ b)(αu + βv) = α (a ⊗ b) · u + β (a ⊗ b) · v . (4.2.86)

Equation (4.2.86) rewritten with the mapping T is given by

T(αu + βv) = α T(u) + β T(v) . (4.2.87)

With the definition of the mapping T it follows that

T(αu + βv) = (a ⊗ b)(αu + βv) = a [b · (αu + βv)] = a [α b·u + β b·v] = α [a (b·u)] + β [a (b·v)] = α T(u) + β T(v) .

The vectors a and b are represented by the base vectors g_i (covariant) and g^j (contravariant),

a = a^i g_i , b = b_j g^j , and g_i , g^j ∈ V . (4.2.88)

The dyadic product, i.e. the mapping T, is then defined by

T = a ⊗ b = a^i b_j g_i ⊗ g^j = T^i_j g_i ⊗ g^j , (4.2.89)

with the dyadic products g_i ⊗ g^j of the base vectors, and with the conditions

det [T^i_j] = 0 , and rank [T^i_j] = 1 . (4.2.90)

The mapping T maps the vector v = v^k g_k onto the vector w,

w = T · v = (T^i_j g_i ⊗ g^j) · (v^k g_k) = T^i_j v^k (g^j · g_k) g_i = T^i_j v^k δ^j_k g_i ,
w = T^i_j v^j g_i = w^i g_i . (4.2.91)
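The matrix picture (4.2.82)-(4.2.85) of a dyad — rank one, zero determinant, and the mapping v ↦ a (b · v) — in NumPy (an added sketch with arbitrary vectors).

import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 0.0, -1.0])
v = np.array([0.5, 1.0, 2.0])

T = np.outer(a, b)                      # T = a b^T, eq. (4.2.82)

# rank = 1 and hence det T = 0, eq. (4.2.90)
assert np.linalg.matrix_rank(T) == 1
assert np.isclose(np.linalg.det(T), 0.0)

# (a (x) b) . v = a (b . v), eq. (4.2.81)
assert np.allclose(T @ v, a * (b @ v))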


4.3 Tensors

4.3.1 Introduction of a Second Order Tensor

With the definition of linear mappings f in chapter (2.8), the definition of vector spaces of linear mappings L over the vector space V, and the definition of dyads it is possible to define the second order tensor. The original definition of a tensor was the description of a stress state at a point and time in a continuum, e.g. a fluid or a solid, given by Cauchy. The stress tensor, or Cauchy stress tensor, T at a point P assigns a stress vector σ(P) to an arbitrarily oriented section, given by a normal vector at the point P (see figure 4.4).

[Figure 4.4: Resulting stress vector t^{(n)}(P) at a section with normal vector n.]

The resulting stress vector t^{(n)}(P) at an arbitrarily oriented section, described by a normal vector n at a point P, in a rigid body loaded by an equilibrium system of external forces, could have an arbitrary direction! Because the equilibrium conditions hold only for forces and not for stresses, an equilibrium system of forces is established at an infinitesimal tetrahedron, which has four infinitesimal section surfaces (see figure 4.5). If the section surface is rotated, then the element of reference (the vector of direction) is transformed, and the direction of stress is transformed, too. Comparing this with the transformation of stresses yields products of cosines, which lead to quantities with two indices. The stress state at a point could not be described by one or two vectors, but by a combination of three vectors t^{(1)}, t^{(2)}, and t^{(3)}. The stress tensor T for the equilibrium conditions at an infinitesimal tetrahedron, given by the three stress vectors t^{(1)}, t^{(2)}, and t^{(3)}, assigns to every direction a unique resulting stress vector t^{(n)}.

[Figure 4.5: Equilibrium of the stress vectors dF^{(i)} = t^{(i)} dA^{(i)} at an infinitesimal tetrahedron.]

Remarks:

• The scalar product F · n = F_n, |n| = 1, projects F cos φ onto the direction of n; the result is a scalar quantity.
• The cross product r × F = M_A establishes a vector of momentum at a point A in the normal direction of the plane spanned by r_A and F, i.e. perpendicular to F, too.
• The dyadic product a ⊗ b = T assigns a second order tensor T to a pair of vectors a and b.
• The spaces R³ and E³ are homeomorphic, i.e. for all vectors x ∈ R³ and v ∈ E³ the same rules and axioms hold. For this reason it is sufficient to have a look at the vector space R³.
• Also the spaces R^n, E^n and V are homeomorphic, but for n ≠ 3 the usual cross product will not hold. For this reason the following definitions are made for the general vector space V, but most of the examples are given in the 3-dimensional Euclidean vector space E³. In this space the cross product holds, and this space is the visual space.

4.3.2 The Definition of a Second Order Tensor

A linear mapping f = T of an (Euclidean) vector space V into itself or into its dual space V* is called a second order tensor. The action of a linear mapping T on a vector v is written like a "dot" product or multiplication, and in most cases the "dot" is not written any more,

T · v = Tv . (4.3.1)

The definitions and rules for linear spaces in chapter (2.4), i.e. the axioms of vector space (S1) up to (S8), are rewritten for tensors T ∈ V ⊗ V.


Linearity for the vectors:

1. Axiom of second order tensors. The tensor (the linear mapping) T ∈ V ⊗ V maps the vector u ∈ V onto the same space V,

T(u) = T · u = Tu = v ; ∀ u ∈ V ; v ∈ V. (T1)

This mapping is the same as the mapping of a vector with a quadratic matrix in a space with Cartesian coordinates.

2. Axiom of second order tensors. The action of the zero tensor 0 on any vector u maps the vector onto the zero vector,

0 · u = 0u = 0 ; u, 0 ∈ V. (T2)

3. Axiom of second order tensors. The unit tensor 1 sends any vector into itself,

1 · u = 1u = u ; u ∈ V , 1 ∈ V ⊗ V. (T3)

4. Axiom of second order tensors. The multiplication by a tensor is distributive with respect to vector addition,

T(u + v) = Tu + Tv ; ∀ u, v ∈ V. (T4)

5. Axiom of second order tensors. If the vector u is multiplied by a scalar, then the linear mapping is denoted by

T(αu) = α Tu ; ∀ u ∈ V , α ∈ R. (T5)

Linearity for the tensors:

6. Axiom of second order tensors. The multiplication with the sum of tensors of the same space is distributive,

(T_1 + T_2) · u = T_1 · u + T_2 · u ; ∀ u ∈ V , T_1, T_2 ∈ V ⊗ V. (T6)

7. Axiom of second order tensors. The multiplication of a tensor by a scalar is linear, like the multiplication of a vector by a scalar in equation (T5),

(αT) · u = T · (αu) ; ∀ u ∈ V , α ∈ R. (T7)

8. Axiom of second order tensors. The action of tensors on a vector is associative,

T_1 (T_2 · u) = (T_1 T_2) · u = T · u , (T8)

but like in matrix calculus not commutative, i.e. T_1 T_2 ≠ T_2 T_1. The "product" T_1 T_2 of the tensors is also called a "composition" of the linear mappings T_1, T_2.

9. Axiom of second order tensors. The inverse of a tensor T^{-1} is defined by

v = T · u ⇔ u = T^{-1} v , (T9)

and it exists, if and only if T is nonsingular, i.e. det T ≠ 0.

10. Axiom of second order tensors. The transpose of the transpose is the tensor itself,

(T^T)^T = T. (T10)

4.3.3 The Complete Second Order Tensor

The complete second order tensor T ∈ V ⊗ V is defined as a linear combination of n dyads (simple second order tensors of rank one), and its rank is n,

T = Σ_{i=1}^{n} T_i = Σ_{i=1}^{n} a_i ⊗ b^i , (4.3.2)

written with the summation convention

T = a^i ⊗ b_i = a_i ⊗ b^i , (4.3.3)

and det T ≠ 0, if the vectors a_i and b^i are linearly independent. If the vectors a^i and b_i are represented with the base vectors g_j and g^l ∈ V like this,

a^i = a^{ij} g_j ; b_i = b_{il} g^l , (4.3.4)

then the second order tensor is given by

T = a^{ij} b_{il} g_j ⊗ g^l , (4.3.5)

and finally the complete second order tensor is given by

T = T^j_l g_j ⊗ g^l — the mixed formulation of a second order tensor. (4.3.6)

The dyadic product of the base vectors includes one co- and one contravariant base vector. The mixed components T^j_l of the tensor in mixed formulation are written with one co- and one contravariant index, too, and

det [T^j_l] ≠ 0 . (4.3.7)

If the contravariant base vector is transformed with the metric coefficients,

g^l = g^{lk} g_k , (4.3.8)

the tensor T changes like

T = T^j_l g_j ⊗ (g^{lk} g_k) , (4.3.9)
T = T^j_l g^{lk} g_j ⊗ g_k , (4.3.10)

and the result is the tensor with covariant base vectors and contravariant coordinates,

T = T^{jk} g_j ⊗ g_k . (4.3.11)

The transformation of a covariant base vector into a contravariant base vector,

g_j = g_{jk} g^k , (4.3.12)


implies

T = T^j_l g_{jk} g^k ⊗ g^l , (4.3.13)

the tensor with contravariant base vectors and covariant coordinates,

T = T_{kl} g^k ⊗ g^l . (4.3.14)

The action of the tensor T ∈ V ⊗ V on the vector

v = v^k g_k ∈ V (4.3.15)

creates the vector w ∈ V, and this one is computed by

w = T · v = T^{ij} (g_i ⊗ g_j) · (v^k g_k) = T^{ij} v^k g_{jk} g_i = T^{ij} v_j g_i , (4.3.16)
w = w^i g_i , and w^i = T^{ij} v_j . (4.3.17)

In the same way the other representation of the vector w is given by

w = T · v = T_i^j (g^i ⊗ g_j) · (v_k g^k) = T_i^j v_k δ_j^k g^i = T_i^j v_j g^i , (4.3.18)
w = w_i g^i , and w_i = T_i^j v_j . (4.3.19)
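The component bookkeeping of equations (4.3.9)-(4.3.19) — raising and lowering tensor indices with the metric, and applying T to a vector — can be checked with a short NumPy sketch (an added illustration; basis and components are arbitrary assumptions).

import numpy as np

g_co = np.array([[1.0, 0.0, 0.0],
                 [1.0, 2.0, 0.0],
                 [0.0, 1.0, 3.0]])
g_lower = g_co @ g_co.T                  # g_jk
g_upper = np.linalg.inv(g_lower)         # g^jk

T_mixed = np.array([[1.0, 2.0, 0.0],     # mixed components T^j_l (arbitrary)
                    [0.0, 3.0, 1.0],
                    [2.0, 0.0, 1.0]])

T_upper = T_mixed @ g_upper              # T^{jk} = T^j_l g^{lk}, eqs. (4.3.9)-(4.3.11)
T_lower = g_lower @ T_mixed              # T_{kl} = g_{kj} T^j_l, eqs. (4.3.12)-(4.3.14)

# round trips recover the mixed components
assert np.allclose(T_upper @ g_lower, T_mixed)
assert np.allclose(g_upper @ T_lower, T_mixed)

# w^i = T^{ij} v_j, eq. (4.3.17), with v_j = g_{jk} v^k
v_contra = np.array([1.0, -2.0, 0.5])
w_contra = T_upper @ (g_lower @ v_contra)
assert np.allclose(w_contra, T_mixed @ v_contra)   # the same mapping in mixed components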
The complete tensor A in the mixed formulation is then de£ned by
4.4 Transformations and Products of Tensors

4.4.1 The Transformation of Base Vectors

A vector v ∈ V is given by the covariant basis g_i, i = 1, ..., n, and afterwards in another covariant basis ḡ_i, i = 1, ..., n. For example this case describes the situation of a solid body with different configurations of deformation and a tangent basis, which moves along the coordinate curves. The same definitions are made with a contravariant basis g^i and a transformed contravariant basis ḡ^i. Then the representations of the vector v are

v = v^i g_i = v_i g^i = v̄^i ḡ_i = v̄_i ḡ^i . (4.4.1)

The relation between the two covariant base vectors g_i and ḡ_i could be written with a second order tensor like

ḡ_i = A · g_i . (4.4.2)

If this linear mapping exists, then the coefficients of the transformation tensor A are given by

ḡ_i = 1 ḡ_i = (g_k ⊗ g^k) ḡ_i = (g^k · ḡ_i) g_k = A^k_i g_k , (4.4.3)
ḡ_i = A^k_i g_k , and A^k_i = g^k · ḡ_i . (4.4.4)

The complete tensor A in the mixed formulation is then defined by

A = (g^k · ḡ_i) g_k ⊗ g^i = A^k_i g_k ⊗ g^i . (4.4.5)

Inserting equation (4.4.5) in (4.4.2) yields the transformation (4.4.4) again,

ḡ_m = A^k_i (g_k ⊗ g^i) ḡ_m = A^k_i δ^i_m g_k = A^k_m g_k . (4.4.6)

If the inverse transformation of equation (4.4.2) exists, then it should be denoted by

g_i = Ā ḡ_i . (4.4.7)

This existence results from the linear independence of the base vectors. The "retransformation" tensor Ā is again defined by the multiplication with the unit tensor 1,

g_i = 1 g_i = (ḡ_k ⊗ ḡ^k) g_i = (ḡ^k · g_i) ḡ_k = Ā^k_i ḡ_k , (4.4.8)
g_i = Ā^k_i ḡ_k , and Ā^k_i = ḡ^k · g_i . (4.4.9)

The transformation tensor Ā in the mixed representation is given by

Ā = (ḡ^k · g_i) ḡ_k ⊗ ḡ^i = Ā^k_i ḡ_k ⊗ ḡ^i . (4.4.10)


Inserting equation (4.4.10) in (4.4.7) implies again the transformation relation (4.4.9),

g_m = Ā^k_i (ḡ_k ⊗ ḡ^i) g_m = Ā^k_i δ^i_m ḡ_k = Ā^k_m ḡ_k . (4.4.11)

The tensor Ā is the inverse of A and vice versa. This is a result of equation (4.4.7): multiplying

g_i = Ā · ḡ_i (4.4.12)

from the left by Ā^{-1} gives

Ā^{-1} · g_i = ḡ_i . (4.4.13)

Comparing this with equation (4.4.2) implies

A = Ā^{-1} ⇒ Ā · A = 1 , (4.4.14)

and in the same way

Ā = A^{-1} ⇒ A · Ā = 1 . (4.4.15)

In index notation with equations (4.4.4) and (4.4.9) the relation between the "normal" and the "overlined" coefficients of the transformation tensors is given by

ḡ_i = A^k_i g_k = A^k_i Ā^m_k ḡ_m | · ḡ^j , (4.4.16)
δ_i^j = A^k_i Ā^m_k δ_m^j , i.e. δ_i^j = Ā^j_k A^k_i . (4.4.17)

The transformation of the contravariant basis works in the same way. If in equations (4.4.3) or (4.4.8) the metric tensor of covariant coefficients is used instead of the identity tensor, then another representation of the transformation tensor is described by

ḡ_m = 1 ḡ_m = g_{ik} (g^i ⊗ g^k) ḡ_m = g_{ik} A^k_m g^i , (4.4.18)
ḡ_m = A_{im} g^i , and A_{im} = g_{ik} A^k_m . (4.4.19)

If the transformed covariant base vectors ḡ_m should be represented by the contravariant base vectors g^i, then the complete tensor of transformation is given by

A = (g_i · ḡ_k) g^i ⊗ g^k = A_{ik} g^i ⊗ g^k . (4.4.20)

The inverse transformation tensor Ā is given by an equation developed in the same way as equations (4.4.16) and (4.4.8). This inverse tensor is denoted and defined by

Ā = A^{-1} = (ḡ_i · g_k) ḡ^i ⊗ ḡ^k = Ā_{ik} ḡ^i ⊗ ḡ^k . (4.4.21)

4.4.2 Collection of Transformations of Basis

There is a large number of transformation relations between the co- and contravariant bases of both systems of coordinates. The transformation from the "normal" basis to the "overlined" basis, like g_i → ḡ_i and g^k → ḡ^k, is given by the following equations. First the relation between the covariant base vectors in both systems of coordinates is defined by

ḡ_i = 1 ḡ_i = (g_k ⊗ g^k) ḡ_i = (g^k · ḡ_i) g_k = A^k_i g_k . (4.4.22)

With this relationship the transformed (overlined) covariant base vectors are represented by the covariant base vectors,

ḡ_i = A g_i , with A = (g^k · ḡ_m) g_k ⊗ g^m = A^k_m g_k ⊗ g^m , (4.4.23)
ḡ_i = (g^k · ḡ_i) g_k = A^k_i g_k , (4.4.24)

and the transformed (overlined) covariant base vectors are represented by the contravariant base vectors,

ḡ_i = A g_i , with A = (g_k · ḡ_m) g^k ⊗ g^m = A_{km} g^k ⊗ g^m , (4.4.25)
ḡ_i = (g_k · ḡ_i) g^k = A_{ki} g^k . (4.4.26)

The relation between the contravariant base vectors in both systems of coordinates is defined by

ḡ^i = 1 ḡ^i = (g^k ⊗ g_k) ḡ^i = (g_k · ḡ^i) g^k = B_k^i g^k . (4.4.27)

With this relationship the transformed (overlined) contravariant base vectors are represented by the contravariant base vectors,

ḡ^i = B g^i , with B = (g_k · ḡ^m) g^k ⊗ g_m = B_k^m g^k ⊗ g_m , (4.4.28)
ḡ^i = (g_k · ḡ^i) g^k = B_k^i g^k , (4.4.29)

and the transformed (overlined) contravariant base vectors are represented by the covariant base vectors,

ḡ^i = B g^i , with B = (g^k · ḡ^m) g_k ⊗ g_m = B^{km} g_k ⊗ g_m , (4.4.30)
ḡ^i = (g^k · ḡ^i) g_k = B^{ki} g_k . (4.4.31)

The inverse relations g_i ← ḡ_i and g^k ← ḡ^k, representing the "retransformations" from the transformed (overlined) to the "normal" system of coordinates, are given by the following equations. The inverse transformation between the covariant base vectors of both systems of coordinates is denoted and defined by

g_i = 1 g_i = (ḡ_k ⊗ ḡ^k) · g_i = (ḡ^k · g_i) ḡ_k = Ā^k_i ḡ_k . (4.4.32)


With this relationship the covariant base vectors are represented by the transformed (overlined) covariant base vectors,

g_i = Ā ḡ_i , with Ā = (ḡ^k · g_m) ḡ_k ⊗ ḡ^m = Ā^k_m ḡ_k ⊗ ḡ^m , (4.4.33)
g_i = (ḡ^k · g_i) ḡ_k = Ā^k_i ḡ_k , (4.4.34)

and the covariant base vectors are represented by the transformed (overlined) contravariant base vectors,

g_i = Ā ḡ_i , with Ā = (ḡ_m · g_k) ḡ^m ⊗ ḡ^k = Ā_{mk} ḡ^m ⊗ ḡ^k , (4.4.35)
g_i = (ḡ_k · g_i) ḡ^k = Ā_{ki} ḡ^k . (4.4.36)

The inverse relation between the contravariant base vectors in both systems of coordinates is defined by

g^i = 1 g^i = (ḡ^k ⊗ ḡ_k) g^i = (ḡ_k · g^i) ḡ^k = B̄_k^i ḡ^k . (4.4.37)

With this relationship the contravariant base vectors are represented by the transformed (overlined) contravariant base vectors,

g^i = B̄ ḡ^i , with B̄ = (ḡ_k · g^m) ḡ^k ⊗ ḡ_m = B̄_k^m ḡ^k ⊗ ḡ_m , (4.4.38)
g^i = (ḡ_k · g^i) ḡ^k = B̄_k^i ḡ^k , (4.4.39)

and the contravariant base vectors are represented by the transformed (overlined) covariant base vectors,

g^i = B̄ ḡ^i , with B̄ = (ḡ^k · g^m) ḡ_k ⊗ ḡ_m = B̄^{km} ḡ_k ⊗ ḡ_m , (4.4.40)
g^i = (ḡ^k · g^i) ḡ_k = B̄^{ki} ḡ_k . (4.4.41)

There exist the following relations between the transformation tensors A and Ā,

A Ā = 1 , or Ā A = 1 , (4.4.42)
A^m_i Ā^k_m = δ_i^k , i.e. Ā^m_i A^k_m = δ_i^k , (4.4.43)

and for the transformation tensors B and B̄,

B B̄ = 1 , or B̄ B = 1 , (4.4.44)
B_m^i B̄_k^m = δ_k^i , i.e. B̄_m^i B_k^m = δ_k^i . (4.4.45)

Furthermore there exists a relation between the transformation tensor A and the transformation tensor B,

A^m_i B_m^k = δ_i^k , i.e. B_m^k = Ā^k_m , (4.4.46)

and a relation between the retransformation tensor B̄ and the transformation tensor A,

B̄_m^k = A^k_m . (4.4.47)
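The transformation coefficients collected above can be produced for a concrete pair of bases. In this added sketch the "overlined" basis is an arbitrary invertible set of vectors; the checks mirror equations (4.4.4), (4.4.9) and (4.4.42).

import numpy as np

g_co = np.array([[1.0, 0.0, 0.0],
                 [1.0, 2.0, 0.0],
                 [0.0, 1.0, 3.0]])
g_contra = np.linalg.inv(g_co @ g_co.T) @ g_co

gbar_co = np.array([[2.0, 1.0, 0.0],     # arbitrary transformed (overlined) basis
                    [0.0, 1.0, 1.0],
                    [1.0, 0.0, 2.0]])
gbar_contra = np.linalg.inv(gbar_co @ gbar_co.T) @ gbar_co

A = g_contra @ gbar_co.T                 # A^k_i = g^k . gbar_i, eq. (4.4.4)
Abar = gbar_contra @ g_co.T              # Abar^k_i = gbar^k . g_i, eq. (4.4.9)

# gbar_i = A^k_i g_k, and the retransformation is its inverse, eq. (4.4.42)
assert np.allclose(A.T @ g_co, gbar_co)
assert np.allclose(A @ Abar, np.eye(3))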


4.4.3 The Tensor Product of Second Order Tensors

The vector v is defined by the action of the linear mapping given by T on the vector u,

v = T · u = Tu , with u, v ∈ V , and T ∈ V ⊗ V* . (4.4.48)

In index notation with the covariant base vectors g_i ∈ V equation (4.4.48) denotes like that,

v = (T^{mk} g_m ⊗ g_k)(u^r g_r) = T^{mk} u^r (g_k · g_r) g_m = T^{mk} u^r g_{kr} g_m ,

and with lowering an index, see (4.1.39),

v = T^{mk} u_k g_m = v^m g_m . (4.4.49)

Furthermore with the linear mapping

w = Sv , with w ∈ V , and S ∈ V ⊗ V* , (4.4.50)

and the linear mapping (4.4.48) the associative law for the linear mappings holds,

w = Sv = S (T · u) = (ST) u . (4.4.51)

The second linear mapping w = Sv in the mixed formulation with the contravariant base vectors g^j ∈ V is given by

S = S^i_j g_i ⊗ g^j . (4.4.52)

Then the vector w in index notation with the results of the equations (4.4.49), (4.4.51) and (4.4.52) is rewritten as

w = S (Tu) = (S^i_j g_i ⊗ g^j)(T^{mk} u_k g_m) = S^i_j T^{mk} u_k δ^j_m g_i = S^i_j T^{jk} u_k g_i = w^i g_i ,

and the coefficients of the vector are given by

w^i = S^i_j T^{jk} u_k . (4.4.53)

For the second order tensor product ST there exist in general four representations with all possible combinations of base vectors,

S · T = S^i_m T^{mk} g_i ⊗ g_k — covariant basis, (4.4.54)
S · T = S_{im} T^m_k g^i ⊗ g^k — contravariant basis, (4.4.55)
S · T = S^{im} T_{mk} g_i ⊗ g^k — mixed basis, (4.4.56)

and

S · T = S_{im} T^{mk} g^i ⊗ g_k — mixed basis. (4.4.57)

Lemma 4.4. The result of the tensor product of two dyads is the scalar product of the inner vectors of the dyads times the dyadic product of the outer vectors of the dyads. The tensor product of two dyads of vectors is denoted by

(a ⊗ b)(c ⊗ d) = (b · c) a ⊗ d . (4.4.58)

With this rule the index notation of equations (4.4.53) up to (4.4.57) is easily computed; for example equation (4.4.53) or (4.4.54) implies

ST = (S^i_m g_i ⊗ g^m)(T^{nk} g_n ⊗ g_k) = S^i_m T^{nk} δ^m_n g_i ⊗ g_k ,

and finally

ST = S^i_m T^{mk} g_i ⊗ g_k . (4.4.59)

The "multiplication" or composition of two linear mappings S and T is called a tensor product,

P = S · T = ST (tensor · tensor = tensor). (4.4.60)

The linear mappings w = Sv and v = Tu with the vectors u, v and w ∈ V are composed like

w = Sv = S (Tu) = (ST) u = STu = Pu . (4.4.61)

This "multiplication" is, like in matrix calculus but unlike in "normal" algebra (where a · b = b · a), noncommutative, i.e.

ST ≠ TS . (4.4.62)

For the three second order tensors R, S, T ∈ V ⊗ V* and the scalar quantity α ∈ R the following identities for tensor products hold.

Multiplication by a scalar quantity:

α (ST) = (αS) T = S (αT) (4.4.63)

Multiplication by the identity tensor:

1T = T1 = T (4.4.64)

Existence of a zero tensor:

0T = T0 = 0 (4.4.65)

Associative law for the tensor product:

(RS) T = R (ST) (4.4.66)
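Lemma 4.4 and the noncommutativity (4.4.62) can be spot-checked with outer products (an added sketch; the vectors are arbitrary).

import numpy as np

a, b = np.array([1.0, 0.0, 2.0]), np.array([0.0, 3.0, 1.0])
c, d = np.array([2.0, 1.0, 1.0]), np.array([1.0, -1.0, 0.0])

# (a (x) b)(c (x) d) = (b . c) a (x) d, eq. (4.4.58)
lhs = np.outer(a, b) @ np.outer(c, d)
rhs = (b @ c) * np.outer(a, d)
assert np.allclose(lhs, rhs)

# the composition of mappings is in general noncommutative, eq. (4.4.62)
S, T = np.outer(a, b), np.outer(c, d)
assert not np.allclose(S @ T, T @ S)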


Distributive law for the tensor product:

(R + S) T = RT + ST (4.4.67)

In general NO commutative law:

ST ≠ TS (4.4.68)

Transpose of a tensor product:

(ST)^T = T^T S^T (4.4.69)

Inverse of a tensor product:

(ST)^{-1} = T^{-1} S^{-1} , if S and T are nonsingular. (4.4.70)

Determinant of a tensor product:

det (ST) = det S det T (4.4.71)

Trace of a tensor product:

tr (ST) = tr (TS) (4.4.72)

Proof of equation (4.4.63), α (ST) = (αS) T = S (αT): with the assumption

(αS) v = α (Sv) , with v ∈ V , (4.4.73)

it follows that

[α (ST)] v = α [S (Tv)] = (αS)(Tv) = [(αS) T] v ,

and finally

α (ST) = (αS) T . (4.4.74)

Proof of equation (4.4.64), 1T = T1 = T with 1 ∈ V ⊗ V: with the assumption of equation (4.4.61),

S (Tv) = (ST) v , (4.4.75)

and the identity

1v = v , (4.4.76)

this implies

(1T) v = 1 (Tv) = Tv , and (T1) v = T (1v) = Tv ,

and finally

1T = T1 = T . (4.4.77)

Proof of equation (4.4.66), (RS) T = R (ST) with R, S, T ∈ V ⊗ V: with the assumption of equation (4.4.61), inserted into equation (4.4.66),

[(RS) T] v = (RS)(Tv) = (RS) w , with v, w ∈ V , (4.4.78)

with equation (4.4.61) again,

(RS) w = R (Sw) , and w = Tv ,

hence, abbreviating L = ST,

[(RS) T] v = R [S (Tv)] = R [(ST) v] = R (Lv) = (RL) v ,

and finally

(RS) T = R (ST) . (4.4.79)

Proof of equation (4.4.67), (R + S) T = RT + ST: with the well known condition for a linear mapping,

(R + S) v = Rv + Sv , (4.4.80)

with this and equation (4.4.61),

[(R + S) T] v = (R + S)(Tv) = R (Tv) + S (Tv) = (RT) v + (ST) v , (4.4.81)

and finally

(R + S) T = RT + ST . (4.4.82)

Proof of equation (4.4.69), (ST)^T = T^T S^T: with the definition

(S^T)^T = S , (4.4.83)

which implies

((ST)^T)^T = ST , (4.4.84)

this equation only holds, if

((ST)^T)^T = (T^T S^T)^T = ST . (4.4.85)

Proof of equation (4.4.70), (ST)^{-1} = T^{-1} S^{-1}: if the inverses T^{-1} and S^{-1} exist, then

(ST)(ST)^{-1} = 1 . (4.4.86)


With equations (4.4.64) and (4.4.66),

S^{-1} [(ST)(ST)^{-1}] = S^{-1} 1 = S^{-1} , (4.4.87)

and equation (4.4.61) implies

S^{-1} [(ST)(ST)^{-1}] = [S^{-1} (ST)] (ST)^{-1} , (4.4.88)

and

S^{-1} (ST) = (S^{-1} S) T = T , (4.4.89)

with equation (4.4.89) inserted in (4.4.88) and comparing with equation (4.4.87),

T (ST)^{-1} = S^{-1} , (4.4.90)
T^{-1} [T (ST)^{-1}] = T^{-1} S^{-1} , (4.4.91)

and with equations (4.4.61) and (4.4.90),

T^{-1} [T (ST)^{-1}] = (T^{-1} T)(ST)^{-1} = 1 (ST)^{-1} , (4.4.92)

and finally, comparing this with equation (4.4.91),

(ST)^{-1} = T^{-1} S^{-1} . (4.4.93)

4.4.4 The Scalar Product or Inner Product of Tensors

The scalar product of tensors is defined by

T : (v ⊗ w) = v T w , with v, w ∈ V , and T ∈ V ⊗ V* . (4.4.94)

For the three second order tensors R, S, T ∈ V ⊗ V* and the scalar quantity α ∈ R the following identities for scalar products of tensors hold.

Commutative law for the scalar product of tensors:

S : T = T : S (4.4.95)

Distributive law for the scalar product of tensors:

T : (R + S) = T : R + T : S (4.4.96)

Multiplication by a scalar quantity:

(αT) : S = T : (αS) = α (T : S) (4.4.97)

Existence of an additive identity:

T : S = 0 , and if T is arbitrary, then S = 0. (4.4.98)

Existence of a positive definite tensor:

T : T = tr (T T^T) { > 0 if T ≠ 0 ; = 0 iff T = 0 }, i.e. T : T is positive definite. (4.4.99)

Absolute value or norm of a tensor:

|T| = √(tr (T T^T)) (4.4.100)

The Schwarz inequality:

|TS| ≤ |T| |S| (4.4.101)

For the norms of tensors, like for the norms of vectors, the following identities hold,

|αT| = |α| |T| , (4.4.102)
|T + S| ≤ |T| + |S| . (4.4.103)

And as a rule of thumb:

Lemma 4.5. The result of the scalar product of two dyads is the scalar product of the first vectors of each dyad times the scalar product of the second vectors of each dyad. The scalar product of two dyads of vectors is denoted by

(a ⊗ b) : (c ⊗ d) = (a · c)(b · d) . (4.4.104)

With this rule the index notation of the scalar product of two tensors is easily computed, for example

S : T = (S^{im} g_i ⊗ g_m) : (T_{nk} g^n ⊗ g^k) = S^{im} T_{nk} δ_i^n δ_m^k ,

and finally

S : T = S^{nm} T_{nm} . (4.4.105)

For the other combinations of base vectors the results are

S : T = S_{nm} T^{nm} , (4.4.106)
S : T = S_n^m T^n_m , (4.4.107)
S : T = S^n_m T_n^m . (4.4.108)
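The inner product S : T and the induced norm (4.4.100) reduce in Cartesian components to familiar matrix operations; an arbitrary pair of matrices suffices for an added check.

import numpy as np

rng = np.random.default_rng(0)
S = rng.standard_normal((3, 3))
T = rng.standard_normal((3, 3))

# in Cartesian components S : T = S_ij T_ij = tr(S T^T), cf. eqs. (4.4.104)-(4.4.108)
inner = np.einsum('ij,ij->', S, T)
assert np.isclose(inner, np.trace(S @ T.T))

# norm |T| = sqrt(tr(T T^T)), eq. (4.4.100)
norm = lambda X: np.sqrt(np.trace(X @ X.T))

# the Schwarz inequality |TS| <= |T||S|, eq. (4.4.101)
assert norm(T @ S) <= norm(T) * norm(S) + 1e-12

# triangle inequality, eq. (4.4.103)
assert norm(T + S) <= norm(T) + norm(S) + 1e-12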


4.5 Special Tensors and Operators

4.5.1 The Determinant of a Tensor in Cartesian Coordinates

It is not absolutely correct to speak about the determinant of a tensor, because it is only the determinant of the coefficients of the tensor in Cartesian coordinates, and not of the whole tensor itself. (To compute the determinant of a second order tensor in general coordinates is much more complicated and not part of this script; for details see for example DE BOER [3].) For the different notations of a tensor with covariant, contravariant and mixed coefficients the determinant is given by

det T = det [T^{ij}] = det [T_{ij}] = det [T^i_j] = det [T_i^j] . (4.5.1)

Expanding the determinant of the coefficient matrix of a tensor T works just the same as for any other matrix. For example the determinant could be described with the permutation symbol ε, like in equation (4.2.35),

det T = det [T_{mn}] = | T_{11} T_{12} T_{13} ; T_{21} T_{22} T_{23} ; T_{31} T_{32} T_{33} | = T_{1i} T_{2j} T_{3k} ε^{ijk} . (4.5.2)

Some important identities are given without proof:

det (αT) = α³ det T , (4.5.3)
det (TS) = det T det S , (4.5.4)
det T^T = det T , (4.5.5)
(det Q)² = 1 , if Q is an orthogonal tensor, (4.5.6)
det T^{-1} = (det T)^{-1} , if T^{-1} exists. (4.5.7)

4.5.2 The Trace of a Tensor

The inner product of a tensor T with the identity tensor 1 is called the trace of the tensor,

tr T = 1 : T = T : 1 . (4.5.8)

The same statement written in index notation,

(g^k ⊗ g_k) : (T^i_j g_i ⊗ g^j) = T^i_j δ^k_i δ^j_k = T^k_k , (4.5.9)

and in this way it is easy to see that the result is a scalar. For the dyadic product of two vectors the trace is given by the scalar product of the two involved vectors,

tr (a ⊗ b) = 1 : (a ⊗ b) = a · (1 · b) = a · b , (4.5.10)

and in index notation,

(g^k ⊗ g_k) : (a^i g_i ⊗ b_j g^j) = a^i b_j δ^k_i δ^j_k = a^k b_k . (4.5.11)

The trace of a product of two tensors S and T is defined by

tr (S T^T) = S : T , (4.5.12)

which is easy to prove just by writing it in index notation. Starting with this, some more important identities could be found,

tr T = tr T^T , (4.5.13)
tr (ST) = tr (TS) , (4.5.14)
tr (RST) = tr (TRS) = tr (STR) , (4.5.15)
tr [T (R + S)] = tr (TR) + tr (TS) , (4.5.16)
tr [(αS) T] = α tr (ST) , (4.5.17)
T : T = tr (T T^T) { > 0 if T ≠ 0 ; = 0 iff T = 0 }, i.e. T : T is positive definite, (4.5.18)
|T| = √(tr (T T^T)) — the absolute value (norm) of a tensor T, (4.5.19)

and finally the inequality

|S : T| ≤ |S| |T| . (4.5.20)

4.5.3 The Volumetric and Deviator Tensor

Like for the symmetric and skew parts of a tensor there are also a lot of notations for the volumetric and deviatoric parts of a tensor. The volumetric part of a tensor in the 3-dimensional Euclidean vector space E³ (n = 3) is defined by

T^V = T_vol = (1/n) (tr T) 1 , with T ∈ E³ ⊗ E³ . (4.5.21)

It is important to notice that all diagonal components T^V_{(i)(i)} of the volumetric part are equal, and all other components are equal to zero,

T^V_{ij} = 0 , if i ≠ j . (4.5.22)

The deviatoric part of a tensor is given by

T^D = T_dev = dev T = T − T_vol = T − T^V = T − (1/n)(tr T) 1 . (4.5.23)


4.5.4 The Transpose of a Tensor

The transpose T^T of a second order tensor T is defined by

w · (T · v) = v · (T^T · w) , with v, w ∈ V , and T ∈ V ⊗ V . (4.5.24)

For a dyadic product of two vectors the transpose is assumed as

w · [(a ⊗ b) · v] = v · [(b ⊗ a) · w] , (4.5.25)

and

(a ⊗ b)^T = (b ⊗ a) . (4.5.26)

The left-hand side of equation (4.5.25),

w · [(a ⊗ b) · v] = (w · a)(b · v) , (4.5.27)

and the right-hand side of equation (4.5.25),

v · [(b ⊗ a) · w] = (v · b)(a · w) = (a · w)(v · b) , (4.5.28)

are equal, q.e.d. For the transpose of a tensor the following identities hold,

(a ⊗ b)^T = (b ⊗ a) , (4.5.29)
(T^T)^T = T , (4.5.30)
1^T = 1 , (4.5.31)
(S + T)^T = S^T + T^T , (4.5.32)
(αT)^T = α T^T , (4.5.33)
(S · T)^T = T^T · S^T . (4.5.34)

The index notations w.r.t. the different bases are given by

T = T^{ij} g_i ⊗ g_j ⇒ T^T = T^{ij} g_j ⊗ g_i = T^{ji} g_i ⊗ g_j , (4.5.35)
T = T_{ij} g^i ⊗ g^j ⇒ T^T = T_{ij} g^j ⊗ g^i = T_{ji} g^i ⊗ g^j , (4.5.36)
T = T^i_j g_i ⊗ g^j ⇒ T^T = T^i_j g^j ⊗ g_i = T^j_i g^i ⊗ g_j , (4.5.37)
T = T_i^j g^i ⊗ g_j ⇒ T^T = T_i^j g_j ⊗ g^i = T_j^i g_i ⊗ g^j , (4.5.38)

and the relations between the tensor components,

(T^{ij})^T = T^{ji} , or (T_{ij})^T = T_{ji} , (4.5.39)

and

(T^i_j)^T = T_j^i , or (T_i^j)^T = T^j_i . (4.5.40)

4.5.5 The Symmetric and Antisymmetric (Skew) Tensor

There are a lot of different notations for the symmetric part of a tensor T, for example

T^S = T_sym = sym T , (4.5.41)

and for the antisymmetric or skew part of a tensor T,

T^A = T_asym = skew T . (4.5.42)

A second rank tensor is said to be symmetric, if and only if

T = T^T . (4.5.43)

And a second rank tensor is said to be antisymmetric or skew, if and only if

T = −T^T . (4.5.44)

The same statements in index notation,

T^{ij} = T^{ji} , if T is symmetric, (4.5.45)
T^{ij} = −T^{ji} , if T is antisymmetric. (4.5.46)

Any second rank tensor can be written as the sum of a symmetric tensor and an antisymmetric tensor,

T = T^S + T^A = T_sym + T_asym = sym T + skew T = (1/2)(T + T^T) + (1/2)(T − T^T) . (4.5.47)

The symmetric part of a tensor is defined by

T^S = T_sym = sym T = (1/2)(T + T^T) , (4.5.48)

and the antisymmetric (skew) part of a tensor is defined by

T^A = T_asym = skew T = (1/2)(T − T^T) . (4.5.49)
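The additive split into symmetric and skew parts, equations (4.5.47)-(4.5.49), checked on an arbitrary tensor (added sketch); the two parts are also orthogonal w.r.t. the inner product of tensors.

import numpy as np

rng = np.random.default_rng(1)
T = rng.standard_normal((3, 3))

T_sym = 0.5 * (T + T.T)                    # eq. (4.5.48)
T_skew = 0.5 * (T - T.T)                   # eq. (4.5.49)

assert np.allclose(T_sym, T_sym.T)
assert np.allclose(T_skew, -T_skew.T)
assert np.allclose(T_sym + T_skew, T)      # eq. (4.5.47)

# the symmetric and skew parts are orthogonal: sym T : skew T = 0
assert np.isclose(np.einsum('ij,ij->', T_sym, T_skew), 0.0)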


4.5.6 The Inverse of a Tensor

The inverse of a tensor T exists, if for any two vectors v and w the expression

w = Tv (4.5.50)

could be transformed into

v = T^{-1} w . (4.5.51)

Comparing these two equations gives

T T^{-1} = T^{-1} T = 1 , and (T^{-1})^{-1} = T . (4.5.52)

The inverse of a tensor,

T^{-1} , exists, if and only if det T ≠ 0. (4.5.53)

The inverse of the product of a scalar and a tensor is defined by

(αT)^{-1} = (1/α) T^{-1} , (4.5.54)

and the inverse of a product of two tensors is defined by

(ST)^{-1} = T^{-1} S^{-1} . (4.5.55)

4.5.7 The Orthogonal Tensor

An orthogonal tensor Q satisfies

Q Q^T = Q^T Q = 1 , i.e. Q^{-1} = Q^T . (4.5.56)

From this it follows that the mapping w = Qv with w · w = v · v implies

w Q v = v Q^T w = v Q^{-1} w . (4.5.57)

The orthogonal mapping of two arbitrary vectors v and w is rewritten with the definition of the transpose (4.5.24),

(Qw) · (Qv) = w Q^T · Q v , (4.5.58)

and with the definition of the orthogonal tensor (4.5.56),

(Qw) · (Qv) = w · v . (4.5.59)

The scalar product of two vectors equals the scalar product of their orthogonal mappings. For the square value of a vector and its orthogonal mapping equation (4.5.59) denotes

(Qv)² = v² . (4.5.60)

Sometimes equation (4.5.59), and not (4.5.56), is used as the definition of an orthogonal tensor. The orthogonal tensor Q describes a rotation. For the special case of the Cartesian basis the components of the orthogonal tensor Q are given by the cosines of the rotation angles,

Q = q_{ik} e_i ⊗ e_k ; q_{ik} = cos (∠(e_i ; ē_k)) , and det Q = ±1 . (4.5.61)

If det Q = +1, then the tensor is called a proper orthogonal tensor or a rotator.

4.5.8 The Polar Decomposition of a Tensor

The polar decomposition of a nonsingular second order tensor T is given by

T = R U , or T = V R , with T ∈ V ⊗ V , and det T ≠ 0 . (4.5.62)

In the polar decomposition the tensor R = Q is chosen as an orthogonal tensor, i.e. R^T = R^{-1} and det R = ±1. In this case the tensors U and V are positive definite and symmetric tensors. The tensor U is named the right-hand Cauchy strain tensor and V is named the left-hand Cauchy strain tensor. Both describe the strains, e.g. if a ball (a circle) is deformed into an ellipsoid (an ellipse); R, on the other hand, represents a rotation. Figure (4.6) implies the definition of the vectors

[Figure 4.6: The polar decomposition of the deformation gradient F = T.]

dz = R · dX , (4.5.63)

and

dx = V · dz . (4.5.64)

The composition of these two linear mappings is given by

dx = V · R · dX = F · dX . (4.5.65)


The other possible way to describe the composition is with the intermediate vector dz̃,

dx = R · dz̃ , (4.5.66)

and

dz̃ = U · dX , (4.5.67)

and finally

dx = R · U · dX = F · dX . (4.5.68)

The composed tensor F is called the deformation gradient, and its polar decomposition is given by

T ≡ F = R · U = V · R . (4.5.69)

4.5.9 The Physical Components of a Tensor

In general a tensor w.r.t. the covariant basis g_i is given by

T = T^{ik} g_i ⊗ g_k . (4.5.70)

The physical components T*^{ik} are defined w.r.t. the normalized base vectors by

T = T*^{ik} (g_i / |g_i|) ⊗ (g_k / |g_k|) = (T*^{ik} / (√(g_{(i)(i)}) √(g_{(k)(k)}))) g_i ⊗ g_k , (4.5.71)

i.e.

T*^{ik} = T^{ik} √(g_{(i)(i)}) √(g_{(k)(k)}) . (4.5.72)

The stress tensor T = τ^{ik} g_i ⊗ g_k is given w.r.t. the basis g_i. Then the associated stress vector t^i w.r.t. a point in the sectional area da_i is defined by (see figure 4.7)

[Figure 4.7: An example of the physical components of a second order tensor.]

t^i = df^i / da_{(i)} ; df^i = t^i da_{(i)} , (4.5.73)
t^i = τ^{ik} g_k ; df^i = τ^{ik} g_k da_{(i)} , (4.5.74)

with the differential force df^i. Furthermore the sectional area and its absolute value are given by

da_i = da_{(i)} g^{(i)} , (4.5.75)

and

|da_i| = da_{(i)} |g^{(i)}| = da_{(i)} √(g^{(i)(i)}) . (4.5.76)

The definition of the physical stresses τ*^{ik}, referred to the unit vector g_k / |g_k| and the absolute value of the sectional area, is given by

df^i = τ*^{ik} (g_k / |g_k|) da_{(i)} √(g^{(i)(i)}) = τ*^{ik} (g_k / √(g_{(k)(k)})) da_{(i)} √(g^{(i)(i)}) . (4.5.77)

Comparing equations (4.5.74) and (4.5.77) implies

(τ^{ik} − τ*^{ik} √(g^{(i)(i)}) / √(g_{(k)(k)})) g_k da_{(i)} = 0 ,

and finally the definition of the physical components of the stress tensor,

τ*^{ik} = τ^{ik} √(g_{(k)(k)}) / √(g^{(i)(i)}) . (4.5.78)

4.5.10 The Isotropic Tensor

An isotropic tensor is a tensor which has the same components in every rotated (Cartesian) coordinate system. Every tensor of order zero, i.e. every scalar quantity, is an isotropic tensor, but no first order tensor, i.e. no vector, could be isotropic. The unique isotropic second order tensor is the Kronecker delta, see section (4.1).
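Returning to the polar decomposition of section 4.5.8: a standard numerical route — an assumption about the implementation, not given in the script — computes R, U and V from the singular value decomposition F = W diag(s) V^T.

import numpy as np

rng = np.random.default_rng(2)
F = rng.standard_normal((3, 3))
if np.linalg.det(F) < 0:                   # ensure det F > 0, so that R is a rotation
    F[0] *= -1.0

W, s, Vt = np.linalg.svd(F)                # F = W diag(s) Vt
R = W @ Vt                                 # orthogonal part (rotator), eq. (4.5.62)
U = Vt.T @ np.diag(s) @ Vt                 # right stretch tensor, symmetric pos. def.
V_left = W @ np.diag(s) @ W.T              # left stretch tensor

assert np.allclose(R @ U, F) and np.allclose(V_left @ R, F)   # F = R U = V R
assert np.allclose(R @ R.T, np.eye(3))                        # R is orthogonal
assert np.all(np.linalg.eigvalsh(U) > 0)                      # U is positive definite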


4.6 The Principal Axes of a Tensor

4.6.1 Introduction to the Problem

The computation of the

• invariants,
• eigenvalues and eigenvectors,
• vectors of associated directions, and
• principal axes

is described for the problem of the principal axes of a stress tensor, also called the directions of principal stress. The Cauchy stress tensor in Cartesian coordinates is given by

T = T^{ik} e_i ⊗ e_k . (4.6.1)

This stress tensor T is symmetric because of the equilibrium condition of moments. With this condition the shear stresses in orthogonal sections are equal (see figure 4.8),

[Figure 4.8: Principal axis problem with Cartesian coordinates.]

T^{ik} = T^{ki} , and (4.6.2)
T = T^T = T^{ki} e_i ⊗ e_k . (4.6.3)

The stress vector in the section surface given by e_1 = e^1 is defined by the linear mapping of the normal unit vector e_1 with the tensor T,

t^1 = T · (−e_1) = −T^{ik} (e_i ⊗ e_k) e_1 = −T^{ik} δ_{k1} e_i , (4.6.4)
t^1 = −T^{i1} e_i . (4.6.5)

The stress tensor T assigns the resulting stress vector t^{(n)} to the direction of the normal vector n perpendicular to the section surface. This linear mapping fulfills the equilibrium conditions,

t^{(n)} = T · n , (4.6.6)

with the normal vector

n = n^l e_l , n · e^k = n^l δ_l^k = n^k = cos (∠(n, e_k)) , (4.6.7)

and the absolute value |n| = 1. The stress vector in direction of n is computed by

t^{(n)} = (T^{ik} e_i ⊗ e_k) · (n^l e_l) = T^{ik} n^l (e_k · e_l) e_i = T^{ik} n^l δ_{kl} e_i , (4.6.8)
t^{(n)} = T^{ik} n_k e_i . (4.6.9)

The action of the tensor T on the normal vector n reduces the order of the second order tensor T (the stress tensor) to a first order tensor t^{(n)} (the stress vector in direction of n).

Lemma 4.6 (Principal axes problem). Does there exist a direction n_0 in space such that the resulting stress vector t^{(n_0)} is oriented in this direction, i.e. such that the vector n_0 fulfills the following equations?

t^{(n_0)} = λ n_0 (4.6.10)
          = λ 1 · n_0 . (4.6.11)


Comparing equations (4.6.6) and (4.6.11) leads to T · n 0 = λ1 · n0 and therefore to equation (4.6.12) is denoted like this
³ ´
(T − λ1) · n0 = 0. (4.6.12) T̃ ik gi ⊗ gk − λg ik gi ⊗ gk · nl0 gl = 0, (4.6.19)
³ ´
For this special case of eigenvalue problem . . . T̃ ik − λg ik nl0 (gk · gl ) gi = 0,
³ ´
• the directions n0j are called the principal stress directions, and they are given by the eigen- T̃ ik − λg ik nl0 gkl gi = 0,
vectors. ³ ´
T̃ ik − λg ik n0k gi = 0,
• and the λj = τj are called the eigenvalues or resp. in this case the principal stresses.
and £nally in index notation
4.6.2 Components in a Cartesian Basis ³ ´
T̃ ik − λg ik = 0,
The equation (4.6.12) is rewritten with index notaion ³ ´
¡ ik ¢ det T̃ ik − λg ik = 0. (4.6.20)
T ei ⊗ ek − λδ ik ei ⊗ ek · nl0 el = 0, (4.6.13)
¡ ik ¢
T − λδ ik nl0 (ek · el ) ei = 0, And the same in mixed formulation is given by
¡ ik ¢
T − λδ ik nl0 δkl ei = 0, ¡ ¢
¡ ik ¢ Tki − λδki nk0 = 0,
T − λδ ik n0k ei = 0, ¡ i ¢
det Tk − λδki = 0. (4.6.21)
and £nally
¡ ¢ 4.6.4 Characteristic Polynomial and Invariants
T ik − λδ ik n0k = 0. (4.6.14)
The characteristic polynomial of an eigenvalue problem with the invariants I 1 , I2 , and I3 in a
This equation could be represented in matrix notation, because it is given in a Cartesian basis, 3-dimensional space E3 is de£ned by
¡£ ik ¤ £ ¤¢
T − λ δ ik [n0k ] = [0] , (4.6.15) f (λ) = I3 − λI2 + λ2 I1 − λ3 = 0. (4.6.22)
(T − λ1) n0 = 0,
 11      For a Cartesian basis the equation (4.6.22) becomes a cubic equation, because of being an eigen-
T −λ T 12 T 13 n01 0
 T 21 22
T −λ T 23  
· n02  = 0 . (4.6.16) value problem in E3 with the invariants given by
T 31 T 32 T 33 − λ n03 0
I1 = tr T = gik T̃ ik = δik T ik = Tkk = T kk , (4.6.23)
This is a linear homogenous system of equations for n01 , n02 and n03 , then non-trivial solutuions 1£ ¤ 1 £ ii kk ¤
I2 = (tr T)2 − tr (T)2 = T T − T ik T ki , (4.6.24)
exist, if and only if 2 ¡ ¢ 2
det(T − λ1) = 0, (4.6.17) I3 = det T = det T ik . (4.6.25)
or in index notation
The fundamental theorem of algebra implies that there exists three roots λ 1 , λ2 , and λ3 , such that
det(T ik − λδ ik ) = 0. (4.6.18)
the following equations hold

4.6.3 Components in a General Basis I1 = λ 1 + λ 2 + λ 3 , (4.6.26)


ik I2 = λ 1 λ 2 + λ 2 λ 3 + λ 3 λ 1 , (4.6.27)
In a general basis with a stress tensor with covariant base vectors T = T̃ gi ⊗ gk , a normal
vector n0 = nl0 gl , and a unit tensor with covariant base vectors 1 = G = g ik gi ⊗ gk , too the I3 = λ 1 λ 2 λ 3 . (4.6.28)

TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003 TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003
124 Chapter 4. Vector and Tensor Algebra 4.6. The Principal Axes of a Tensor 125

4.6.5 Principal Axes and Eigenvalues of Symmetric Tensors 4.6.7 Example


The assumption that the tensor is symmetric is denoted in tensor and index notation like this Compute the characteristic polynomial, the eigenvalues and the eigenvectors of this matrix
TT = T, T ik = T ki and resp. in matrix notation T T = T . The eigenvalue problem with  
λj 6= λl is given in matrix notation by 1 −4 8
−4 7 4 .
(T − λ1) n0 = 0, (4.6.29) 8 4 1
in matrix notation for an an arbitrary λj
By expanding the determinant det (A − λ1) the characteristic polynomial becomes
+nT0l · | (T − λj 1) n0j = 0, (4.6.30)
p (λ) = −λ + 9λ2 + 81λ − 729,
and with an arbitrary λl
with the (real) eigenvalues
−nT0j · | (T − λl 1) n0l = 0. (4.6.31) λ1 = −9, λ2,3 = 9.
The addition of the last two equations leads to the relation, For this eigenvalues the orthogonal eigenvectors are established by
nT0l T n0j − λj nT0l n0j − nT0j T n0l + λl nT0j n0l = 0. (4.6.32)      
2 1 −2
A quadratic form implies x1 =  1  , x2 =  2  und x3 =  2  .
¡ ¢T
nT0j T n0l = nT0j T n0l = nT0l T T n0j = nT0l T n0j , (4.6.33) −2 2 −1

i.e.
4.6.8 The Eigenvalue Problem in a General Basis
(λl − λj ) nT0j n0l ≡ 0. (4.6.34) Let T be an arbitrary tensor, given in the basis gi with i = 1, . . . , n, and de£ned by
This equation holds, if and only if λl − λj 6= 0 and nT0j n0l = 0. The conclusion of this relation
between the eigenvalues λj , λl , and the normal unit vectors n0j , and n0l is, that the normal vectors T = T̃ ik gi ⊗ gk . (4.6.39)
are orthogonal to each other.
The identity tensor in Cartesian coordinates 1 = δ ik ei ⊗ ek is substituted by the identity tensor
1, de£ned by
4.6.6 Real Eigenvalues of a Symmetric Tensors 1 = g ik gi ⊗ gk . (4.6.40)
Two complex conjugate eigenvalues are denoted by Then the eigenvalue problem
λj = β + iγ, (4.6.35) (T − λ1) · n0 = 0 (4.6.41)
λl = β − iγ. (4.6.36) is substituted by the eigenvalue problem in general coordinates given by
The coordinates of the associated eigenvectors n0j and n0l could be written as column matrices ³ ´
n0j and n0l . Furthermore the relations n0j = b + ic and n0l = b + ic hold. Comparing this with T̃ ik gi ⊗ gk − λg ik gi ⊗ gk · nl0 gl = 0, (4.6.42)
the equation (4.6.34) implies
with the vector n0 = nl0 gl in the direction of a principal axis. Begining with the eigenvalue
(λl − λj ) nT0j n0l = 0, problem
2iγ (b + ic)T (b − ic) = 0, ³ ´
¡ ¢
2iγ bT b + cT c = 0, (4.6.37) T̃ ik − λg ik · nl0 (gk · gl ) gi = 0
³ ´
bT b + cT c 6= 0, (4.6.38) T̃ ik − λg ik nl0 gkl gi = 0
³ ´
i.e. γ = 0 and the eigenvalues are real numbers. The result is a symmetric stress tensor with
T̃ ik − λg ik n0k gi = 0, (4.6.43)
three real principale stresses and the associated directions being orthogonal to each other.

TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003 TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003
126 Chapter 4. Vector and Tensor Algebra 4.7. Higher Order Tensors 127

and with 4.7 Higher Order Tensors


n0k 6= 0, (4.6.44) 4.7.1 Review on Second Order Tensor
results £nally the condition A complete second order tensor T maps a vector u for example in the vector space V, like this
³ ´
ik ik
T̃ − λg n0k = 0. (4.6.45) v = Tu , with u, v ∈ V , and T ∈ V ⊗ V. (4.7.1)
The tensor T could be written in the mixed notation,
For example in index notation with a vector basis gi ∈ V, a vector is given by
T= T̃ki gi k
⊗g , (4.6.46)
u = u i gi = u i g i , (4.7.2)
and with the identity tensor also in mixed notation,
and a second order tensor by
1 = δki gi ⊗ gk . (4.6.47)
T = T jk gj ⊗ gk , with gj , gk ∈ V. (4.7.3)
The eigenvalue problem
¡ ¢ ¡ ¢ Than a linear mapping with a second order tensor is given by
Tki − λδki · nl0 gk · gl gi = 0
¡ i ¢ v = T jk (gj ⊗ gk ) ui gi (4.7.4)
Tk − λδki · nl0 δlk gi = 0, (4.6.48)
= T jk ui (gk · gi ) gj
implies the condition ¡ ¢ = T jk ui gki gj ,
Tki − λδki n0k = 0. (4.6.49)
v = Tij ui gj = v j gj . (4.7.5)

But the matrix [Tki ] = T is nonsymmetric. For this reason it is necessary to control the orthogo-
nality of the eigenvectors by a decomposition. 4.7.2 Introduction of a Third Order Tensor
After having a close look at a second order tensor, and realizing that a vector is nothing else but
a £rst order tensor, it is easy to understand, that there might be also a higher order tensor. In the
same way like a second order tensor maps a vector onto another vector, a complete third order
tensor maps a vector onto a second order tensor. For example in index notation with a vector
basis gi ∈ V, a vector is given

u = u i gi = u i g i , (4.7.6)

and a complete third order tensor by


3
A = Ajkl gj ⊗ gk ⊗ gl , with gj , gk , gl ∈ V. (4.7.7)

Than a linear mapping with a third order tensor is given by

T = Ajkl (gj ⊗ gk ⊗ gl ) ui gi (4.7.8)


= Ajkl ui (gl · gi ) (gj ⊗ gk )
= Ajkl ui gli (gj ⊗ gk ) ,
T = Ajkl ul (gj ⊗ gk ) = T jk (gj ⊗ gk ) . (4.7.9)

TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003 TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003
128 Chapter 4. Vector and Tensor Algebra 4.7. Higher Order Tensors 129

4.7.3 The Complete Permutation Tensor Really important is the so called elasticity tensor C used in elasticity theory. This is a fourth
order tensor, which maps the strain tensor ε onto the stress tensor σ,
The most important application of a third order tensor is the third order permutation tensor, which
is the correct description of the permutation symbol in section (4.2). The complete permutation σ = Cε. (4.7.18)
tensor is antisymmetric and in the space E3 just represented by a scalar quantity (positive or
negative), see also the permutation symbols e and ε in section (4.2). The complete permutation Comparing this with the well known unidimensional Hooke’s law it is easy to see that this map-
tensor in Cartesian coordinates or also called the third order fundamental tensor is de£ned by ping is the generalized 3-dimensional linear case of Hooke’s law. The elasticity tensor C has in
3 3 general in space E3 the total number of 34 = 81 components. Because of the symmetry of the
E = eijk ei ⊗ ej ⊗ ek , or E = eijk ei ⊗ ej ⊗ ek . (4.7.10) strain tensor ε and the stress tensor σ this number reduces to 36. With the potential character
For the orthonormal basis ei the components of the covariant eijk and contravariant e permu- ijk of the elastic stored deformation energy the number of components reduces to 21. For an elastic
tation tensor (symbol) are equal and isotropic material there is another reduction to 2 independent constants, e.g. the Young’s
ei × ej = eijk · ek . (4.7.11) modulus E and the Poisson’s ratio ν.

Equation (4.2.26) in section 3(4.2) is the short form of a product in Cartesian coordinates be-
tween the third order tensor E and the second order identity tensor. This scalar product of the 4.7.5 Tensors of Various Orders
permutation tensor and ei ⊗ ej from the right-hand side yields, Higher order tensor are represented with the dyadic products of vectors, e.g. a simple third order
¡ ¢ tensor and a complete third order tensor,
ei × ej = ersk er ⊗ es ⊗ ek : (ei ⊗ ej )
n n
= ersk δis δjk er = erij er , 3 X X
B= ai ⊗ b i ⊗ c i = Ti ⊗ ci = B ijk gi ⊗ gj ⊗ gk , (4.7.19)
ei × ej = eijr er , (4.7.12) i=1 i=1

or with ej ⊗ ei from the left-hand side yields and a simple fourth order tensor and a complete fourth order tensor,
¡ ¢
ei × ej = (ei ⊗ ej ) : ersk er ⊗ es ⊗ ek n n
X X
= ersk δir δjs ek , C= ai ⊗ b i ⊗ c i ⊗ d i = Si ⊗ Ti = C ijkl gi ⊗ gj ⊗ gk ⊗ gl . (4.7.20)
k i=1 i=1
ei × ej = eijk e . (4.7.13)
For example the tensors form order zero till order four are summarized in index notation with a
4.7.4 Introduction of a Fourth Order Tensor basis gi ,

The action of a fourth order tensor C, given by a scalar quantity, or a tensor of order zero
(0)
α = α, (4.7.21)
(1)
C = C ijkl (gi ⊗ gj ⊗ gk ⊗ gl ) , (4.7.14) a vector, or a £rst order tensor v = v = vi g , i
(4.7.22)
(2)
on a second order tensor T given by a second order tensor T = T = T jk gj ⊗ gk , (4.7.23)
3
mn a third order tensor B = B ijk gi ⊗ gj ⊗ gk , (4.7.24)
T=T (gm ⊗ gn ) , (4.7.15)
a fourth order tensor C = C = C ijkl gi ⊗ gj ⊗ gk ⊗ gl . (4.7.25)
is given in index notation, see also equation (4.4.104), by
¡ ¢
S = C : T = C ijkl gi ⊗ gj ⊗ gk ⊗ gl : (T mn gm ⊗ gn ) , (4.7.16)
= C ijkl T mn (gk · gm ) (gl · gn ) (gi ⊗ gj ) ,
= C ijkl T mn gkm gln (gi ⊗ gj ) = C ijkl Tkl (gi ⊗ gj ) ,
S = S ij gi ⊗ gj . (4.7.17)

TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003 TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003
130 Chapter 4. Vector and Tensor Algebra

Chapter 5

Vector and Tensor Analysis

S IMMONDS [12], H ALMOS [6], A BRAHAM, M ARSDEN, and R ATIU [1], and M ATTHEWS [11].
And in german DE B OER [3], S TEIN ET AL . [13], and I BEN [7].

TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003 131
132 Chapter 5. Vector and Tensor Analysis 5.1. Vector and Tensor Derivatives 133

5.1 Vector and Tensor Derivatives


Chapter Table of Contents
5.1.1 Functions of a Scalar Variable
5.1 Vector and Tensor Derivatives . . . . . . . . . . . . . . . . . . . . . . . . . 133
A scalar function could be represented by another scalar quantity, a vector, or even a tensor,
5.1.1 Functions of a Scalar Variable . . . . . . . . . . . . . . . . . . . . . . 133
which depends on one scalar variable. These different types of scalar functions are denoted by
5.1.2 Functions of more than one Scalar Variable . . . . . . . . . . . . . . . 134
5.1.3 The Moving Trihedron of a Space Curve in Euclidean Space . . . . . . 135 β = β̂ (α) a scalar-valued scalar function, (5.1.1)
5.1.4 Covariant Base Vectors of a Curved Surface in Euclidean Space . . . . 138 v = v̂ (α) a vector-valued scalar function, (5.1.2)
5.1.5 Curvilinear Coordinate Systems in the 3-dim. Euclidean Space . . . . . 139
and
5.1.6 The Natural Basis in the 3-dim. Euclidean Space . . . . . . . . . . . . 140
5.1.7 Derivatives of Base Vectors, Christoffel Symbols . . . . . . . . . . . . 141 T = T̂ (α) a tensor-valued scalar function, (5.1.3)
5.2 Derivatives and Operators of Fields . . . . . . . . . . . . . . . . . . . . . . 143
5.2.1 De£nitions and Examples . . . . . . . . . . . . . . . . . . . . . . . . 143
The usual derivative w.r.t. a scalar variable α of the equation (5.1.1) is established with the
Taylor series of the scalar-valued scalar function β̂ (α) at a value α,
5.2.2 The Gradient or Frechet Derivative of Fields . . . . . . . . . . . . . . 143
¡ ¢
5.2.3 Index Notation of Base Vectors . . . . . . . . . . . . . . . . . . . . . 144 β̂ (α + τ ) = β̂ (α) + γ (α) · τ + O τ 2 . (5.1.4)
5.2.4 The Derivatives of Base Vectors . . . . . . . . . . . . . . . . . . . . . 145
The term γ (α) given by
5.2.5 The Covariant Derivative . . . . . . . . . . . . . . . . . . . . . . . . . 145
5.2.6 The Gradient in a 3-dim. Cartesian Basis of Euclidean Space . . . . . . 146
β̂ (α + τ ) − β̂ (α)
5.2.7 Divergence of Vector and Tensor Fields . . . . . . . . . . . . . . . . . 147 γ (α) = lim , (5.1.5)
τ →0 τ
5.2.8 Index Notation of the Divergence of Vector Fields . . . . . . . . . . . 148
is the derivative of a scalar w.r.t. a scalar quantity. The usual representations of the derivatives
5.2.9 Index Notation of the Divergence of Tensor Fields . . . . . . . . . . . 148 dβdα are given by
5.2.10 The Divergence in a 3-dim. Cartesian Basis in Euclidean Space . . . . 149 dβ β̂ (α + τ ) − β̂ (α)
γ (α) = = β 0 = lim . (5.1.6)
5.2.11 Rotation or Curl of Vector and Tensor Fields . . . . . . . . . . . . . . 150 dα τ →0 τ
5.2.12 Laplacian of a Field . . . . . . . . . . . . . . . . . . . . . . . . . . . 150 The Taylor series of the scalar-valued vector function, see equation (5.1.2), at a value α is given
5.3 Integral Theorems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
by ¡ ¢
v̂ (α + τ ) = v̂ (α) + y (α) · τ + O τ 2 . (5.1.7)
5.3.1 De£nitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
5.3.2 Gauss’s Theorem for a Vector Field . . . . . . . . . . . . . . . . . . . 155
The derivative of a vector w.r.t. a scalar quantity α is de£ned by
5.3.3 Divergence Theorem for a Tensor Field . . . . . . . . . . . . . . . . . 156 dv v̂ (α + τ ) − v̂ (α)
y (α) = = v0 = lim . (5.1.8)
5.3.4 Integral Theorem for a Scalar Field . . . . . . . . . . . . . . . . . . . 156 dα τ →0 τ
5.3.5 Integral of a Cross Product or Stokes’s Theorem . . . . . . . . . . . . 157 The total differential or also called the exact differential of the vector function v̂ (α) is given by
5.3.6 Another Interpretation of Gauss’s Theorem . . . . . . . . . . . . . . . 158
dv = v0 dα. (5.1.9)

The second derivative of the scalar-valued vector function v̂ (α) at a value α is given by
µ ¶
dy (α) d dv
= = v00 . (5.1.10)
dα dα dα

TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003 TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003
134 Chapter 5. Vector and Tensor Analysis 5.1. Vector and Tensor Derivatives 135

The Taylor series of the scalar-valued tensor function, see equation (5.1.3), at a value α is given With this partial derivatives β,i the exact differential of the function β is given by
by ¡ ¢
T̂ (α + τ ) = T̂ (α) + Y (α) · τ + O τ 2 . (5.1.11) dβ = β,i dαi . (5.1.24)
This implies the derivative of a tensor w.r.t. a scalar quantity, The partial derivatives of the vector-valued function (5.1.21) w.r.t. the scalar variable α i are
de£ned by
dT T̂ (α + τ ) − T̂ (α)
Y (α) = = T0 = lim . (5.1.12)
dα τ →0 τ ∂v v̂ (α1 , . . . , αi + τ, . . . , αn ) − v̂ (α1 , . . . , αi , . . . , αn )
= v,i = lim , (5.1.25)
In the following some important identities are listed, ∂αi τ →0 τ

(λv)0 = λ0 v + λv0 , (5.1.13) and its exact differential is given by


0 0 0 dv = v,i dαi . (5.1.26)
(vw) = v w + vw , (5.1.14)
0 The partial derivatives of the tensor-valued function (5.1.22) w.r.t. the scalar variable α i are
(v × w) = v0 × w + v × w0 , (5.1.15)
0 0 0
de£ned by
(v ⊗ w) = v ⊗ w + v ⊗ w , (5.1.16)
(Tv)0 = T0 v + Tv0 , (5.1.17) ∂T T̂ (α1 , . . . , αi + τ, . . . , αn ) − T̂ (α1 , . . . , αi , . . . , αn )
= T,i = lim , (5.1.27)
0 0
(ST) = S T + ST , 0
(5.1.18) ∂αi τ →0 τ
¡ −1 ¢0 and the exact differential is given by
T = −T−1 T0 T−1 . (5.1.19)
As a short example for a proof of the above identities the proof of equation (5.1.19) is given by dT = T,i dαi . (5.1.28)

TT−1 = 1,
¡ ¢0 ¡ ¢0 5.1.3 The Moving Trihedron of a Space Curve in Euclidean Space
TT−1 = T0 T−1 + T T−1 = 0,
¡ ¢0 A vector function x = x (Θ1 ) with one variable Θ1 in the Euclidean vector space E3 could be
⇒ −T0 T−1 = T T−1 ,
¡ ¢0 represented by a space curve. The vector x (Θ1 ) is the position vector from the origin O to the
⇒ T−1 = −T−1 T0 T−1 . point P on the space curve. The tangent vector t (Θ1 ) at a point P is then de£ned by
¡ ¢ ¡ ¢ dx (Θ1 )
5.1.2 Functions of more than one Scalar Variable t Θ1 = x 0 Θ1 = . (5.1.29)
dΘ1
Like for the functions of one scalar varaible it is also possible to de£nite varoius functions of The tangent unit vector or just the tangent unit of the space curve at a point P with the position
more than one scalar variable, e.g. vector x is de£ned by
β = β̂ (α1 , α2 , . . . , αi , . . . , αn ) a scalar-valued function of multiple variables, (5.1.20) ∂x ∂Θ1 dx
v = v̂ (α1 , α2 , . . . , αi , . . . , αn ) a vector-valued function of multiple variables, (5.1.21) t (s) = = , and |t (s)| = 1. (5.1.30)
∂Θ1 ∂s ds
and £nally The normal vector at a point P on a space curve is de£ned with the derivative of the tangent
vector w.r.t. the curve parameter s by
T = T̂ (α1 , α2 , . . . , αi , . . . , αn ) a tensor-valued function of multiple variables. (5.1.22)
∗ dt d2 x
n= = 2. (5.1.31)
In stead of establishing the total differentials like in the section before, it is now necessary to ds ds
establish the partial derivatives of the functions w.r.t. the various variables. Starting with the
The term 1/ρ is a measure of the curvature or just the curvature of a space curve at a point P .
scalar-valued function (5.1.20), the partial derivative w.r.t. the i-th scalar variable α i is de£ned ∗
by The normal vector n at a point P is perpendicular to the tangent vector t at this point,
∂β β̂ (α1 , . . . , αi + τ, . . . , αn ) − β̂ (α1 , . . . , αi , . . . , αn ) ∗ ∗
= β,i = lim . (5.1.23) n⊥t i.e. n · t = 0,
∂αi τ →0 τ

TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003 TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003
136 Chapter 5. Vector and Tensor Analysis 5.1. Vector and Tensor Derivatives 137

- n M
s = s (Θ1 ) P
µ
t (Θ1 ) t
q 1
∆x
x (Θ1 ) Q
R
1
e3 6 ρ
x (Θ1 + ∆Θ1 )
- ª
O e2 b
ª
e1 M

Figure 5.1: The tangent vector in a point P on a space curve. Figure 5.2: The moving trihedron.

and the curvature is given by The so called binormal unit vector b or just the binormal unit is the vector perpendicular to the
tangent vector t, and the normal vector n at a point P , and de£ned by
1 d2 x d2 x
= 2 · 2. (5.1.32) b = t × n. (5.1.34)
ρ2 ds ds
The absolute value |b| of the binormal unit is a measure for the torsion of the curve in space at a
The proof of this assumption starts with the scalar product of two tangent vectors,
point P , and the derivative of the binormal vector w.r.t. the curve parameter s implies,
t · t = 1, db dt dn
d = ×n+t×
(t · t) = 0, ds ds ds
ds ∗ dn
dt =n×n+t×
2 · t = 0, ds
ds 1
=0+ n
τ
and £nally results, that the scalar product of the derivative w.r.t. the curve parameter and the db 1
tangent vector equals zero, i.e. this two vectors are perpendicular to each other, = n. (5.1.35)
ds τ
dt This yields the de£nition
⊥t.
ds 1
of the torsion of a curve at a point P ,
This implies the de£nition τ
and with equation (5.1.35) the torsion is given by
1 ¯¯ ∗ ¯¯
= ¯n ¯ of the curvature of a curve at a point P . µ ¶ 3
ρ 1 dx d2 x dx
= −ρ2 × 2 · 3. (5.1.36)
τ ds ds ds
With the curvature 1ρ the normal unit vector n or just the normal unit is de£ned by
The three unit vectors t, n and b form the moving trihedron of a space curve in every point P .

n = ρ · n. (5.1.33) The derivatives w.r.t. the curve parameter ds are the so called Serret-Frenet equations, and given

TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003 TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003
138 Chapter 5. Vector and Tensor Analysis 5.1. Vector and Tensor Derivatives 139

below, With this relation the other metric coef£cients are given by

dt 1 d2 x aα3 = 0 , and a33 = 1, (5.1.43)


= n = 2 , and b = t × n, (5.1.37)
ds ρ ds
dn 1 1 1 1 and £nally the determinant of the metric coef£cients is given by
= − n × b − t × n = − t − b, (5.1.38)
ds ρ τ ρ τ
db 1 d a = det aα,β . (5.1.44)
= n = − (t × b) . (5.1.39)
ds τ ds The absolute value of a line element dx is computed by

5.1.4 Covariant Base Vectors of a Curved Surface in Euclidean Space ds2 = dx · x = x,α dΘα · x,β dΘβ ,

The vector-valued function x = x (Θ1 , Θ2 ) with two scalar variables Θ1 , and Θ2 represents = aα · aβ dΘα dΘβ = aαβ dΘα dΘβ ,
a curved surface in the Euclidean vector space E3 . The covariant base vectors of the curved
and £nally
q
a2 Θ2 ⇒ ds = aαβ dΘα dΘβ . (5.1.45)
a3 o ¸
1ds
The differential element of area dA is given by

P dA = a dΘ1 dΘ2 . (5.1.46)
*
z
a1 The contravariant base vectors of the curved surface are computed with the metric coef£cients
and the covariant base vectors,
e3 6
x Θ1 aα = aαβ aβ , (5.1.47)

- and the Kronecker delta is given by


O e2
e1 aαβ aβγ = δγα . (5.1.48)
ª

Figure 5.3: The covariant base vectors of a curved surface.


5.1.5 Curvilinear Coordinate Systems in the 3-dim. Euclidean Space
The position vector x in an orthonormal Cartesian coordinate system is given by
surface are given by
∂x ∂x x = x i ei . (5.1.49)
a1 = , and a2 = . (5.1.40)
∂Θ1 ∂Θ2
The curvilinear coordinates or resp. the curvilinear coordinate system is introduced by the
The metric coef£cients of the curved surface are computed with the base vectors, and the follow-
following relations between the curvilinear coordinates Θ i and the Cartesian coordinates xj and
ing prede£nition for the small greek letters,
base vectors ej ,
¡ ¢
aαβ = aα aβ , and α, β = 1, 2. (5.1.41) Θi = Θ̂i x1 , x2 , x3 . (5.1.50)
The inverses of this relations in the domain are explicit de£ned by
The normal unit vector of the curved surface, perpendicular to the vectors a 1 , and a2 , is de£ned ¡ ¢
by xi = x̂i Θ1 , Θ2 , Θ3 , (5.1.51)
a1 × a 2
n = a3 = . (5.1.42) if the following conditions hold, . . .
|a1 × a2 |

TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003 TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003
140 Chapter 5. Vector and Tensor Analysis 5.1. Vector and Tensor Derivatives 141

Θ3 Θ3

g2M
± g3
Θ1 Θ1
Θ2 Θ2
1
x3 g1

3 3
P P
e3 6 e3 6
x x
- -
O e2 x2 O e2
e1 e1
ª ª
x1

Figure 5.4: Curvilinear coordinates in a Cartesian coordinate system. Figure 5.5: The natural basis of a curvilinear coordinate system.

• the function is at least one-time continuous differentiable, position vectors of the points along this curvilinear coordinate. The vector x w.r.t. the covariant
basis is given by
• and the Jacobian or more precisely the determinant of the Jacobian matrix is not equal to x = x̄i gi . (5.1.55)
zero, · i¸ For each covariant natural basis gk an associated contravariant basis with the contravariant base
∂x vectors of the natural basis g i is de£ned by
J = det 6= 0. (5.1.52)
∂Θk
gk gi = δki . (5.1.56)
The vector x w.r.t. the curvilinear coordinates is represented by The vector x w.r.t. the contravariant basis is represented by
¡ ¢
x = x̂i Θ1 , Θ2 , Θ3 ei . (5.1.53) x = x̄¯i gi . (5.1.57)
The covariant coordinates x̄ and the contravariant coordinates x̄¯i of the position vector x are
i

connected by the metric coef£cients like this,


5.1.6 The Natural Basis in the 3-dim. Euclidean Space
x̄¯i = gik x̄k , (5.1.58)
A basis in the point P represented by the position vector x and tangential to the curvilinear
x̄i = g ik x̄¯k . (5.1.59)
coordinates Θi is introduced by

∂ x̂i (Θ1 , Θ2 , Θ3 ) 5.1.7 Derivatives of Base Vectors, Christoffel Symbols


gk = ei . (5.1.54)
∂Θk
The derivative of a covariant base vector gi ∈ E3 w.r.t. a coordinate Θk is again a vector, which
These base vectors gk are the covariant base vectors of the natural basis and form the so called could be described by a linear combination of the base vectors g 1 , g2 , and g3 ,
natural basis. In general these base vectors are not perpendicular to each other. Furthermore ∂gi !
this basis gk changes along the curvilinear coordinates in every point, because it depends on the = gi,k = Γsik gs . (5.1.60)
∂Θk
TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003 TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003
142 Chapter 5. Vector and Tensor Analysis 5.2. Derivatives and Operators of Fields 143

The Γsik are the components of the Christoffel symbol Γ(i) . The Christoffel symbol could be 5.2 Derivatives and Operators of Fields
described by a second order tensor w.r.t. the basis gi ,
5.2.1 De£nitions and Examples
Γ(i) = Γsij gs ⊗ gj . (5.1.61)
A function of an Euclidean vector, for example of a vector of position x ∈ E 3 is called a £eld.
With this de£nition of the Christoffel symbol as a second order tensor a linear mapping of the The £elds are seperated in three classes by their value,
base vector gk is given by
α = α̂ (x) the scalar-valued vector function or scalar £eld, (5.2.1)
¡ ¢
gi,k = Γ(i) · gk = Γsij gs ⊗ gj · gk (5.1.62) v = v̂ (x) the vector-valued vector function or vector £eld, (5.2.2)
s
¡ j ¢ s j
= Γij g · gk gs = Γij δk gs ,
and
and £nally
T = T̂ (x) the tensor-valued vector function or tensor £eld. (5.2.3)
gi,k = Γsik gs . (5.1.63) For example some frequently used £elds in the Euclidean vector space are,
Equation (5.1.63) is again the de£niton of the Christoffel symbol, like in equation (5.1.60). With • scalar £elds - temperature £eld, pressure £eld, density £eld,
this relation the components of the Christoffel symbol could be computed, like this
• vector £elds - velocity £eld, acceleration £eld,
gi,k · gs = Γrik gr · gs = Γrik δrs = Γsik . (5.1.64)
• tensor £elds - stress state in a volume element.
Like by any other second order tensor, the raising and lowering of the indices of the Christoffel
symbol is possible with the contravariant metric coef£cients g ls , and with the covariant metric 5.2.2 The Gradient or Frechet Derivative of Fields
coef£cients g ls , e.g.
Γikl = gls Γsik . (5.1.65) A vector-valued vector function or vector £eld v = v (x) is differentiable at a point P repre-
sented by a vector of position x, if the following linear mapping exists
Also important are the relations between the derivatives of the metric coef£cients w.r.t. to the
¡ ¢
coordinates Θi and the components of the Christoffel symbol, v̂ (x + y) = v̂ (x) + L (x) · y + O y2 , and |y| → 0, (5.2.4)
1 v̂ (x + y) − v̂ (x)
Γikl = (gkl,i + gil,k gik,l ) . (5.1.66) L (x) 1 = lim . (5.2.5)
2 |y|→0 |y|
The linear mapping L (x) is called the gradient or the Frechet derivative

L (x) = grad v̂ (x) . (5.2.6)

The gradient grad v̂ (x) of a vector-valued vector function (vector £eld) is a tensor-valued func-
tion (second order tensor depending on the vector of position x). For a scalar-valued vector
function or a scalar £eld α̂ (x) there exists an analogue to equation (5.2.4), resp. (5.2.5),
¡ ¢
α̂ (x + y) = α̂ (x) + l (x) · y + O y2 , and |y| → 0, (5.2.7)
α̂ (x + y) − α̂ (x)
l (x) · 1 = lim . (5.2.8)
|y|→0 |y|
And with this relation the gradient of the scalar £eld is a vector-valued vector function (vector
£eld) given by
l (x) = grad α̂ (x) . (5.2.9)

TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003 TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003
144 Chapter 5. Vector and Tensor Analysis 5.2. Derivatives and Operators of Fields 145

Finally for a tensor-valued vector function or a tensor £eld T̂ (x) the relations analogue to equa- 5.2.4 The Derivatives of Base Vectors
tion (5.2.4), resp. (5.2.5), are given by
In section (5.1) the , resp. the partial derivatives of base vectors g i w.r.t. the coordinates Θk
3 3 ¡ ¢ were introduced by
2
T̂ (x + y) = T̂ (x) + L (x) · y + O y , and |y| → 0, (5.2.10) ∂gi
= gi,k = Γsik gs . (5.2.22)
3 T̂ (x + y) − T̂ (x) ∂Θk
L (x) 1 = lim . (5.2.11)
|y|→0 |y| With the Christoffel symbols de£ned by equations (5.1.60) and (5.1.61),

The gradient of the second order tensor £eld is a third order tensor-valued vector function (third Γ(i) = Γsij gs ⊗ gj , (5.2.23)
order tensor £eld) given by
3
L (x) = grad T̂ (x) . (5.2.12) the derivatives of base vectors are rewritten,

The gradient of a second order tensor grad (v ⊗ w) or grad T is a third order tensor, because of gi,k = Γ(i) · gk . (5.2.24)
(grad v) ⊗ w being a dyadic product of a second order tensor and a vector ("£rst order tensor")
is a third order tensor. For arbitrary scalar £elds α, β ∈ R, vector £elds v, w ∈ V, and tensor The de£nition of the gradient, equation (5.2.1), compared with equation (5.2.23) shows, that the
£eld T ∈ V ⊗ V the following identities hold, Christoffel symbols are computed by

Γ(i) = grad gi . (5.2.25)


grad (α β) = α grad β + β grad α, (5.2.13)
grad (αv) = v ⊗ grad α + α grad v, (5.2.14) Proof. The proof of this relation between the gradient of the base vectors and the Christoffel
grad (αT) = T ⊗ grad α + α grad T, (5.2.15) symbols,
grad (v · w) = (grad v)T · w + (grad w)T · v, (5.2.16) grad gi = Γ(i) = gi,j ⊗ gj , (5.2.26)
grad (v × w) = v × grad w + grad v × w, (5.2.17) is given by
grad (v ⊗ w) = [(grad v) ⊗ w] · grad w. (5.2.18) ¡ ¢
gi,k = Γ(i) · gk = gi,j ⊗ gj gk
¡ j ¢
It is important to notice, that the gradient of a vector of position is the identity tensor, = g · gk gi,j = δkj gi,j
gi,k = gi,k .
grad x = 1. (5.2.19)

5.2.3 Index Notation of Base Vectors Finally the gradient of a base vector is represented in index notation by

Most of the up now discussed relations hold for all n-dimensional Euclidean vector spaces E n , grad gi = gi,j ⊗ gj = Γsij gs ⊗ gj . (5.2.27)
but most uses are in continuum mechanics and in the 3-dimensional Euclidean vector space E 3 .
The scalar-valued, vector-valued or tensor-valued functions depend on a vector x ∈ V, e.g. a 5.2.5 The Covariant Derivative
vector of position at a point P . In the sections below the following basis are used, the curvilinear
coordinates Θi with the covariant base vectors, Let v = v̂ (x) = v i gi be a vector £eld, see equation (5.2.14), then the gradient of the vector £eld
is given by
∂x ¡ ¢
gi = = x,i , (5.2.20) grad v = grad v̂ (x) = grad v i gi = gi ⊗ grad v i + v i grad gi . (5.2.28)
∂Θi

and the Cartesian coordinates xi = xi with the orthonormal basis The gradient of a scalar-valued vector function α (x) is de£ned by

∂α i
ei = e i . (5.2.21) grad α = g = α,i gi , (5.2.29)
∂Θi
TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003 TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003
146 Chapter 5. Vector and Tensor Analysis 5.2. Derivatives and Operators of Fields 147

than the gradient of the contravariant coef£cients v i (x) in the £rst term of equation (5.2.28) is This de£nition implies another notation for the gradient in a 3-dimensional Cartesian basis of the
given by Euclidean vector space,
grad v i = v,k
i k
g . (5.2.30) grad α = ∇α. (5.2.39)
Equation (5.2.30) in (5.2.28), and together with equation (5.2.25) the complete gradient of a
vector £eld could be given by Let v = v̂ (x) be a vector £eld in a space with the Cartesian basis e i ,

i k
grad v = gi ⊗ v,k g + v i Γi , (5.2.31) v = v̂ (x) = v i ei = vi ei = vi ei , with vi = vi (x1 , x2 , x3 ) , (5.2.40)

and £nally with equation (5.2.23), than the gradient of the vector £eld in a Cartesian basis is given with the relation of equation
i k
(5.2.14) by
grad v = v,k gi ⊗g + v i Γsik gs ⊗g .k
(5.2.32)
grad v = grad v̂ (x) = grad (vi ei ) = ei ⊗ grad vi + vi grad ei . (5.2.41)
The dummy indices i and s are changed like this,
Computing the second term of equation (5.2.41) implies, that all derivatives of base vectors w.r.t.
i ⇒ s , and s ⇒ i. the vector of position x, see the de£nition (5.2.38), are equal to zero,
Rewritting equation (5.2.32), and factor out the dyadic product, implies ∂ (ei ) ∂ (ei ) ∂ (ei ) ∂ (· · · )
¡ i ¢¡ ¢ grad ei = (ei ),j ej = e1 + e2 + e3 = ei = 0. (5.2.42)
grad v = v,k + v s Γisk gi ⊗ gk . (5.2.33) ∂x1 ∂x2 ∂x3 ∂xi

The term Than equation (5.2.41) simpli£es to


v i |k = v,k
i
+ v s Γisk , (5.2.34)
is called the covariant derivative of the coef£cient v i w.r.t. the coordinate Θk and the basis gi . grad v = ei ⊗ grad vi + 0 = ei ⊗ vi,k ek , (5.2.43)
Than the gradient of a vector £eld is given by
¡ ¢ and £nally the gradient of a vector £eld in a 3-dimensional Cartesian basis of the Euclidean
grad v = v i |k gi ⊗ gk , with v i |k = v,ki
+ v s Γisk . (5.2.35) vector space is given by
grad v = vi,k (ei ⊗ ek ) . (5.2.44)
5.2.6 The Gradient in a 3-dim. Cartesian Basis of Euclidean Space
Let α be a scalar £eld in a space with the Cartesian basis e i , 5.2.7 Divergence of Vector and Tensor Fields
α = α̂ (x) , with x = x i ei = x i e i = x i e i , (5.2.36)
The divergence of a vector £eld is de£ned by
than the gradient of the scalar £eld α in a Cartesian basis is given by
div v = tr (grad v) = grad v : 1, (5.2.45)
∂α
grad α = α,i ei , and α,i = . (5.2.37)
∂xi and must be a scalar quantity, because the gradient of a vector is a second order tensor, and the
The nabla operator is introduced by trace of a second order de£nes a scalar quantity. The divergence of a tensor £eld is de£ned by

∂ (. . .) ∂ (. . .) ∂ (. . .)
∇ = (. . .),i ei = e1 + e2 + e3 , div T = grad (T) : 1 , and ∀T ∈ V ⊗ V (5.2.46)
∂x1 ∂x2 ∂x3
and £nally de£ned by and must be a vector-valued quantity, because the scalar product of the second order unit tensor
1 and the third order tensor grad (T) is a vector-valued quantity . Another possible de£nition is
∂ (· · · ) given by
∇= ei . (5.2.38) ¡ ¢ ¡ ¢
∂xi a · div T = div TT a = grad TT a : 1. (5.2.47)

TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003 TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003
148 Chapter 5. Vector and Tensor Analysis 5.2. Derivatives and Operators of Fields 149

For an arbitrary scalar £eld α ∈ R, and arbitrary vector £elds v, w ∈ V, and arbitrary tensor The divergence of a second order tensor is a vector, see also equation (5.2.52),
£elds S, T ∈ V ⊗ V the following identities hold, ¡ ¢
div T = div T ik gi ⊗ gk
£ ¡ ik ¢¤
div (αv) = v · grad α + α div v, (5.2.48) = grad T gi gk + [div gk ] T ik gi , (5.2.61)
div (αT) = T grad α + α div T, (5.2.49) and with equation (5.2.14),
div (grad v)T = grad (div v) , (5.2.50) £ ¤
div T = gi ⊗ grad T ik + T ik grad gi gk + grad gk · 1T ik gi
div (v × w) = (grad v × w) : 1 − (grad w × v) : 1 (5.2.51) £ ¤ ¡ ¢ ¡ ¢
= gi ⊗ T,jik gj + T ik Γsij gs ⊗ gj gk + Γskj gs ⊗ gj : δlr gl ⊗ gr T ik gi
= w · rot v − v · rot w,
= gi T,jik δkj + T ik Γsij gs δkj + Γskj δlr δsl δjr T ik gi
div (v ⊗ w) = (grad v) w + (div w) v, (5.2.52)
¡ ¢
div (Tv) = div TT · v + TT : grad v, (5.2.53) = T,kik gi + T ik Γsik gs + Γjkj T ik gi ,
div (v × T) = v × div T + grad v × T, (5.2.54) and £nally after renaming the dummy indices,
div (TS) = (grad T) S + T div S. (5.2.55) ¡ ¢
div T = T,lkl + T lm Γklm + T km Γlml gk . (5.2.62)
The term T kl |l de£ned by
5.2.8 Index Notation of the Divergence of Vector Fields
T kl |l = T,lkl + T lm Γklm + T km Γlml , (5.2.63)
Let v = v̂ (x) = v i gi ∈ V be a vector £eld, with v i = v̂ i (Θ1 , Θ2 , Θ3 , . . . , Θn ), than a basis is
given by is the so called covariant derivative of the tensor coef£cients w.r.t. the coordinates Θ l , than the
∂x divergence is given by
gi = x,i = . (5.2.56) div T = T kl |l gk . (5.2.64)
∂Θi
The de£niton of the divergence (5.2.45) of a vector £eld with using the index notation of the Other representations are possible, e.g. a mixed formulation is given by
gradient (5.2.35) implies div T = T kl |k gl , (5.2.65)
£ ¡ ¢¤
div v = grad v : 1 = v i |k gi ⊗ gk : [δsr (gr ⊗ gs )] and with the covariant derivative T kl |k ,
¡ l ¢
=v i
|k δsr gir g ks i
= v |k gis g ks
=v i
|k δik , T kl |k = Tl,k − Tnk Γnlk + Tln Γkkn (5.2.66)
div v = v i |i . (5.2.57)
5.2.10 The Divergence in a 3-dim. Cartesian Basis in Euclidean Space
The divergence of a vector £eld is a scalar quantity and an invariant.
A vector £eld v in a Cartesian basis e i ∈ E3 is represented by equation (5.2.40) and its gradient
by equation (5.2.41). The divergence de£ned by (5.2.45) rewritten with using the de£nition of
5.2.9 Index Notation of the Divergence of Tensor Fields the gradient of a vector £eld in a Cartesian basis (5.2.44) is given by
Let T be a tensor, given by div v = grad v : 1 = (vi,k ei ⊗ ek ) : (δrs er ⊗ es )
= vi,k δrs δir δks = vi,k δik
T = T̂ (x) = T ik (gi ⊗ gk ) ∈ V ⊗ V, (5.2.58)
div v = vi,i
and The divergence of a vector £eld in the 3-dimensional Euclidean space with a Cartesian basis E 3
¡ ¢ is a scalar invariant, and is given by
T ik = T̂ ik Θ1 , Θ2 , Θ3 , . . . , Θn , (5.2.59) div v = vi,i , (5.2.67)
or in its complete description by
and with equation (5.2.47) the divergence of this tensor is given by
∂v1 ∂v2 ∂v3
div v = + + . (5.2.68)
div T = grad T : 1 , and 1 = δsr gr ⊗ gs . (5.2.60) ∂x1 ∂x2 ∂x3

TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003 TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003
150 Chapter 5. Vector and Tensor Analysis 5.2. Derivatives and Operators of Fields 151

5.2.11 Rotation or Curl of Vector and Tensor Fields and ∆v is a vector-valued quantity. The de£nition of the laplacian of a tensor £eld ∆T is given
3 by
The rotation of a vector £eld v (x) is de£ned with the fundamental tensor E by
∆T = (grad grad T) 1, (5.2.86)
3
rot v = E (grad v)T . (5.2.69) and ∆T is a tensor-valued quantity. For an arbitrary vector £eld v ∈ V, and an arbitrary tensor
In English textbooks in most cases the curl operator curl instead of the rotation operator rot is £eld T ∈ V ⊗ V the following identities hold,
used, h i
rot v = curl v. (5.2.70) div grad v ± (grad v)T = ∆v ± grad div v, (5.2.87)

The rotation, resp. curl, of a vector £eld rot v (x) or curl v (x) is a unique vector £eld. Some- rot rot v = grad div v − ∆v, (5.2.88)
times another de£nition of the rotation of a vector £eld is given by ∆ tr T = tr ∆T (5.2.89)
rot v = div (1 × v) = 1 × grad v. (5.2.71) rot rot T = −∆T + grad div T + (grad div T)T ,
− grad grad tr T + 1 [∆ (tr T) − div div T] . (5.2.90)
For an arbitrary scalar £eld α ∈ R, and arbitrary vector £elds v, w ∈ V, and an arbitrary tensor
£eld T ∈ V ⊗ V the following identities hold, Finally, if the tensor £eld T is symmetric and de£ned by T = S − 1 tr S, with the symmetric
part given by S, then the following identity holds,
rot grad α = 0, (5.2.72)
div rot v = 0, (5.2.73) rot rot T = −∆S + grad div S + (grad div S)T − 1 div div S. (5.2.91)
rot grad v = 0, (5.2.74)
rot (grad v)T = grad rot v, (5.2.75)
rot (αv) = α rot v + grad α × v, (5.2.76)
rot (v × w) = v div w − grad wv − w div v + grad vw
= div (v ⊗ w − w ⊗ v) , (5.2.77)
div rot T = rot div TT , (5.2.78)
T
div (rot T) = 0, (5.2.79)
(rot rot T)T = rot rot TT , (5.2.80)
T
rot (α1) = − [rot (α1)] , (5.2.81)
rot (Tv) = rot TT v + (grad v)T × T. (5.2.82)
Also important to notice is that, if the tensor £eld T is symmetric, then the following identity
holds,
rot T : 1 = 0. (5.2.83)

5.2.12 Laplacian of a Field


The laplacian of a scalar £eld ∆α or the Laplace operator of a scalar £eld is de£ned by
∆α = grad grad α : 1. (5.2.84)
The laplacian of a scalar £eld ∆α is a scalar quantity. The laplacian of a vector £eld ∆v is
de£ned by
∆v = (grad grad v) 1, (5.2.85)

TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003 TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003
152 Chapter 5. Vector and Tensor Analysis 5.3. Integral Theorems 153

5.3 Integral Theorems resp.


√ √
5.3.1 De£nitions dV = gg3 · g3 dΘ1 dΘ2 dΘ3 = gdΘ1 dΘ2 dΘ3 . (5.3.4)

The surface integral of a tensor product of a vector £eld u (x) and an area vector da should Let ũi be de£ned as the mean value of the vector £eld u in the area element i, for example for
be transformed into a volume integral. The volume element dV with the surface dA is given at i = 1 a Taylor series is derived like this,
a point P by the position vector x in the Euclidean space E3 . The surface dA of the volume
∂u dΘ2 ∂u dΘ3
element dV is described by the six surface elements represented by the aera vectors da 1 , . . ., ũ1 = u + 2
+ . (5.3.5)
and da6 . ∂Θ 2 ∂Θ3 2
The Taylor series for the area element i = 4 is given by
3 2 ∂ ũ1 1
Θ Θ
ũ4 = ũ1 + 1
dΘ ,
∂Θh i
∂u dΘ2 ∂u dΘ3
dA ∂ u + ∂Θ 2 2 + ∂Θ3 2
: ũ4 = ũ1 + dΘ1 ,
6 ∂Θ1
5 ũ4 =
4 and £nally
y ∂ ũ1
ũ1 + ∂Θ 1 dΘ
1

ũ1 dV
± u ∂ 2 u dΘ2 1 ∂ 2 u dΘ3 1
Θ1 ũ4 = ũ1 + dΘ1 + dΘ + dΘ . (5.3.6)
1 ∂Θ 1 1 2
∂Θ ∂Θ 2 ∂Θ1 ∂Θ3 2
da1 3 2
) 3 : Considering only the linear terms for i = 4, 5, 6 implies
dΘ2 g2
3
dΘ g3 dΘ1 g1 ∂u
ũ4 = ũ1 + dΘ1 , (5.3.7)
∂Θ1
3
∂u
e3 6 P ũ5 = ũ2 + dΘ2 , (5.3.8)
x ∂Θ2

- s
u and
O e2 ∂u
ª
e1 ũ6 = ũ3 + dΘ3 . (5.3.9)
∂Θ3
Figure 5.6: The volume element dV with the surface dA. The surface integral is approximated by the sum of the six area elements,
Z 6
X
√ u ⊗ da = ũi ⊗ dai . (5.3.10)
da1 = dΘ2 dΘ3 g3 × g2 = −dΘ2 dΘ3 gg1 = −da4 , (5.3.1) dA
i=1
3√
da2 = dΘ dΘ g1 × g3 = −dΘ dΘ gg2 = −da5 ,
1 3 1
(5.3.2)
This equation is rewritten with all six terms,
and 6
X
1 2 1 2√ 3 ũi ⊗ dai = ũ1 ⊗ da1 + ũ2 ⊗ da2 + ũ3 ⊗ da3 + ũ4 ⊗ da4 + ũ5 ⊗ da5 + ũ6 ⊗ da6 ,
da3 = dΘ dΘ g2 × g1 = −dΘ dΘ gg = −da6 . (5.3.3)
i=1
The volume of the volume element dV is given by the scalar triple product of the tangential
vectors gi associated to the curvilinear coordinates Θi at the point P , inserting equations (5.3.1)-(5.3.3)

dV = (g1 × g2 ) · g3 dΘ1 dΘ2 dΘ3 , = (ũ1 − ũ4 ) ⊗ da1 + (ũ2 − ũ5 ) ⊗ da2 + (ũ3 − ũ6 ) ⊗ da3 ,

TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003 TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003
154 Chapter 5. Vector and Tensor Analysis 5.3. Integral Theorems 155

and £nally with equations (5.3.7)-(5.3.9), volumes are summed over, then the dyadic products of the inner surfaces vanish, because every
dyadic product appears twice. Every area vector da appears once with the normal direction n
6 and once with the opposite direction −n. The vector £eld u is by de£nition continuous, i.e. for
X ∂u ∂u ∂u
ũi ⊗ dai = − dΘ1 ⊗ da1 − dΘ2 ⊗ da2 − dΘ3 ⊗ da3 . (5.3.11) each of the two sides of an inner surface the value of the vector £eld is equal. In order to solve
i=1
∂Θ1 ∂Θ2 ∂Θ3
the whole problem it is only necessary to sum (to integrate) the whole outer surface dA with the
Equations (5.3.1)-(5.3.3) inserted in (5.3.11) implies normal unit vector n. If the summation over all subvolumes dV with the surfaces da is rewritten
as an integral, like in equation (5.3.13), then for the whole volume and surface the following
6 µ ¶ relation holds,
∂u ∂u ∂u 3 √
X nV Z
1 2
gdΘ1 dΘ2 dΘ3 ,
Z Z
ũi ⊗ dai = ⊗ g + ⊗ g + ⊗ g X
i=1
∂Θ1 ∂Θ2 ∂Θ3 u ⊗ da = u ⊗ da = grad u dV , (5.3.14)
i=1 dA A V
with the summation convention i = 1, . . . , 3, and with da = dan
nV Z
X Z Z
µ ¶
∂u i u ⊗ n da = u ⊗ n da = grad u dV . (5.3.15)
= ⊗ g dV ,
∂Θi i=1
dA A V

With this integral theorems it would be easy to develop integral theorems for scalar £elds, vector
and £nally with the de£nition of the gradient
£elds and tensor £elds.
6
X
ũi ⊗ dai = grad u dV . (5.3.12) 5.3.2 Gauss’s Theorem for a Vector Field
i=1
The Gauss’s theorem is de£ned by
Comparing this result with equation (5.3.10) yields Z Z
u · n da = div u dV . (5.3.16)
Z 6 µ ¶
X ∂u A V
u ⊗ da = ũi ⊗ dai = i
⊗ gi dV = grad u dV . (5.3.13)
i=1
∂Θ Proof. Equation (5.3.15) is multiplied scalar with the unit tensor 1 from the left-hand side,
dA Z Z
Equation (5.3.13) holds for every subvolume dV with the surface dA. If the terms of the sub- 1 : u ⊗ n da = 1 : grad u dV , (5.3.17)
A V

with the mixed formulation of the unit tensor 1 = gj ⊗ gj ,


¡ ¢ ¡ ¢
1 : u ⊗ n = g j ⊗ g j : uk g k ⊗ n i g i
= uk ni gjk δij = uj nj ,
1 : u ⊗ n = u · n, (5.3.18)
and the scalar product of the unit tensor and the gradient of the vector £eld
1 : grad u = tr (grad u) ,
1 : grad u = div u, (5.3.19)
Finally inserting equations (5.3.18) and (5.3.19) in (5.3.17) implies
Z Z Z Z
1 : u ⊗ n da = u · n da = 1 : grad u dV = div u dV . (5.3.20)
Figure 5.7: The Volume, the surface and the subvolumes of a body. A A V V

TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003 TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003
156 Chapter 5. Vector and Tensor Analysis 5.3. Integral Theorems 157

5.3.3 Divergence Theorem for a Tensor Field It is possible to use Gauss’s theorem (5.3.16), because αa is a vector, this implies
Z Z
The divergence theorem is de£ned by
Z Z a · αn da = div (αa) dV . (5.3.24)
A A
T · n da = div T dV . (5.3.21)
A V Using the identity (5.2.48) and the vector a being constant yields
Proof. If the vector a is constant, then this implies div (αa) = a grad α + α div a = a grad α + 0 = a grad α. (5.3.25)
Z Z Z Z
¡ T ¢
a · T · n da = a · T · n da = n · TT · a da = T · a · n da. (5.3.22) Inserting relation (5.3.25) in equation (5.3.24) implies
A A A A Z Z
a · αn da = a · grad α dV ,
With TT · a = u being a vector it is possible to use Gauss’s theorem (5.3.16), this implies
A V
Z Z  
¡ T ¢ Z Z
a · T · n da = T · a · n da
a ·  αn da − grad α dV  = 0,
A
ZA A V
¡ ¢
= div TT · a dV ,
and £nally the identity
V
Z Z
with equation (5.2.47) and the vector a being constant,
αn da = grad α dV .
¡ ¢
div TT a = (div T) a + T grad a = div Ta + 0 = a div T, A V
 
Z Z
a ·  T · n da − div T dV  = 0,
A V
5.3.5 Integral of a Cross Product or Stokes’s Theorem
and £nally The Stoke’s theorem for the cross product of a vector £eld u and its normal vector n is de£ned
Z Z by Z Z
T · n da = div T dV . n × u da = rot u dV .. (5.3.26)
A V
A V

Proof. Let the vector a be constant,


Z Z Z
5.3.4 Integral Theorem for a Scalar Field a · n × u da = a · (n × u) da = (u × a) · n da,
A A V
The integral theorem for a scalar £eld α is de£ned by
Z Z Z with the cross product u × a being a vector it is possible to use the Gauss’s theorem (5.3.16)
α da = αn da = grad α dV . (5.3.23) Z Z
A A V a · n × u da = div (u × a) da. (5.3.27)
Proof. If the vector a is constant, then the following condition holds A V

Z Z The identity (5.2.51) with the vector a being constant implies


a · αn da = αa · n da.
A A
div (u × a) = a rot u − u rot a = a rot u − 0 = a rot u, (5.3.28)

TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003 TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003
158 Chapter 5. Vector and Tensor Analysis

inserting relation (5.3.28) in equation (5.3.27) yields


Z Z
a · n × u da = a rot u dV ,
A V

and £nally
Z Z Chapter 6
n × u da = rot u dV .
A V

Exercises
5.3.6 Another Interpretation of Gauss’s Theorem
The Gauss’s theorem, see equation (5.3.16), could be established by inverting the de£nition of
the divergence. Let u (x) be a continuous and differentiable vector £eld. The volume integral is
approximated by a limit, see also (5.3.10), where the whole surface is approximated by surface
elements dai , Z X
u (x) dV = lim ũi ∆Vi . (5.3.29)
∆Vi →0
V i

Let ũi be the mean value in a subvolume ∆Vi . The volume integral of the divergence of the
vector £eld u with inserting the relation of equation (5.3.29) is given by
Z X³ ´
div u dV = lim ^
div ui ∆Vi . (5.3.30)
∆Vi →0
V i

The divergence (source density) is de£ned by


R 
u · da
∆a
div u = lim  . (5.3.31)
∆V →0 ∆V

The mean value ũi in equation (5.3.30) is replaced by the identity of equation (5.3.31),
R
³ ´ ui · da
^
div ui =
∆ai
,
∆Vi R
u · da
Z X ∆ai i
div u dV = lim .
∆Vi →0
i
∆Vi
V

and £nally with the suammation of all subvolumes, like at the begining of this section,
Z Z Z
div u dV = u · da = u · nda. (5.3.32)
V A A

TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003 159
160 Chapter 6. Exercises Chapter Table of Contents 161

6.5.3 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196


Chapter Table of Contents 6.6 The Moving Trihedron, Derivatives and Space Curves . . . . . . . . . . . . 198
6.6.1 The Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
6.1 Application of Matrix Calculus on Bars and Plane Trusses . . . . . . . . . 162
6.6.2 The Base vectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
6.1.1 A Simple Statically Determinate Plane Truss . . . . . . . . . . . . . . 162
6.6.3 The Curvature and the Torsion . . . . . . . . . . . . . . . . . . . . . . 201
6.1.2 A Simple Statically Indeterminate Plane Truss . . . . . . . . . . . . . 164
6.6.4 The Christoffel Symbols . . . . . . . . . . . . . . . . . . . . . . . . . 202
6.1.3 Basisc Relations for bars in a Local Coordinate System . . . . . . . . . 165
6.6.5 Forces and Moments at an Arbitrary sectional area . . . . . . . . . . . 203
6.1.4 Basic Relations for bars in a Global Coordinate System . . . . . . . . . 167
6.6.6 Forces and Moments for the Given Load . . . . . . . . . . . . . . . . . 207
6.1.5 Assembling the Global Stiffness Matrix . . . . . . . . . . . . . . . . . 168
6.7 Tensors, Stresses and Cylindrical Coordinates . . . . . . . . . . . . . . . . 210
6.1.6 Computing the Displacements . . . . . . . . . . . . . . . . . . . . . . 170
6.7.1 The Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
6.1.7 Computing the Forces in the bars . . . . . . . . . . . . . . . . . . . . 171
6.7.2 Co- and Contravariant Base Vectors . . . . . . . . . . . . . . . . . . . 212
6.1.8 The Principle of Virtual Work . . . . . . . . . . . . . . . . . . . . . . 172
6.7.3 Coef£cients of the Various Stress Tensors . . . . . . . . . . . . . . . . 213
6.2 Calculating a Structure with the Eigenvalue Problem . . . . . . . . . . . . 174
6.7.4 Physical Components of the Contravariant Stress Tensor . . . . . . . . 215
6.2.1 The Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
6.7.5 Invariants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
6.2.2 The Equilibrium Conditions after the Excursion . . . . . . . . . . . . . 175
6.7.6 Principal Stress and Principal Directions . . . . . . . . . . . . . . . . . 221
6.2.3 Transformation into a Special Eigenvalue Problem . . . . . . . . . . . 177
6.7.7 Deformation Energy . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
6.2.4 Solving the Special Eigenvalue Problem . . . . . . . . . . . . . . . . . 178
6.7.8 Normal and Shear Stress . . . . . . . . . . . . . . . . . . . . . . . . . 224
6.2.5 Orthogonal Vectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
6.2.6 Transformation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
6.3 Fundamentals of Tensors in Index Notation . . . . . . . . . . . . . . . . . . 182
6.3.1 The Coef£cient Matrices of Tensors . . . . . . . . . . . . . . . . . . . 182
6.3.2 The Kronecker Delta and the Trace of a Matrix . . . . . . . . . . . . . 183
6.3.3 Raising and Lowering of an Index . . . . . . . . . . . . . . . . . . . . 184
6.3.4 Permutation Symbols . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
6.3.5 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
6.3.6 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
6.4 Various Products of Second Order Tensors . . . . . . . . . . . . . . . . . . 190
6.4.1 The Product of a Second Order Tensor and a Vector . . . . . . . . . . . 190
6.4.2 The Tensor Product of Two Second Order Tensors . . . . . . . . . . . 190
6.4.3 The Scalar Product of Two Second Order Tensors . . . . . . . . . . . . 190
6.4.4 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
6.4.5 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
6.5 Deformation Mappings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
6.5.1 Tensors of the Tangent Mappings . . . . . . . . . . . . . . . . . . . . 194
6.5.2 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195

TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003 TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003
162 Chapter 6. Exercises 6.1. Application of Matrix Calculus on Bars and Plane Trusses 163

6.1 Application of Matrix Calculus on Bars and Plane Trusses see also free-body diagram (6.2). This relations imply
F
6.1.1 A Simple Statically Determinate Plane Truss S1 = S 2 , and S1 + S2 = , (6.1.3)
cos α
A very simple truss formed by two bars is given like in £gure (6.1), and loaded by an arbitrary
force F in negative y-direction. The discrete values of the various quantities, like the Young’s and £nally

F
S1 = S 2 = . (6.1.4)
2 cos α
3 1 The equilibrium conditions of forces at the node 1 are given in horizontal direction by
6F3y 6F1y
bar II, l, A2 , E2 α α bar I, l, A1 , E1

y 3 -
1 -
2 y F3x
6 α α F1x
6
F
? S2 R
- α = 45o ª S1
x -
x

Figure 6.1: A simple statically determinate plane truss.


Figure 6.3: Free-body diagrams for the nodes 1 and 3.

mdoulus Ei or the sectional area Ai , are of no further interest at the moment. Only the forces in
direction of the bars are to be computed. The equilibrium conditions of forces at the node 2 are X
FH = 0 = F1x − S1 cos α, (6.1.5)
I µ

S2 α α S1 and in vertical direction by


y X
FV = 0 = F1y − S1 sin α, (6.1.6)
6
F see also the right-hand side of the free-body diagram (6.3). The £rst one of this two relations
?
- yields with equation (6.1.4)
x
F cos α 1
F1x = S1 cos α = , and £nally F 1x = F , (6.1.7)
Figure 6.2: Free-body diagram for the node 2. 2 cos α 2

and the second one implies


given in horizontal direction by
X F sin α 1
FH = 0 = −S2 sin α + S1 sin α, (6.1.1) F1y = S1 sin α = , and £nally F 1y = F . (6.1.8)
2 cos α 2
and in vertical direction by The equilibrium conditions of forces at the node 3 are given in horizontal direction by
X X
FV = 0 = −F + S2 cos α + S1 cos α, (6.1.2) FH = 0 = F3x − S2 cos α, (6.1.9)

TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003 TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003
164 Chapter 6. Exercises 6.1. Application of Matrix Calculus on Bars and Plane Trusses 165

and in vertical direction by


6Fy = 2kN
X
FV = 0 = F3y − S2 sin α, (6.1.10)

see also the left-hand side of the free-body diagram (6.3). The £rst one of this two relations SII 4 -
yields with equation (6.1.4) 6 α2
F2y x̃2 Á Fx = 10kN
α3
F cos α 1 y Á SI
F3x = −S2 cos α = − , and £nally F 3x = − F , (6.1.11) + SIII
2 cos α 2 α2 À
6 2 - SII
?
and the second one implies F2x
-
F sin α 1 x
F3y = S2 sin α = , and £nally F 3y = F . (6.1.12)
2 cos α 2
The stresses in normal direction of the bars and the nodal displacements could be computed Figure 6.5: Free-body diagrams for the nodes 2 and 4.
with the relations described in one of the following sections, see also equations (6.1.17)-(6.1.19).
The important thing to notice here is, that there are overall 6 equations to solve, in order to
compute 6 unknown quantities F1x , F1y , F3x , F3y , S1 , and S2 . This is characteristic for a statically nodes! In order to get enough equations for computing all unknown quantities, it is necessary
determinate truss, resp. system, and the other case is discussed in the following section. to use additional equations, like the ones given in equations (6.1.17)-(6.1.19). For example the
equilibrium condition of horizontal forces at node 4 is given by
X
6.1.2 A Simple Statically Indeterminate Plane Truss FH = 0 = Fx − SII cos α2 − SI cos α1 , (6.1.13)

6 and in vertical direction by


Fy = 2kN X
(EA)i = const. axial rigidity 4 FV = 0 = Fy − SIII − SII sin α2 − SI sin α1 . (6.1.14)
-
6
α1 = 30◦ Fx = 10kN The moment equilibrium condition at this point is of no use, because all lines of action cross the
◦ node 4 itself! The equilibrium conditions for every support contain for every node 1-3 two un-
α2 = 60
known reactive forces, one horizontal and one vertical, and one also unknown force in direction
α3 = 90◦ I of the bar, e.g. for the node 2,
III
h = 5, 0m X
II h FH = 0 = F2x + SII cos α2 , (6.1.15)
x̃1 x̃2 x̃3
6 and
y 3 Á
X
6 1 α1 2 α2 3 α3 FV = 0 = F2y + SII sin α2 . (6.1.16)
?

Finally summarizing all possible and useful equations and all unknown quantities implies, that
-
x there are overall 9 unknown quantities but only 8 equations! This result implies, that it is neces-
sary to take another way to solve this problem, than using the equilibrium conditions of forces!
Figure 6.4: A simple statically indeterminate plane truss.
6.1.3 Basisc Relations for bars in a Local Coordinate System
The truss given in the sketch in £gure (6.4) is statically indeterminate, i.e. it is impossible to An arbitrary bar with its local coordinate system x̃, ỹ is given in £gure (6.6), with the nodal forces
compute all the reactive forces just with the equilibrium conditions for the forces at the different fix̃ , and fiỹ at a point i with i = 1, 2 in the local coordiante system. The following relations hold

TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003 TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003
166 Chapter 6. Exercises 6.1. Application of Matrix Calculus on Bars and Plane Trusses 167

The relations between stresses and displacements, see equation (6.1.17), are given by
µ
EA
x̃ S = Aσx̃ = ∆ux̃ , (6.1.23)
I f2ỹ l
µ
2 S inserting equation (6.1.23) in (6.1.21) implies
µ
I ỹ f2x̃
EA T
f̃ = q q u, (6.1.24)
l
bar, with A, E, l
I f1ỹ and £nally resumed as the symmetric local stiffness matrix,
1  
µ 1 0 −1 0
f1x̃ EA 
0 0 0 0
f̃ = K̃ u , with K̃ = . (6.1.25)
l −1 0 1 0
ª S 0 0 0 0

Figure 6.6: An arbitrary bar and its local coordinate system x̃, ỹ.
6.1.4 Basic Relations for bars in a Global Coordinate System
An arbitrary bar with its local coordinate system x̃, ỹ and a global coordinate system x, y is given
in £gure (6.7). At a point i with i = 1, 2 the nodal forces f ix̃ , fiỹ , and the nodal displacements uix̃ ,
in the local coordinate system x̃, ỹ of an arbitrary single bar, see £gure (6.6), uiỹ are de£ned in the local coordiante system. It is also possible to de£ne the nodal forces f ix , fiy ,
and the nodal displacements vix , viy in the global coordiante system. In order to combine more
S
stresses σx̃ = , (6.1.17)
A
dux̃ ∆ux̃ I ỹ
kinematics εx̃ = = , (6.1.18) µ
dx̃ l x̃
material law σx̃ = Eεx̃ . (6.1.19) 6f1y
I u1ỹ
In order to consider the additional relations given above, it is useful to combine the nodal dis- 2
placements and the nodal forces for the two nodes of the bar in the local coordinate system. The
6v1y
nodal displacements in the local coordinate system x̃ are given by I f1ỹ α

∆ux̃ = q T u = −u1x̃ + u2x̃ , (6.1.20)


- -
v1x f1x
and the nodal forces are given by the equilibrium conditions of forces at the nodes of the bar, µ 1
y, vy
f1x̃
f̃ = q S, (6.1.21) 6
µ
u1x̃
with - x, v
x
     
−1 u1x̃ f1x̃
0 u1ỹ  f1ỹ  Figure 6.7: An arbitrary bar in a global coordinate system.
q= 
 , u= 
 , and f̃ =  
 . (6.1.22)
1 u2x̃  f2x̃ 
0 u2ỹ f2ỹ than one bar, it is necessary to transform the quantities given in each local coordinate system into

TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003 TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003
168 Chapter 6. Exercises 6.1. Application of Matrix Calculus on Bars and Plane Trusses 169

one global coordinate system. This transformation of the local vector quantities, like the nodal stiffness matrix K is assembled by summarizing relations like in equation (6.1.28)-(6.1.31) for
load vector f̃ , or the nodal displacement vector u, is given by a multiplication with the so called all used elements I-III, and all given boundary conditions, like the ones given by the supports
∗ at the nodes 1-3,
transformation matrix Q, which is an orthogonal matrix, given by
· ¸ v1x = v2x = v3x = 0, (6.1.32)
∗ ∗ cos α sin α
ã = Q a , with Q= . (6.1.26) v1y = v2y = v3y = 0. (6.1.33)
− sin α cos α
With this conditions the rigid body movement is eliminated from the system of equations, resp.
Because it is necessary to transform the local quantities for more than one node, the transforma- the assembled global stiffness matrix, than only the unknown displacements at node 4 remain,

tion matrix Q is composed by one submatrix Q for every node, v4x , and v4y . (6.1.34)
 
∗  cos α sin α 0 0 This implies, that it is suf£cient to determine the equilibrium conditions only at node 4. The

Q 0  − sin α cos α 0 0 
Q= ∗ =
 0
. (6.1.27) result is, that it is suf£cient to consider only one submatrix K i for every element I-III. For
0 Q 0 cos α sin α
example the complete equilibrium conditions for bar III are given by
0 0 − sin α cos α      
f3x " # v3x 0
The local quantities f̃ , and u in equation (6.1.25) are replaced by the following expressions, f3y  . . . . . . v3y  0
  = ∗   =  . (6.1.35)
f4x 
. . . K 3 v4x
  v4x 
 
v1x f4y III v4y III v4y III
v1y 
u=Qv , with v = 
v2x  ,
 (6.1.28) Finally the equilibrium conditions at node 4 with summarizing the bars i = I-III is given by
v2y 3 · i ¸ · ¸ X 3 h i ·v ¸
X f4x F ∗
  i = x = K i
4x
, (6.1.36)
f1x f4y Fy v4y
i=1 i=1
f1y 
f̃ = Q f , with f =  
 . (6.1.29)
f2x  and in matrix notation given by
f2y
P = K v, (6.1.37)
Inserting this relations in equation (6.1.25) and multiplying with the inverse of the transformation
with the so called compatibility conditions at node 4 given by
matrix from the left-hand side yields in the global coordiante system,
I II III
v4x = v4x = v4x , (6.1.38)
f = Q−1 K̃ Q v = K v, (6.1.30) I II III
v4y = v4y = v4y . (6.1.39)
with the symmetric local stiffness matrix given in the global coordinate system by By using this way of assembling the reduced global stiffness matrix the boundary conditions are
  already implemented in every element stiffness matrix. The second way to assemble the reduced
cos2 α sin α cos α − cos2 α − sin α cos α global stiffness matrix starts with the unreduced global stiffness matrix given by
2
EA  sin α cos α
 sin α − sin α cos α − sin2 α   i     
K= . (6.1.31) f1x v1x 0
l  − cos2 α − sin α cos α cos2 α sin α cos α  i 
f1y
 v1y   0 
   
− sin α cos α − sin2 α sin α cos α sin2 α  i  
f2x  K 11 0 0 K 14  v2x   0 
  
3  i 
X f2y   0 K 22 0 K 24  v2y   0 
 i =   =  , (6.1.40)
6.1.5 Assembling the Global Stiffness Matrix f   0 0 K 33 K 34   v3x   0 
  
i=1  3x 
f i  K 41 K 42 K 43 K v3y   0 
   
There are two different ways of assembling the global stiffness matrix. The £rst way considers  3y 
i 
f4x v4x  Fx 
the boundary conditions at the beginning, the second one considers the boundary conditions not i
until the complete global stiffness matrix for all nodes is assembled. In the £rst way the global f4y v4y Fy

TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003 TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003
170 Chapter 6. Exercises 6.1. Application of Matrix Calculus on Bars and Plane Trusses 171

and in matrix notation given by 6.1.7 Computing the Forces in the bars
K v = P. (6.1.41) The forces in the various bars are computed by solving the relations given by the equation
(6.1.28)-(6.1.31) for each element i, resp. for each bar,
Each element resp. each bar could be described by a combination i, j of the numbers of nodes
used in this special element. For example for the bar (element) III the submatrices K 33 , K 34 , f i = K ivi. (6.1.49)
K 43 , and a part of K, like in equation (6.1.36) are implemented in the unreduced global stiffness For example for the bar III, with α = 90 ◦
, the symmetric local stiffness matrix and the nodal
matrix. After inserting the submatrices K ij for the various elements and considering the bound- displacements are given by
ary conditions given by equations (6.1.32)-(6.1.33) the reduced global stiffness matrix is given    
0 0 0 0 0
by
3 EA  0 1 0 −1  0
K =  , v3 = 
v4x  ,
 (6.1.50)
P = K v, (6.1.42) h 0 0 0 0
0 −1 0 1 v4y
resp.
with sin 90◦ = 1, and cos 90◦ = 0 in equation (6.1.31). The forces in the bars are given in the
¡ √ ¢·
·
Fx
¸ ¸· ¸
EA 3 + 3 1 1 v4x global coordinate system, see equation (6.1.30), by
= , (6.1.43)      
Fy 8h 1 3 v4y 0 0 f3x
4 8
 =  6, 762  = f3y  ,
   
see also equation (6.1.36) and (6.1.37) of the £rst way. But this computed reduced global stiff- f 3 = K 3v3 = √  (6.1.51)
ness matrix is not the desired result, because the nodal displacements and forces are the desired 3+ 3 0   0  f4x 
−8 III −6, 762 III f4y III
quantities.
and in the local coordinate system associated to the bar III, see equation (6.1.29), by
     
6.1.6 Computing the Displacements f3x̃ f3y 6, 762
3  f3ỹ  −f3x   0 
   
The nodal displacements are computed by inverting the relation (6.1.43), f̃ = Q3 f 3 = 
f4x̃  =  f4y  = −6, 762 .
 (6.1.52)

v=K −1
P. (6.1.44) f4ỹ −f4x 0

The inversion of a 2 × 2-matrix is trivial and given by Comparing this result with the relation (6.1.21) implies £nally the force S III in direction of the
¡ √ ¢· ¸ bar,
1 (8h)2 EA 3 + 3 3 −1 SIII = −f3x̃ = f4x̃ = −6, 762kN , (6.1.53)
K −1 = K̂ = ¡ √ ¢2 , (6.1.45)
det K 2 3 + 3 (EA)2 8h −1 1 and for the bars I and II,

and £nally the inverse of the stiffness matrix is given by SI = 8, 56kN , and SII = 5, 17kN . (6.1.54)
· ¸ Comparing this results as a probe with the equilibirum conditions given by the equations (6.1.13)-
4h 3 −1
K −1 = ¡ √ ¢ . (6.1.46) (6.1.14), in horizontal direction,
EA 3 + 3 −1 1 X
FH = 0 = Fx − SII cos α2 − SI cos α1
The load vector P at the node 4, see £gure (6.4), is given by √
1 3
+ 6, 762 · 1 = −4, 7 · 10−3 ≈ 0,
· ¸
10 = 2, 0 − 8, 56 · − 5, 17 · (6.1.55)
P = , (6.1.47) 2 2
2
and in vertical direction,
and by inserting equations (6.1.46), and (6.1.47) in relation (6.1.44) the nodal displacements at X
node 4 are given by FV = 0 = Fy − SIII − SII sin α2 − SI sin α1
· ¸
v 4h
· ¸
28 √
v = 4x = √ ¢ . (6.1.48) 3 1
v4y
¡
EA 3 + 3 −8 = 10, 0 − 8, 56 · − 5, 17 · + 6, 762 · 0 = 1, 8 · 10−3 ≈ 0. (6.1.56)
2 2

TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003 TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003
172 Chapter 6. Exercises 6.1. Application of Matrix Calculus on Bars and Plane Trusses 173

6.1.8 The Principle of Virtual Work It is easy to see, that the left-hand side is very similar to the stiffness matrices, and that the right-
hand side is very similar to the load vectors in equations (6.1.36), and (6.1.37), or (6.1.42), and
This £nal section will give a small outlook on the use of the matrix calculus described above. The
(6.1.43). This simple example shows the close relations between the principle of virtual works,
virtual work 1 for a bar under a constant line load, or also called the weak form of equilibrium,
the £nite element methods, and the matrix calculus described above.
is given by Z Z
δW = − N δεx̃ dx̃ + pδux̃ dx̃ = 0. (6.1.57)

The force in normal direction of a bar, see also equations (6.1.17)-(6.1.19), is given by
N = EAεx̃ . (6.1.58)
With this relation the equation (6.1.57) is rewritten,
Z Z
δW = − εx̃T EAδεx̃ dx̃ + pδux̃ dx̃ = 0. (6.1.59)

The vectors given by the equations (6.1.22), these are the various quantities w.r.t. the local
variable x̃ in equation (6.1.57) could be described by
displacement ux̃ = qT û, (6.1.60)
strain u,x̃ = εx̃ = qT,x̃ û, (6.1.61)
virtual displacement δux̃ = qT δ û, (6.1.62)
virtual strain δu,x̃ = δεx̃ = qT,x̃ δ û, (6.1.63)
and the constant nodal values given by the vector û. The vectors q are the so called shape
functions, but in this very simple case, they include only constant values, too. In general this
shape functions are dependent of the position vector, for example in this case the local variable
x̃. This are some of the basic assumptions for £nite elements. Inserting the relations given by
(6.1.60)-(6.1.63), in equation (6.1.59) the virtual work in one element, resp. one bar, is given by
Z Z
¡ T ¢T
δW = − q,x̃ û EAqT,x̃ δ ûdx̃ + pqT δ ûdx̃ = 0. (6.1.64)
·Z ¸ ·Z ¸
= −ûT q,x̃ EAqT,x̃ dx̃ δ û + pqT dx̃ δ û = 0. (6.1.65)

These integrals just describe one element, resp. one bar, but if a summation over more elements
is introduced like this,
X3 X3 · µZ ¶ µZ ¶ ¸
¡ ¢T
δW i = − ûi q,x̃ EAqT,x̃ dx̃ δ ûi + pi qT dx̃ δ ûi = 0, (6.1.66)
i=1 i=1

and £nally like this


3 ·
X µZ ¶ ¸ X3 ·µZ ¶ ¸
¡ i ¢T
û q,x̃ EAqT,x̃ dx̃ δ ûi = pi qT dx̃ δ ûi . (6.1.67)
i=1 i=1
1
See also the refresher course on strength of materials.

TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003 TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003
174 Chapter 6. Exercises 6.2. Calculating a Structure with the Eigenvalue Problem 175

6.2 Calculating a Structure with the Eigenvalue Problem 6.2.2 The Equilibrium Conditions after the Excursion
In a £rst step the relations between the variables x 1 , x2 , and the reactive forces FAy , and FBy in
6.2.1 The Problem the supports are solved. For this purpose the equilibrium conditions of moments are established
Establish the homogeneous system of equations for the structure, see sketch (6.8) of rigid bars, for two subsystems, see sketch (6.9). After that in a second step the equilibrium conditions for
in order to determine the critical load Fc = Fcritical ! Assume that the structure is geometrically the whole system are established. The moment equation w.r.t. the node D of the subsystem on
linear, i.e. the angles of excursion ϕ are so small that cos ϕ = 1, and sin ϕ = 0 are good
approximations. The given values are A
-
B ¾
FAx 6 6x1 6 6 Fcritical
1 kN ?
x2
k1 = k , k2 = k
2
, k = 10
cm
, and l = 200cm. y
FAy 3
C0 SI z D 0
?3 FBy
6 2
l SIII 2
l
¾ - ¼ ¾ -

-
x
k1 k2
Figure 6.9: The free-body diagrams of the subsystems left of node C, and right of node D after
the excursion.
EJ = ∞ EJ = ∞ EJ = ∞ ¾
Fcritical the right-hand side of node D after the excursion implies
x1 x2 X 3 2Fc
MD = 0 = FBy · l + Fc · x2 ⇒ FBy = − x2 , (6.2.1)
? ? 2 3l
y
3 3
6
l 2l l
¾
2
-¾ -¾
2
- and the moment equation w.r.t. the node C of the subsystem on the left-hand side of node C after
the excursion implies with the following relation (6.2.3),
-
x X 3 2FAx 2Fc
MC = 0 = FAy · l + FAx · x1 ⇒ FAy = − x1 = − x1 . (6.2.2)
2 3l 3l
Figure 6.8: The given structure of rigid bars.
At any time, and any possible excursion, or for any possible load F c the complete system, cf.
(6.10), must satisfy the equilibrium conditions. The equilibrium condition of forces in horizontal
direction, cf. (6.10), after the excursion is given by
• Rewrite the system of equations so that the general eigenvalue problem for the critical load X
Fc is given by FH = 0 = FAx − Fc ⇒ FAx = Fc . (6.2.3)
A x = Fc B x.
The moment equation w.r.t. the node A for the complete system implies
• Transform the general eigenvalue problem into a special eigenvalue problem. X 7 3
MA = 0 = FBy · 5l + k2 x2 · l + k1 x1 · l, (6.2.4)
• Calculate the eigenvalues, i.e. the critical loads Fc , and the associated eigenvectors. 2 2

• Check if the eigenvectors are orthogonal to each other. with (6.2.1) and k2 = 12 k

• Transform the equation system in such a way, that it is possible to compute the Rayleigh 2Fc 7 3
quotient. What quantity could be estimated with the Rayleigh quotient? 0=− x2 · 5l + kx2 · l + kx1 · l,
3l 4 2

TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003 TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003
176 Chapter 6. Exercises 6.2. Calculating a Structure with the Eigenvalue Problem 177

6.2.3 Transformation into a Special Eigenvalue Problem


6k1 x1 k 2 x2
A 6 B ¾ In order to solve this general eigenvalue problem with the aid of the characteristic equation it is
-
6 6 necessary to transform the general eigenvalue problem into a special eigenvalue problem. Thus
FAx 6 ?x1 6 Fcritical
the equation (6.2.9) is multiplied with the inverse of matrix B from the left-hand side,
C x2
y D
FAy 3
l ?3 l FBy B −1 · | A x = Fc · B x, (6.2.10)
6 ¾
2

2l -¾
2
- B −1 A x = Fc B −1 B x,

- after this multiplication both terms are rewritten on one side and the vector x is factored out,
x
0 = B −1 A x − Fc 1 x,
Figure 6.10: The free-body diagram of the complete structure after the excursion.
and £nally the special eigenvalue problem is given by
0 = (C x − 1 Fc ) x , with C = B −1 A. (6.2.11)
and £nally The inverse of matrix B is assumed by
3 7 10 Fc ·
a b
¸
kx1 + kx2 = x2 . (6.2.5) B −1 = , (6.2.12)
2 4 3 l c d
The equilibrium of forces in vertical direction implies and the following relations must hold
· ¸· 10
¸ · ¸
X a b 0 1 0
FV = 0 = FAy + FBy + k2 x2 + k1 x1 , (6.2.6) B −1 B = 1 , resp. 2
3l
2 = . (6.2.13)
c d 3l 3l
0 1
This simple inversion implies
with (6.2.1), (6.2.2), k1 = k, and k2 = 12 k,
3
b = l, (6.2.14)
2
2Fc 2Fc 1 d = 0, (6.2.15)
0=− x1 − x2 + kx2 + kx1 ,
3l 3l 2 10 2 1 3
a + b = 0 ⇒ a = − b ⇒ a = − l, (6.2.16)
and £nally 3l 3l 5 10
10 2 3
1 2 Fc c + d = 1 ⇒ c = − l, (6.2.17)
kx1 + kx2 = (x1 + x2 ) . (6.2.7) 3l 3l 10
2 3 l
and £nally the inverse B −1 is given by
The relations (6.2.5), and (6.2.7) are combined in a system of equations, given by · 3 ¸
− l 3l
B −1 = 310 2 . (6.2.18)
·3 7
¸· ¸ · 10
¸· ¸
10
l 0
2
k 4
k x1 0 3l
x1
1 = Fc 2 2 , (6.2.8) The matrix C for the special eigenvalue problem is computed by the multiplication of the two
k 2
k x2 3l 3l
x2
2 × 2-matrices B −1 , and A like this,
· 3 ¸· ¸ · 21 ¸
or in matrix notation − l 3 l 32 k 74 k 9
kl 40 kl
C = B −1 A = 310 2 1 = 209 21 ,
10
l 0 k 2k 20
kl 40 kl
A · x = Fc · B · x. (6.2.9) and £nally the matrix C for the special eigenvalue problem is given by
· ¸
This equation system is a general eigenvalue problem, with the F ci being the eigenvalues, and 3 14 3
C = kl . (6.2.19)
the eigenvectors xi0 . 40 6 7

TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003 TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003
178 Chapter 6. Exercises 6.2. Calculating a Structure with the Eigenvalue Problem 179

6.2.4 Solving the Special Eigenvalue Problem It is possible to choose for each eigenvalue Fci the £rst component of of the associated eigenvec-
tor like this,
In order to solve the special eigenvalue problem, the characteristic equation is set up by comput-
xi01 = 1, (6.2.27)
ing the determinant of equation (6.2.11),
and after this to compute the second components of the eigenvectors,
det (C − Fc 1) = 0, (6.2.20)
C11 − Fci i C11 − Fci
xi02 = − x01 , resp. xi02 = − , (6.2.28)
resp. in complete notation, C12 C12
· 21 ¸ or
9
20
kl − Fc 40
kl
det 9 21 = 0. (6.2.21) C21 C21
20
kl 40
kl − Fc xi02 = − xi , resp. xi02 = − . (6.2.29)
C22 − Fci 01 C22 − Fci
Computing the determinant yields
Inserting the £rst eigenvalue F c1 in equation (6.2.28) implies the second component x102 of the
µ ¶µ ¶
21 21 81 2 2 £rst eigenvector,
kl − Fc kl − Fc − k l = 0,
20 40 800 21
kl − 65 kl 2
441 2 2 21 21 81 2 2 x102 = − 20 9 ⇒ x102 = , (6.2.30)
k l − klFc − klFc + Fc2 − k l = 0, 40
kl 3
800 20 40 800
and £nally implies the quadratic equation, and for the second eigenvalue Fc2 the second component x202 of the second eigenvector is given
by
360 2 2 63
k l − klFc + Fc2 = 0. (6.2.22) 21
kl − 38 kl
800 40 x202 = − 20 9 ⇒ x202 = −3. (6.2.31)
40
kl
Solving this simple quadratic equation is no problem,
sµ ¶ With this results the eigenvectors are £nally given by
2
63 63 k 2 l2 360 2 2 · ¸
1
· ¸
1
Fc1/2 = kl ± − k l , x10 = 2 , and x20 = . (6.2.32)
80 40 4 600 −3
r 3
63 1089
Fc1/2 = kl ± kl,
80 6400 6.2.5 Orthogonal Vectors
63 33
Fc1/2 = kl ± kl, (6.2.23) It is suf£cient to compute the scalar product of two arbitrary vectors, in order to check, if this
80 80
two vectors are orthogonal to each other, i.e.
and £nally implies the two real eigenvalues,
x1 ⊥x2 , resp. x1 · x2 = 0. (6.2.33)
6 3
Fc1 = kl = 2400kN , and Fc2 = kl = 750kN . (6.2.24) In this special case the scalar product of the two eigenvectors is given by
5 8
· ¸ · ¸
The eigenvectors x10 , x20 are computed by inserting the eigenvalues Fc1 , and Fc2 in equation 1 1
x10 · x20 = 2 · x20 = = 1 − 2 = −1 6= 0, (6.2.34)
(6.2.11), given by 3
−3

(C − Fci 1) xi0 = 0, (6.2.25) i.e. the eigenvectors are not orthogonal to each other. The eigenvectors for different eigenvalues
are only orthogonal, if the matrix C of the special eigenvalue problem is symmetric 2 . If the
and in complete notation by matrix of a special eigenvalue problem is symmetric all eigenvalues are real, and all eigenvectors
are orthogonal. In this case all eigenvalues Fc1 , Fc2 are real, but the matrix C is not symmetric,
· ¸· ¸ · ¸
C11 − Fci C12 xi01 0 and for that reason the eigenvectors x10 , x20 are not orthogonal.
= . (6.2.26)
C21 C22 − Fci xi02 0 2
See script, section about matrix eigenvalue problems

TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003 TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003
180 Chapter 6. Exercises 6.2. Calculating a Structure with the Eigenvalue Problem 181

6.2.6 Transformation than the whole equation is multiplied with E −1 from the left-hand side,
¡ −1 ¢
The special eigenvalue problem (6.2.11) includes an anti-symmetric matrix C. In order to deter- E D E −1 − Fc E −1 E T E E −1 E q = 0, (6.2.43)
mine the Rayleigh quotient it is necessary to have a symmetric matrix in the special eigenvalue
problem 3 , and with the relation E = E T for symmetric matrices,
¡ −1 ¢
(C − Fc 1) x = 0, (6.2.35) E D E −1 − Fc 1 1 E q = 0, (6.2.44)
With relation (6.2.41) the £rst term in equation (6.2.44) describes a congruence transformation,
and in detail so that the matrix F keeps symmetric 4 ,
½· 21 9
¸ · ¸¾ · ¸ · ¸ ¡ ¢T
kl kl 1 0 x1 0 F = E −1 D E −1 = E −1 D E −1 , (6.2.45)
20
9
40
21 − Fc = . (6.2.36)
20
kl 40
kl 0 1 x2 0
and with the matrix F given by
The £rst step is to transform the matrix C into a symmetric matrix. Because the matrices are · ¸ · 21 ¸· ¸ " 21 #
9 9√
such simple it is easy to see, that if the second column of the matrix is multiplied with 2, the 1 √ 0 kl kl 1 0 kl kl
F = 1
20
9
20
21 1
√ = 209√
20 2
21 (6.2.46)
matrix becomes symmetric, 0 2 2 20 kl 20
kl 0 2
2 20 2
kl 40
kl.
½· 21 9
¸ · ¸¾ · ¸ · ¸
kl 20 kl 1 0 x1 0 Furthermore a new vector p is de£ned by
20
9 − F c = , (6.2.37)
20
kl 2120
kl 0 2 1
x
2 2
0 · ¸· ¸ · ¸
1 √0 x1 x1

p = Eq = 1 ⇒ p= 1 . (6.2.47)
and in matrix notation with the new de£ned matrices D, and E 2 , and a new vector q 0 2 2 x2 2
2x2
¡ ¢ Finally combining this results implies a special eigenvalue problem with a symmetric matrix F
D − Fc E 2 q = 0. (6.2.38) and a vector p,
(F − Fc 1) p = 0, (6.2.48)
Because the matrix E 2 = E T E is a diagonal and symmetric matrix, the matrices E, and E T are
Computing the characteristic equation like in equations (6.2.20), and (6.2.21) yields
diagonal and symmetric matrices, too,
· ¸ · ¸ det (F − Fc 1) = 0, (6.2.49)
1 0 1 √0
E2 = ⇒ E = ET = . (6.2.39)
0 2 0 2 resp. in complete notation,
" #
−1 21 9√
Because the matrix E is a diagonal and symmetric matrix, the inverse E is a diagonal and 20
kl − Fc 20 2
kl
det 9√ 21 = 0, (6.2.50)
symmetric matrix, too, 20 2
kl 40
kl − Fc
· ¸ · ¸· ¸ · ¸
a b a b 1 √0 1 0 and this £nally implies the same characteristic equation like in (6.2.22),
E −1 = ⇒ E −1 E = = = 1, (6.2.40)
c d c d 0 2 0 1 µ ¶µ ¶
21 21 81 2 2
kl − Fc kl − Fc − k l = 0. (6.2.51)
and this implies £nally 20 40 800
· ¸ Having the same characteristic equation implies, that this problem has the same eigenvalues, i.e.
1 0

¡ ¢T
E −1 = 1 , and E −1 = E −1 . (6.2.41) it is the same eigenvalue problem, but just another notation. With this symmetric eigenvalue
0 2
2 problem it is possible to compute the Rayleigh quotient,
The equation (6.2.38) is again a general eigenvalue problem, but now with a symmetric matrix h i pT p pT F p
D. But in order to compute the Rayleigh quotient, it is necessary to set up a special eigenvalue Λ1 = R pν = ν T ν+1 = ν T ν , with Λ1 ≤ Fc1 , (6.2.52)
p ν pν pν pν
problem again. In the next step the identity 1 = E −1 E is inserted in equation (6.2.38), like this,
¡ ¢ with an approximated vector pν . The Rayleigh quotient Λ1 is a good approximation of a lower
D − Fc E 2 E −1 E q = 0, (6.2.42) bound for the dominant eigenvalue.
3 4
See script, section about matrix eigenvalue problems See script, section about the charateristics of congruence transformations.

TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003 TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003
182 Chapter 6. Exercises 6.3. Fundamentals of Tensors in Index Notation 183

6.3 Fundamentals of Tensors in Index Notation and


¡ ¢T £ ¤ £ ¤T £ ¤
6.3.1 The Coef£cient Matrices of Tensors Aij B kj = Aij B jk = Di.k ⇔ [Aij ] B kj = [Aij ] B jk = Di.k ⇔ A B T = D,

Like in the lectures a tensor could be described by a coef£cient matrix f ij , and a basis given by because a matrix with exchanged columns and rows is the transpose of the matrix. As a short
ϕij . First do not look at the basis, just look at the coef£cient matrices. In this exercise some recap the product of a square matrix and a column matrix, resp. a (column) vector, is given by
of the most important rules to deal with the coef£cient matrices are recapitulated. The Einstein    1  
summation convention implies for the coef£cient matrix A .ji , and an arbitrary basis ϕi.j , A11 A12 A13 u A11 u1 + A12 u2 + A13 u3
A u = v ⇔ A21 A22 A23  u2  = A21 u1 + A22 u2 + A23 u3  ⇔ Aij uj = vi .
3 X
3
X A31 A32 A33 u3 A31 u1 + A32 u2 + A33 u3
A.ji ϕi.j = A.ji ϕi.j ,
i=1 j=1
For example some products of coef£cient matrices in index and matrix notation,
with the coef£cient matrix A given in matrix notation by ¡ ¢T
 .1  Aij B kj Ckl = Aij B jk Ckl = Dil ⇔ A B T C = D,
A1 A.2 A.3 ¡ ¢T ¡ lm ¢T
£ .j ¤ 1 1 A.ji Bkj Cl.k Dml = A.ji (Bjk )T C.lk D = Ei.m ⇔ A B T C T DT = E,
A = Ai = A.1 2 A.2
2 A.3
2
. ¡ ¢ T
A.1
3 A .2
3 A .3
3
Aij B kj uk = Aij B jk uk = vi ⇔ A B T u = v,
ui B ij uj = α ⇔ uT B v = α.
The dot in the superscript index of the expression A.ji shows which one of the indices is the £rst
index, and which one is the second index. This dot represents an empty space, so in this case it is Furthermore it is important to notice, that the dummy indices could be renamed arbitrarily,
easy to see, that the subscript index i is the £rst index, and the superscript index j is the second
index, i.e. the index i is the row index, and the j is the column index of the coef£cient matrix! Aij B kj = Aim B km , or Akl vl = Akj vj , etc.
This is important to know for the multiplication of coef£cient matrices. For example, what is is
the difference between the following products of coef£cient matrices,
6.3.2 The Kronecker Delta and the Trace of a Matrix
Aij B jk = ? , and Aij B kj = ? .
The Kronecker delta is de£ned by
The left-hand side of £gure (6.11) sketches the £rst product, and the right-hand side the second (
j 1 , iff i=j
- δji = δji = δ ij = δij = .
0 , iff i 6= j
Aij B jk =? B 11 B 12 B 13 Aij B kj =? B 11 B 12 B 13
j B 21 B 22 B 23 B 21 B 22 B 23 The Kronecker deltas δij , and δ ij are only de£ned in a Cartesian basis, where they represent the
j B 31 B 32 B 33 j B 31 B 32 B 33 metric coef£cients. The other ones are de£ned in every basis, and in order to use the summation
?
- - convention, it is useful to prefer this notation with super- and subscript indices. Because the
Kronecker delta is the symmetric identity matrix, it is not necessary to differentiate the column
A11 A12 A13 C1.1 C1.2 C1.3 A11 A12 A13 D1.1 D1.2 D1.3 and row indices in index notation. As a rule of thumb, multiplication with a Kronecker delta
A21 A22 A23 C2.1 C2.2 C2.3 A21 A22 A23 D2.1 D2.2 D2.3 substitues an index in the same position,
A31 A32 A33 C3.1 C3.2 C3.3 A31 A32 A33 D3.1 D3.2 D3.3
vk δjk = vj ⇔ v I = v,
j .j
Figure 6.11: Matrix multiplication. A.k
i δk = A i ⇔ A I = A,
A δi δm = Ask
im s k
⇔ A I I = A.
product. This implies the following important relations,
£ ¤ £ ¤ But what is described by
Aij B jk = Ci.k ⇔ [Aij ] B jk = Ci.k ⇔ A B = C, A.lk δil δki = A.ii ?

TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003 TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003
184 Chapter 6. Exercises 6.3. Fundamentals of Tensors in Index Notation 185

This is the sum over all main diagonal elements, or just called the trace of the matrix, Something similar for the coef£cients of a vector x is given by
£ ¤
tr A = A11 + A22 + A33 = tr Aji , x = x i gi = x i g i ⇔ xi gij gj = xi gi ⇔ xi gij = xj .

and because the trace is a scalar quantity, it is independent of the basis, i.e. it is an invariant, The
£ ¤ h i gki = gik are called the covariant metric coef£cients,
tr A = tr Aji = tr Ãji , i.e. Aii = Ãii .
and the
g ki = g ik are called the contravariant metric coef£cients.
For example in the 2-dimensional vector space E2 the Kronecker delta is de£ned by
Finally this implies for the base vectors and for the coef£cients or coordinates of vectors and
g2 O º g2 tensors, too. This implies, raising an index with the contravariant metric coef£cients is given by

gk = g ki gi , xk = g ki xi , and Aik = g ij A.k


j ,
1
g
: and lowering an index with the covariant metric coef£cients is given by

gk = gki gi , xk = gki xi , and Aik = gij Aj.k .


z
g1 The relations between the co- and contravariant metric coef£cients are given by
Figure 6.12: Example of co- and contravariant base vectors in E . 2
gk = g km gm ⇔ gk · gi = g km gm · gi ⇔ δik = g km gmi .

Comparing this with A−1 A = I implies


(
1 i=k 1
gi · gk = δik =
£ ¤ £ ¤ £ ¤
, 1 = g km [gmi ] ⇔ g km = [gmi ]−1 ⇔ det g ik = .
0 i 6= k det [gik ]

than an arbitrary co- and contravariant basis is given by Than the determinants of the co- and contravariant metric coef£cients are de£ned by

g1 · g 2 =0 ⇔ g1 ⊥g2 , £ ¤ 1
det [gik ] = g , and det g ik = .
g2 · g 1 =0 ⇔ g2 ⊥g1 , g
g1 · g 1 = 1,
6.3.4 Permutation Symbols
g2 · g 2 = 1.
The cross products of the Cartesian base vectors ei in the 3-dimensional Euclidean vector space
E3 are given by
6.3.3 Raising and Lowering of an Index
If the vectors gi and gk are in the same space V, it must be possible to describe g k by a product e1 × e 2 = e 3 = e 3 , e2 × e3 = e1 = e1 , and e 3 × e1 = e2 = e2 ,
of gi and some coef£cient like A km , e2 × e1 = −e3 = −e3 , e3 × e2 = −e1 = −e1 , and e1 × e3 = −e2 = −e2 .

gk = Akm gm . Often the cross product is also described by a determinant,


¯ ¯
Both sides of the equations are multiplied with g i , and £nally the index i is renamed by m, ¯ e1 e 2 e 3 ¯
¯ ¯
u × v = ¯¯u1 u2 u3 ¯¯ = e1 (u2 v3 − u3 v2 ) + e2 (u3 v1 − u1 v3 ) + e3 (u1 v2 − u2 v1 ) .
gk · gi = Akm gm · gi ⇔ g ki = Akm δm
i
⇔ g ki = Aki ⇔ g km = Akm . ¯ v1 v2 v3 ¯

TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003 TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003
186 Chapter 6. Exercises 6.3. Fundamentals of Tensors in Index Notation 187

The permutation symbol in Cartesian coordinates is given by 6.3.5 Exercises



+1 , iff (i, j, k) is an even permutation of (1, 2, 3),
 1. Simplify the index notation expressions and write down the matrix notation form.
eijk = −1 , iff (i, j, k) is an odd permutation of (1, 3, 2),

 (a) A.ji Bkj C.lk =
0 , if two or more indices are equal.
(b) Aij Bik Ckl =
The cross products of the Cartesian base vectors could be described by the permutaion symbols .n .m
(c) Cm Dn =
like this,
ei × ej = eijk ek , (d) Dmn E.lm ul =
and for example (e) ui Dj.i Ek.j =

e1 × e2 = e121 · e1 + e122 · e2 + e123 · e3 = 0 · e1 + 0 · e2 + 1 · e3 = e3 2. Simplify the index notation expressions.


e1 × e3 = e131 · e1 + e132 · e2 + e133 · e3 = 0 · e1 + (−1) · e2 + 0 · e3 = −e2 .
(a) Aij g jk =
The general permutation symbol is given by the covariant ε symbol,
(b) Aij δkj =
 √
+ g , iff (i, j, k) is an even permutation of (1, 2, 3),
 (c) Aij B jk δm
i n
δk =

εijk = − g , iff (i, j, k) is an odd permutation of (3, 2, 1), (d) Akl δji δm
j ml
g =


0 , if two or more indices are equal,
(e) Aij B kj g im gkn δm
n
=
or by the contravariant ε symbol, (f) A.lk B km gmi g in δjn =
 1
+ √g if (i, j, k) is an even permutation of (1, 2, 3),
 3. Rewrite these expressions in index notation w.r.t. a Cartesian basis, and describe what kind
ijk
ε = − √1g if (i, j, k) is an odd permutation of (3, 2, 1), of quantity the result is.


0 if two or more indices are equal.
(a) (a × b) · c =
With this relations the cross products of covariant base vectors are given by (b) a × b + (a · d) c =
gi × gj = εijk gk , (c) (a × b) · (c × d) =
and for the corresponding contravariant base vectors (d) a × (b × c) =
i j ijk
g × g = ε gk , 4. Combine the base vectors of a general basis and simplify the expressions in index notation.
and the following relations between the Cartesian and the general permutation symbols hold (a) u · v = (ui gi ) · (v j gj ) =
√ 1 (b) u · v = (ui gi ) · (vj gj ) =
εijk = geijk , and eijk = √ εijk ,
g
(c) u × v = (ui gi ) × (v j gj ) =
and (d) u × v = (ui gi ) × (vj gj ) =
¡ ¢
1 √ ijk (e) (u × v) · w = [(ui gi ) × (vj gj )] · wk gk =
εijk = √ eijk , and eijk = gε .
g £¡ ¢ ¡ ¢¤
(f) (u · v) (w × x) = [(ui gi ) · (v j gj )] wk gk × xl gl =
£ ¡ ¢¤
An important relation, in order to simplify expressions with permutation symbols, is given by (g) u × (v × w) = (ui gi ) × (vj gj ) × wk gk =
¡ i j ¢ £¡ ¢ ¡ ¢¤
eijk emnk = δm δn − δni δm
j
. (h) (u × v) · (w × x) = [(ui gi ) × (v j gj )] · wk gk × xl gl =

TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003 TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003
188 Chapter 6. Exercises 6.3. Fundamentals of Tensors in Index Notation 189

6.3.6 Solutions (c) u × v = (ui gi ) × (v j gj ) = ui v j gi × gj = ui v j εijk gk = wk gk = w


1. Simplify the index notation expressions and write down the matrix notation form. (d) u × v = (ui gi ) × (vj gj ) ⇔

(a) A.ji Bkj C.lk = A.ji T


(Bjk ) C.lk = Dil ⇔ A BT C = D u vj gi × g = ui vj gik gk × gj = uk vj εkjl gl = ui vj εijk gk
i j

(b) Aij Bik Ckl = (Aji )T B ik Ckl = Djl ⇔ AT B C = D ⇔ wk g k = w


.n .m .m
(c) Cm Dn = E m =α ⇔ tr (C D) = tr E = α ¡
(e) (u × v) · w = [(ui gi ) × (vj gj )] · wk gk
¢

T T
(d) Dmn E.lm ul = (Dnm ) E.lm ul = vn ⇔ D Eu=v
¡ ¢T ¡ j ¢T ui vj wk εijl gl · gk = ui vj wk εijl glk = ui vj wl εijl
(e) ui Dj.i Ek.j = ui D.ji E.k = vk ⇔ uT D T E T = v T
⇔ ui vj wk εijk = α
2. Simplify the index notation expressions. £¡ ¢ ¡ ¢¤
(f) (u · v) (w × x) = [(ui gi ) · (v j gj )] wk gk × xl gl ⇔
jk
(a) Aij g = A.k
i ¡ i j ¢¡ k l ¢ ¡ i j ¢¡ k l ¢
(b) Aij δkj = Aik u v gi · gj w x gk × gl = u v gij w x εklm gm
(c) Aij B jk δm
i n
δk = Amj B jn = Cm
.n = ui vi wk xl εklm gm = ym gm
(d) Akl δji δm
j ml
g = A.ik ⇔ ym g m = y
kj im n j T j T
£ ¡ ¢¤
(e) Aij B g gkn δm = Am
.j (B.n ) n
δm = Am
.j (B.m ) =α (g) u × (v × w) = (ui gi ) × (vj gj ) × wk gk ⇔
¡ ¢T
(f) A.lk B km gmi g in δjn = Al.k B kj = C lj ¡ ¢ ¡ ¢
ui gi × vj wk gkl gj × gl = ui vj wl gi × εjlm gm = ui vj wl εjlm εimn gn
¡ ¢
3. Rewrite these expressions in index notation w.r.t. a Cartesian basis, and describe what kind = ui vj wl δij δnl − δnj δil gn = ui vi wn gn − ui wi vn gn = xn gn
of quantity the result is.
⇔ (u · v) w − (u · w) v = x
(a) (a × b) · c = α ⇔ ai bj eijk ck = α £¡ ¢ ¡ ¢¤
(h) (u × v) · (w × x) = [(ui gi ) × (v j gj )] · wk gk × xl gl ⇔
(b) a × b + (a · d) c = v ⇔ ai bj eijk + (ai di ) ck = v k
£ j im ¤ £ ¡ ¢¤ £ ¤ £ ¤
(c) (a × b) · (c × d) = β ⇔ ui v g (gm × gj ) · wk xl gln gk × gn = um v j εmjo go · wk xn εknp gp
¡ ¢ ¡ i j ¢ = um v j εmjo wk xn εknp go · gp
ai bj eijk (cm dn emnk ) = ai bj cm dn eijk emnk = ai bj cm dn δm δn − δni δm
j
= um v j εmjo wk xn εknp δpo
= a m b n c m d n − a n bm c m d n = β
= um v j wk xn εmjp εknp
¡ k n ¢
⇔ (a · c) (b · d) − (a · d) (b · c) = β = u m v j wk x n δ m n k
δj − δ m δj
(d) a × (b × c) = v ⇔ = u k v n wk x n − u n v k wk x n
¡ ¢ ¡ ¢
elkm al eijk bi cj = al bi cj elkm eijk = al bi cj δil δjm − δim δjl ⇔ (u · w) (v · x) − (u · x) (v · w) = α
= a i bi c m − a j bm c j = v m

⇔ (a · b) c − (a · c) b = v

4. Combine the base vectors of a general basis and simplify the expressions in index notation.

(a) u · v = (ui gi ) · (v j gj ) = ui v j gi · gj = ui v j gij = ui vi = α


(b) u · v = (ui gi ) · (vj gj ) = ui vj gi · gj = ui vj δij = ui vi = α

TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003 TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003
190 Chapter 6. Exercises 6.4. Various Products of Second Order Tensors 191

6.4 Various Products of Second Order Tensors 6.4.4 Exercises


1. Compute the tensor products.
6.4.1 The Product of a Second Order Tensor and a Vector ¡ ¢
The product of a second order tensor and a vector (, i.e. a £rst order tensor,) is computed by the (a) TS = (Tij gi ⊗ gj ) Skl gk ⊗ gl =
¡ ¢
scalar product of the last base vector of the tensor and the base vector of the vector. For example (b) TS = (Tij gi ⊗ gj ) S kl gk ⊗ gl =
a product of a second order tensor and a vector is given by ¡ ¢¡
(c) TS = Ti.j gi ⊗ gj Sk.l gk ⊗ gl =
¢
¡ ¢
scalar
p product (d) TS = (Tij gi ⊗ gj ) Sk.l gk ⊗ gl =
¡ i j
¢¡ qk ¢ ¡ ¢
v = Tu = Tij g ⊗ g uk g (e) T1 = (Tij gi ⊗ gj ) δkl gk ⊗ gl =
¡ ¢
= Tij uk gj · gk gi ¡ j i ¢¡ l k
(f) 11 = δi g ⊗ gj δk g ⊗ gl =
¢
= Tij uk g jk gi ¡ ¢
(g) Tg = (Tij gi ⊗ gj ) gkl gk ⊗ gl =
= Tij uj gi ¡ ¢
(h) TT = (Tij gi ⊗ gj ) Tkl gk ⊗ gl =
v = Tu = vi gi , with vi = Tij uj . ¡ ¢T
(i) TTT = (Tij gi ⊗ gj ) Tkl gk ⊗ gl =

6.4.2 The Tensor Product of Two Second Order Tensors 2. Compute the scalar products.
¡ ¢
The tensor product of two second order tensors is computed by the scalar product of the two (a) T : S = (Tij gi ⊗ gj ) : Skl gk ⊗ gl =
inner base vectors and the dyadic product of the two outer base vectors. For example a tensor ¡ ¢
(b) T : S = (Tij gi ⊗ gj ) : S kl gk ⊗ gl =
product is given by ¡ ¢ ¡ ¢
(c) T : S = Ti.j gi ⊗ gj : Sk.l gk ⊗ gl =
¡ dyadic
¢¡ product q ¢ ¡ ¢
p
R = TS = T ij gi ⊗gj S kl gk ⊗ gl (d) T : 1 = (Tij gi ⊗ gj ) : δkl gk ⊗ gl =
x scalar y ¡ ¢ ¡ ¢
product (e) 1 : 1 = δij gi ⊗ gj : δkl gk ⊗ gl =
¡ ¢
= T ij S kl (gj · gk ) (gi ⊗ gl ) (f) T : g = (Tij gi ⊗ gj ) : gkl gk ⊗ gl =
= T ij S kl gjk gi ⊗ gl
¡ ¢
(g) T : T = (Tij gi ⊗ gj ) : Tkl gk ⊗ gl =
= T ij Sj.l gi ⊗ gl ¡ ¢T
(h) T : TT = (Tij gi ⊗ gj ) : Tkl gk ⊗ gl =
R = TS = Ril gi ⊗ gl , with Ril = T ij Sj.l .
3. Compute the various products.
¡ ¢¡ ¢
6.4.3 The Scalar Product of Two Second Order Tensors (a) (TS) v = TSv = Ti.j gi ⊗ gj Sk.l gk ⊗ gl (vm gm ) =
¡ ¢ ¡ ¢
The scalar product of two second order tensors is computed by the scalar product of the £rst base (b) (T : S) v = T : Sv = Ti.j gi ⊗ gj : Sk.l gk ⊗ gl (vm gm ) =
vectors of the two tensors and the scalar product of the two second base vectors of the tensors, ¡ ¢ ¡ ¢ ¡h ¢ i
too. For example a scalar product is given by (c) tr TTT = δij gi ⊗ gj : Tkl gk ⊗ gl (Tmn gm ⊗ gn )T =

scalar product
¡ p ¢ ¡ q ¢
α = T : S = T ij gi ⊗ gj : S kl gk ⊗ gl
xscalar product y

ij kl
= T S (gi · gk ) (gj · gl )
= T ij S kl gik gjl
α = T : S = T ij Sij .

TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003 TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003
192 Chapter 6. Exercises 6.4. Various Products of Second Order Tensors 193
¡ ¢ ¡ ¢
6.4.5 Solutions (b) (T : S) v = T : Sv = Ti.j gi ⊗ gj : Sk.l gk ⊗ gl (vm gm )
.j .l ik m kj m
= Ti Sk g gjl vm g = T Skj vm g = αv = w
1. Compute the tensor products.
¡ ¢ ¡ ¢ h¡ ¢ i
¡ ¢ (c) tr TTT = δij gi ⊗ gj : Tkl gk ⊗ gl (Tmn gm ⊗ gn )T
(a) TS = (Tij gi ⊗ gj ) Skl gk ⊗ gl ¡ ¢ £ ¤
j i
= Tij Skl g g ⊗ g = Tij S.l g ⊗ gl = Ril gi ⊗ gl = R
jk i l = δij gi ⊗ gj : Tkl Tm.l gk ⊗ gm = δij Tkl Tm.l g ik δjm = δij T.li Tj.l = T.lj Tj.l = T : T
¡ ¢
(b) TS = (Tij gi ⊗ gj ) S kl gk ⊗ gl = Tij S kl δkj gi ⊗gl = Tij S jl gi ⊗gl = Ri.l gi ⊗gl = R
¡ .j i ¢ ¡ .l k ¢
(c) TS = Ti g ⊗ gj Sk g ⊗ gl = Ti.j Sk.l δjk gi ⊗gl = Ti.j Sj.l gi ⊗gl = Ri.l gi ⊗gl = R
¡ ¢
(d) TS = (Tij gi ⊗ gj ) Sk.l gk ⊗ gl = Tij Sk.l g jk gi ⊗gl = Tij S jl gi ⊗gl = Ri.l gi ⊗gl = R
¡ ¢
(e) T1 = (Tij gi ⊗ gj ) δkl gk ⊗ gl = Tij δkl g jk gi ⊗ gl = Tij g jl gi ⊗ gl = Tij gi ⊗ gj = T
¡ ¢¡ ¢
(f) 11 = δij gi ⊗ gj δkl gk ⊗ gl = δij δkl δjk gi ⊗ gl = δil gi ⊗ gl = δij gi ⊗ gj = 1
¡ ¢
(g) Tg = (Tij gi ⊗ gj ) gkl gk ⊗ gl = Tij gkl g jk gi ⊗gl = Tij δlj gi ⊗gl = Tij gi ⊗gj = T
¡ ¢
(h) TT = (Tij gi ⊗ gj ) Tkl gk ⊗ gl = Tij Tkl g jk gi ⊗ gl = Tij T.lj gi ⊗ gl = T2
¡ ¢T
(i) TTT = (Tij gi ⊗ gj ) Tkl gk ⊗ gl
¡ ¢
= (Tij gi ⊗ gj ) Tkl gl ⊗ gk = Tij Tkl g jl gi ⊗ gk = Tij Tk.j gi ⊗ gk = Tij Tl.j gi ⊗ gl , or
¡ ¢T
TTT = (Tij gi ⊗ gj ) Tkl gk ⊗ gl
¡ ¢
= (Tij g ⊗ g ) Tlk g ⊗ g = Tij Tlk g jk gi ⊗ gl = Tij Tl.j gi ⊗ gl
i j k l

2. Compute the scalar products.


¡ ¢
(a) T : S = (Tij gi ⊗ gj ) : Skl gk ⊗ gl = Tij Skl g ik g jl = Tij S ij = α
¡ ¢
(b) T : S = (Tij gi ⊗ gj ) : S kl gk ⊗ gl = Tij S kl δki δlj = Tij S ij = α
¡ ¢ ¡ ¢
(c) T : S = Ti.j gi ⊗ gj : Sk.l gk ⊗ gl = Ti.j Sk.l g ik gjl = Ti.j S.ji = α, or
Ti.j Sk.l g ik gjl = Til S il = α, or
Ti.j Sk.l g ik gjl = T kj Skj = α
¡ ¢
(d) T : 1 = (Tij gi ⊗ gj ) : δkl gk ⊗ gl = Tij δkl g ik δlj = Tij g ij = Ti.i = tr T
¡ j i ¢ ¡ l k ¢
(e) 1 : 1 = δi g ⊗ gj : δk g ⊗ gl = δij δkl g ik gjl = δij δji = δii = 3 = tr 1
¡ ¢
(f) T : g = (Tij gi ⊗ gj ) : gkl gk ⊗ gl = Tij gkl g ik g jl = Tij δli g jl = Tij g ji = Ti.i = tr T
¡ ¢ ¡ ¢
(g) T : T = (Tij gi ⊗ gj ) : Tkl gk ⊗ gl = Tij Tkl g ik g jl = Tij T ij = tr TTT
¡ ¢T
(h) T : TT = (Tij gi ⊗ gj ) : Tkl gk ⊗ gl
¡ ¢
= (Tij gi ⊗ gj ) : Tkl gl ⊗ gk = Tij Tkl g il g jk = Tij T ji = tr (T)2 , or
¡ ¢T
T : TT = (Tij gi ⊗ gj ) : Tkl gk ⊗ gl
¡ ¢
= (Tij gi ⊗ gj ) : Tlk gk ⊗ gl = Tij Tlk g ik g jl = Tij T ji = tr (T)2
3. Compute the various products.
¡ ¢¡ ¢
(a) (TS) v = TSv = Ti.j gi ⊗ gj Sk.l gk ⊗ gl (vm gm )
.j .l k
= Ti Sk δj vm (gi ⊗ gl ) gm = Ti.k Sk.l vm δlm gi = Ti.k Sk.l vl gi = ui gi = u

TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003 TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003
194 Chapter 6. Exercises 6.5. Deformation Mappings 195

6.5 Deformation Mappings 6.5.2 Exercises


1. Compute the tensor products. What is represented by them?
6.5.1 Tensors of the Tangent Mappings
The material deformation gradient FX is given by (a) K−1 −T
Θ KΘ =

(b) F−1 −T
Θ FΘ =
FX := GradX ϕ = gi ⊗ Gi , (6.5.1)
(c) FX FTX =
the local geometry gradient KΘ is given by
(d) F−1 −T
X FX =
KΘ := GRADΘ ψ̃ = Gi ⊗ Zi , (6.5.2)
(e) 1X 1TX =
and the local deformation gradient FΘ is given by
(f) 1x 1Tx =
FΘ := GRADΘ ϕ̃ = gi ⊗ Zi . (6.5.3)
2. Compute the tensor products in index notation, and name the result with the correct name.
The different tangent mappings of the various tangent mappings are given by
(a) K−T −1
Θ MΘ K Θ =
FX = g i ⊗ Gi , FTX = Gi ⊗ gi , F−1
X = Gi ⊗ g
i
, F−T i
X = g ⊗ Gi , (6.5.4)
(b) KTΘ MX KΘ =
KΘ = G i ⊗ Zi , KTΘ = Zi ⊗ Gi , K−1
Θ = Zi ⊗ G
i
, K−T i
Θ = G ⊗ Zi , (6.5.5)
FΘ = g i ⊗ Zi FTΘ = Zi ⊗ gi , F−1 i
, F−T = g i ⊗ Zi . (c) F−T −1
Θ mΘ F Θ =
, Θ = Zi ⊗ g Θ (6.5.6)
(d) FTΘ mx FΘ =
The identity tensors are introduced separately for the various coordinate system by
(e) F−T −1
X EX F X =
identity tensor of the parameter space - 1Θ := Zi ⊗ Zi , (6.5.7)
(f) FTX Ex FX =
identity tensor of the undeformed space - 1X := Gi ⊗ Gi , (6.5.8)
identity tensor of the deformed space - 1x := gi ⊗ gi . (6.5.9) 3. Compute the tensor and scalar products in index notation. Rewrite the results in tensor
notation.
The various metric tensors of the different tangent spaces are introduced by
local metric tensor of the undeformed body - MΘ = KTΘ KΘ = Gij Zi ⊗ Zj , (6.5.10) (a) BΘ = M−1
Θ mΘ =

local metric tensor of the deformed body - mΘ = FTΘ FΘ = gij Zi ⊗ Zj , (6.5.11) (b) BX =
material metric tensor of the undeformed body - MX = 1TX 1X = Gij Gi ⊗ Gj , (6.5.12) (c) Bx =
material metric tensor of the deformed body - mX = FTX FX i j
= gij G ⊗ G , (6.5.13) (d) BΘ : 1Θ =
spatial metric tensor of the undeformed body - Mx = F−T
X FX
−1
= Gij gi ⊗ gj , (6.5.14) (e) BTΘ : BΘ =
spatial metric tensor of the undeformed body - mx = T
1 x 1x i
= gij g ⊗ g j
. (6.5.15) (f) BTΘ BTΘ : BΘ =
The local strain tensors is given by
1 1
EΘ := (mΘ − MΘ ) = (gij − Gij ) Zi ⊗ Zj , (6.5.16)
2 2
the material strain tensors is given by
1 1
EX := (mX − MX ) = (gij − Gij ) Gi ⊗ Gj , (6.5.17)
2 2
and £nally the spatial strain tensors is given by
1 1
Ex := (mx − Mx ) = (gij − Gij ) gi ⊗ gj . (6.5.18)
2 2

TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003 TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003
196 Chapter 6. Exercises 6.5. Deformation Mappings 197

6.5.3 Solutions (e) BTΘ¡ : BΘ = ¢ ¡ ¢


= Gij gjk Zk ⊗ Zi : Glm gmn Zl ⊗ Zn = Gij gjk Glm gmn δlk δin
1. Compute the tensor products. What is represented by them?
= Gij gjk Gkm gmi = tr B2Θ
¡ T ¢−1
(a) K−1 −T i j ij
Θ KΘ = (Zi ⊗ G ) (G ⊗ Zj ) = G Zi ⊗ Zj = MΘ = KΘ KΘ
−1 (f) BTΘ BTΘ : BΘ = ¡ ij ¢ ¡ lm ¢
pr s k
¡ T ¢−1 ¡ prgrs Z ij⊗ Zpk) sG gjk¢Z ¡⊗ Z
= (G i : G gmn¢ Zl ⊗ Z n
(b) F−1 −T i j ij −1
Θ FΘ = (Zi ⊗ g ) (g ⊗ Zj ) = g Zi ⊗ Zj = mΘ = FΘ FΘ lm
= G grs G gjk δp Z ⊗ Zi ¡ : G gmn Zl ⊗ Z n
¡ −T −1 ¢−1 ¢ =
(c) FX FTX = (gi ⊗ Gi ) (Gj ⊗ gj ) = Gij gi ⊗ gj = M−1
x = FX FX
= (Gpr grs Gij gjp Zs ⊗ Zi ) : Glm gmn Zl ⊗ Zn = Gpr grs Gij gjp Glm gmn δls δin
¡ T ¢−1 = Gpr grs Gsm gmn Gnj gjp = tr B3Θ
(d) F−1 −T i j ij −1
X FX = (Gi ⊗ g ) (g ⊗ Gj ) = g Gi ⊗ Gj = mX = FX FX

(e) 1X 1TX = (Gi ⊗ Gi ) (Gj ⊗ Gj ) = Gij Gi ⊗ Gj = M−1


X

(f) 1x 1Tx = (gi ⊗ gi ) (gj ⊗ gj ) = g ij gi ⊗ gj = m−1


x

2. Compute the tensor products in index notation, and name the result with the correct name.

(a) K−T −1
¡ MΘ K Θ ¢ =
Θ ¡ ¢ ¡ ¢
= Gk ⊗ Zk (Gij Zi ⊗ Zj ) Zl ⊗ Gl = Gij δki δlj Gk ⊗ Gl = Gij Gi ⊗Gj = MX
(b) KTΘ¡MX KΘ =
¢ ¡ ¢ ¡ ¢
= Zk ⊗ Gk (Gij Gi ⊗ Gj ) Gl ⊗ Zl = Gij δki δlj Zk ⊗ Zl = Gij Zi ⊗ Zj = MΘ
(c) F−T −1
Θ¡ mΘ FΘ ¢= ¡ ¢ ¡ ¢
= gk ⊗ Zk (gij Zi ⊗ Zj ) Zl ⊗ gl = gij δki δlj gk ⊗ gl = gij gi ⊗ gj = mx
(d) FTΘ¡mx FΘ =¢ ¡ ¢ ¡ ¢
= Zk ⊗ gk (gij gi ⊗ gj ) gl ⊗ Zl = gij δki δlj Zk ⊗ Zl = gij Zi ⊗ Zj = mΘ
(e) F−T −1
X¡ EX FX ¢ = ¡ ¢ ¡ ¢
= gk ⊗ Gk (gij − Gij ) (Gi ⊗ Gj ) Gl ⊗ gl = (gij − Gij ) δki δlj gk ⊗ gl
i j
= (gij − Gij ) g ⊗ g = Ex
(f) FTX¡Ex FX = ¢ ¡ ¢ ¡ ¢
= Gk ⊗ gk (gij − Gij ) (gi ⊗ gj ) gl ⊗ Gl = (gij − Gij ) δki δlj Gk ⊗ Gl
= (gij − Gij ) Gi ⊗ Gj = EX

3. Compute the tensor and scalar products in index notation. Rewrite the results in tensor
notation.

(a) BΘ = M−1 Θ mΘ = ¡ ¢
= (Gij Zi ⊗ Zj ) glk Zl ⊗ Zk = Gij gkl δjl Zi ⊗ Zl = Gij gjk Zi ⊗ Zk
(b) BX = M−1 X mX = ¡ ¢
= (Gij Gi ⊗ Gj ) glk Gl ⊗ Gk = Gij gkl δjl Gi ⊗ Gl = Gij gjk Gi ⊗ Gk
(c) Bx = M−1 x mx = ¡ ¢
= (Gij gi ⊗ gj ) glk gl ⊗ gk = Gij gkl δjl gi ⊗ gl = Gij gjk gi ⊗ gk
(d) BΘ¡ : 1Θ = ¢ ¡ ¢
= Gij gjk Zi ⊗ Zk : Zl ⊗ Zl = Gij gjk Zil Z kl = Gij gjk δik = Gij gji = tr BΘ

TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003 TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003
198 Chapter 6. Exercises 6.6. The Moving Trihedron, Derivatives and Space Curves 199

6.6 The Moving Trihedron, Derivatives and Space Curves 6.6.2 The Base vectors
The winding up of the spiral staircase is given by the sketch in £gure (6.14). With the Pythagoras
6.6.1 The Problem
A spiral staircase is given by the sketch in £gure (6.13). The relation between the gradient angle 6
Θ1 angle between zero and a half rotation
¾ r - πr
R 0 ≤ Θ1 ≤ cos α
e1 ¾ e3
top, height h/2 bottom, £xed support
ϕR h
ϕ = aϕ

Θ1 ?e2

R
α
? ?

¾ -

Figure 6.13: The given spiral staircase. Figure 6.14: The winding up of the given spiral staircase.

α and the overall height h of the spiral staircase is given by theorem, see also the sketch in £gure (6.14), the relationship between the variable ϕ and the
h variable Θ1 along the central line is given by
tan α = ,
2πr p h
if h is the height of a 360 spiral staircase, here the spiral staircase is just about 180o . The spiral
o Θ1 = a 2 ϕ2 + r 2 ϕ2 , and a = .

staircase has a £xed support at the bottom, and the central line is represented by the variable Θ 1 ,
which starts at the top of the spiral staircase. This implies
1
• Compute the tangent t, the normal n, and the binormal vector b w.r.t. the variable ϕ. ϕ= √ Θ1 ,
a2 + r 2
1 1
• Determine the curvature κ = ρ
and the torsion ω = τ
of the curve w.r.t. to the variable ϕ. and with the de£nition of the cosine
• Compute the Christoffel symbols rϕ r
cos α = =√ ,
Θ1 a2 + r 2
Γi1r , for i, r = 1, 2, 3.
£nally the relationship between the variables ϕ, and Θ 1 is given by
• Describe the forces and moments in a sectional area w.r.t. to the basis given by the moving
trihedron, with the following conditions, cos α 1 cos α 1
ϕ= Θ = cΘ1 , with c= =√ .
M = M ai i i
, resp. N = N ai , with M = M̂i (ϕ) i
, resp. i
N = N̂i (ϕ) r r r 2 + a2
{a1 , a2 , a3 } = {t, n, b} . With this relation it is easy to see, that every expression depending on ϕ is also depending on Θ 1 ,
this is later on important for computing some derivatives. Any arbitrary point on the central line
• Compute the resulting forces and moments at an sectional area given by ϕ = 130 o . Con-
of the spiral staircase could be represented by a vector of position x in the Cartesian coordinate
sider a load vector given in the global Cartesian coordinate system by
system, ¡ ¢
R̄ = −qϕr e3 , x = xi ei , and xi = x̂i Θ1 ,
at a point S given by the angle ϕ2 , and the radius rS . This load maybe a combination of the or
self-weight of the spiral staircase and the payload of its usage. x = x i ei , and xi = x̂i (r, ϕ) .

TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003 TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003
200 Chapter 6. Exercises 6.6. The Moving Trihedron, Derivatives and Space Curves 201

The three components of the vector of position x are given by The binormal vector b = a3 is de£ned by

x1 = r cos ϕ, b=t×n , resp. a3 = a1 × a2 ,


x2 = r sin ϕ,
and with the de£nition of the cross product, represented by the expansion about the £rst column,
h³ ϕ´
x3 = 1− , the binormal is given by
2 π
¯ ¯  
and the complete vector is given by ¯e1 −cr sin ϕ − cos ϕ¯ −ca sin ϕ
¯ ¯
b = ¯e2 cr cos ϕ − sin ϕ ¯ =  ca cos ϕ  .
¯ ¯
h³ ϕ´ ¯e3 −ca 0 ¯ cr
x = xi ei = x̂i (r, ϕ) ei = (r cos ϕ) e1 + (r sin ϕ) e2 + 1− e3 , and ϕ = cΘ1 .
2 π
The tangent vector t = a1 is de£ned by The absolute value of this vector is given by

dx ∂x ∂ϕ
q √
t = a1 = = · , |b| = c a2 sin2 ϕ + a2 cos2 ϕ + r2 = c a2 + r2 = 1,
dΘ1 ∂ϕ ∂Θ1
dϕ and with this the binormal vector b is already an unit vector given by
and with dΘ1
= c this implies
   
    −h sin ϕ −ca sin ϕ
−r sin ϕ −cr sin ϕ c 
b = a3 = h cos ϕ  ⇒ b = a2 =  ca cos ϕ  .
t = a1 =  r cos ϕ  c ⇒ t = a1 =  cr cos ϕ  . 2π
h
2πr cr
− 2π −ca
The absolute value of this vector is given by 6.6.3 The Curvature and the Torsion
q √ The curvature κ = 1
of a curve in space is given by
|t| = |a1 | = c r2 sin2 ϕ + r2 cos2 ϕ + a2 = c r2 + a2 = 1, ρ

i.e. the tangent vector t is already an unit vector! For the normal unit vector n = a 2 £rst the 1 dt da1 d2 x
κ= = |n∗ | , and n∗ = 1
= 1
= 2 1,
normal vector n∗ is computed by ρ dΘ dΘ dΘ

dt da1 ∂a1 ∂ϕ d2 x and in this case it implies a constant valued curvature


n∗ = 1
= 1
= · 1
= 2 1,
dΘ dΘ ∂ϕ ∂Θ dΘ 1
κ= = |n∗ | = c2 r = constant.
and this implies with the result for vector a1 , ρ
 
cos ϕ 1
The torsion ω = τ
of a curve in space is de£ned by the £rst derivative of the binormal vector,
n∗ = −c2 r  sin ϕ  .
0 1
b=t×n , with b,1 = ωn = n,
τ
The absolute value of this vector is given by
q and here the derivative  
1 −c2 a cos ϕ
|n∗ | = c2 r cos2 ϕ + sin2 ϕ = c2 r = ,
ρ b,1 =  −c2 a sin ϕ  = −c2 an,
0
and with this £nally the normal unit vector is given by
  implies a constant torsion, too,
− cos ϕ

n = a2 = ρn =  − sin ϕ  . 1
ω= = −c2 a = constant.
0 τ

TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003 TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003
202 Chapter 6. Exercises 6.6. The Moving Trihedron, Derivatives and Space Curves 203
   
6.6.4 The Christoffel Symbols sin ϕ −cr sin ϕ
Γ211 = a2,1 · a1 = c − cos ϕ ·  cr cos ϕ  = −c2 r,
The Christoffel symbols are de£ned by the derivatives of the base vectors, here the moving 0 −ca
trihedron given by {a1 , a2 , a3 } = {t, n, b},    
sin ϕ − cos ϕ
2
ai,1 = Γi1r ar | ·ak Γ21 = a2,1 · a2 = c − cos ϕ · − sin ϕ  = 0,
  
k 0 0
ai,1 · a = Γi1r ar · a = Γi1r δrk
k
   
sin ϕ −ca sin ϕ
and £nally the de£nition of a single Christoffel symbol is given by Γ21 = a2,1 · a3 = c − cos ϕ ·  ca cos ϕ  = −c2 a,
3

0 cr
Γi1k = ai,1 · ak .    
−c2 a cos ϕ −cr sin ϕ
This de£nition implies, that it is only necessary to compute the scalar products of the base vectors Γ311 = a3,1 · a1 =  −c2 a sin ϕ  ·  cr cos ϕ  = 0,
and their £rst derivatives, in order to determine the Christoffel symbols. For this reason in a £rst 0 −ca
step all the £rst derivatives w.r.t. the variable ϕ of the base vectors of the moving trihedron are  2   
−c a cos ϕ − cos ϕ
determined. The £rst derivative of the base vector a 1 is given by
Γ312 = a3,1 · a2 = −c a sin ϕ · − sin ϕ  = c2 a,
 2  

−cr cos ϕ
 
− cos ϕ
 0 0
∂a1 ∂ϕ  2   
a1,1 = · = −cr sin ϕ c = c r − sin ϕ  = c2 ra2 ,
  2  −c a cos ϕ −ca sin ϕ
∂ϕ ∂Θ1 Γ313 2
= a3,1 · a3 =  −c a sin ϕ  ·  ca cos ϕ  = 0.
0 0
0 cr
the £rst derivative of the second base vector a 2 is given by
  With this results the coef£cient matrix of the Christoffel symbols could be represented by
sin ϕ  
∂a2 ∂ϕ 0 c2 r 0
a2,1 = · = c − cos ϕ ,
∂ϕ ∂Θ1
0 [Γi1r ] = −c2 r 0 −c2 a .
0 c2 a 0
and £nally the £rst derivative of the third base vector a 3 is given by
    6.6.5 Forces and Moments at an Arbitrary sectional area
−ca cos ϕ − cos ϕ
∂a3 ∂ϕ An arbitrary line element of the spiral staircase is given by the points P , and Q. These points
a3,1 = · = c  −ca sin ϕ  = c a  − sin ϕ  = c2 aa2 .
2
∂ϕ ∂Θ1 are represented by the vectors of position x, and x + dx. At the point P the moving trihedron is
0 0
given by the orthonormal base vectors t, n, and b. The forces −N, N + dN, and the moments
Because the moving trihedron ai is an orthonormal basis, it is not necessary to differentiate −M, M+dM at the sectional areas are given like in the sketch of £gure (6.15). The line element
between the co- and contravariant base vectors, i.e. ai = ai , and with this the de£nition of the is load by a vector f dΘ1 . The equilibrium of forces in vector notation is given by
Christoffel symbols is given by
N + dN − N + f dΘ1 = 0 ⇒ dN + f dΘ1 = 0,
Γi1k k
= ai,1 · a = ai,1 · ak .
with the £rst derivative of the force vector w.r.t. to the variable Θ 1 represented by
The various Christoffel symbols are computed like this, ∂N
dN = dΘ1 = N,1 dΘ1 ,
Γ111 2
= a1,1 · a1 = c ra2 · a1 = 0, ∂Θ1
Γ112 = a1,1 · a2 = c2 ra2 · a2 = c2 r, the equilibrium condition becomes
Γ113 = a1,1 · a3 = c2 ra2 · a3 = 0, (N,1 + f ) dΘ1 = 0.

TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003 TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003
204 Chapter 6. Exercises 6.6. The Moving Trihedron, Derivatives and Space Curves 205

This equation system represents three component equations, one for each direction of the basis
of the moving trihedron, £rst for i = 1, and k = 1, . . . , 3,
I ±
−M
b = a3 (Θ1 ) N,11 + N 1 Γ111 + N 2 Γ211 + N 3 Γ311 + f 1 = 0,

* with the computed values for the Christoffel symbols from above,
n = a2 (Θ1 )
- ¡ ¢
Θ1 N,11 + 0 + N 2 −c2 r + 0 + f 1 = 0,
µ
−N P t = a1 (Θ1 ) and £nally
) q
1
dx = a1 dΘ1 q
f dΘ N,11 − c2 rN 2 + f 1 = 0.
x = x (Θ1 ) R
1
e3 6 Q The second case for i = 2, and k = 1, . . . , 3 implies
N + dN
q
N,12 + N 1 Γ112 + N 2 Γ212 + N 3 Γ312 + f 2 = 0,
x-+ dx = x (Θ1 ) + a1 dΘ1 ¡ ¢ ¡ ¢
N,12 + N 1 c2 r + 0 + N 3 c2 a + f 2 = 0,
O e2 M + dM
e1 N,12 + c2 rN 1 + c2 aN 3 + f 2 = 0.
ª R
The third case for i = 3, and k = 1, . . . , 3 implies
Figure 6.15: An arbitrary line element with the forces, and moments in its sectional areas.
N,13 + N 1 Γ113 + N 2 Γ213 + N 3 Γ313 + f 3 = 0,
¡ ¢
N,13 + 0 + N 2 −c2 a + 0 + f 3 = 0,
This equation rewritten in index notation, with the force vector N = N i ai given in the basis of
N,13 − c2 aN 2 + f 3 = 0.
the moving trihedron at point P , implies
³¡ ¢ ´ All together the coef£cient scheme of the equilibrium of forces in the basis of the moving trihe-
N i ai ,1 + f i ai dΘ1 = 0, dron is given by    
N,11 − c2 rN 2 + f 1 0
with the chain rule, N,12 + c2 rN 1 + c2 aN 3 + f 2  = 0 .
N,13 − c2 aN 2 + f 3 0
N,1i ai + N i ai,1 + f i ai = 0 The equilibrium of moments in vector notation w.r.t. the point P is given by
N,1 ai + N i Γi1k ak
i
+ f i ai = 0 1
−M + M + dM + a1 dΘ1 × (N + dN) + a1 dΘ1 × f dΘ1 = 0,
2
and after renaiming the dummy indices, 1
⇒ dM + a1 dΘ1 × N + a1 × dNdΘ1 + a1 × f dΘ1 dΘ1 = 0,
¡ ¢ 2
N,1i + N k Γk1i ai + f i ai = 0.
and in linear theory, i.e. neglecting the higher order terms, e.g. terms with dNdΘ 1 , and with
With the covariant derivative de£ned by dΘ1 dΘ1 , the equilibrium of moments is given by

N,1i + N k Γk1i = N i |1 , dM + a1 dΘ1 × N = 0.

the equilibrium condition could be rewritten in index notation only for the components, With the £rst derivative of the moment vector w.r.t. to the variable Θ 1 given by
∂M 1
N i |1 +f i = N,1i + N k Γk1i + f i = 0. dM = dΘ = M,1 dΘ1 ,
∂Θ1
TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003 TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003
206 Chapter 6. Exercises 6.6. The Moving Trihedron, Derivatives and Space Curves 207

the equilibrium condition becomes 6.6.6 Forces and Moments for the Given Load
1
(M,1 + a1 × N) dΘ = 0. The unkown forces N and moments M w.r.t. the natural basis a i should be computed, i.e.
The cross product of the £rst base vector a 1 of the moving trihedron and the force vector N is
given by N = N i ai , and M = M i ai .
¡ ¢
a1 × N = a 1 × N i ai = N 1 a1 × a 1 + N 2 a1 × a 2 + N 3 a1 × a 3 The load f is given in the global, Cartesian coordinate system by
¡ ¢
= 0 + N 2 a3 + N 3 −a2 = N 2 a3 − N 3 a2 ,
a1 × N = N 2 a3 − N 3 a 2 , f = f¯i ei ,

because the base vectors ai form an orthonormal basis. The following steps are the same as the and is acting at a point S, given by an angle ϕ2 , and a radius rS , see also the free-body dia-
ones for the force equilibrium equations, gram given by £gure (6.16). The free-body diagram and the load vector are given in the global
¡ ¢
M,1 + N 2 a3 − N 3 a2 dΘ1 = 0,
¾
R -
M,1i ai + M i ai,1 + N 2 a3 − N 3 a2 = 0,
¾ r - M̄ 2
e1 ?
¾ e3 ?
top, height h/2 bottom, £xed support
and £nally ϕ
2 R
¡ ¢ ϕ
M,1i + M k Γk1 ai + N 2 a3 − N 3 a2 = M i |1 ai + N 2 a3 − N 3 a2 = 0. ?e2 µ
i
N̄ 2
Again this equation system represents three equations, one for each direction of the basis of the ?N̄ 3 N̄ 1 M̄ 1
rS M̄ 3 ¾ ¾¾
moving trihedron, £rst for i = 1, i.e. in the direction of the base vector a 1 , and k = 1, . . . , 3, R
¾
R̄1 ®S
T
M,11 + M 1 Γ111 + M 2 Γ211 + M 3 Γ311 = 0, 3
¡ ¢ R̄ R̄2
M,11 + 0 + M 2 −c2 r + 0 = 0,
M,11 − c2 rM 2 = 0, ?

in direction of the base vector a2 , i.e. for i = 2, and k = 1, . . . , 3, Figure 6.16: The free-body diagram of the loaded spiral staircase.

M,12 + M 1 Γ112 + M 2 Γ212 + M 3 Γ312 − N 3 = 0,


¡ ¢ ¡ ¢
M,12 + M 1 c2 r + 0 + M 3 c2 a − N 3 = 0, Cartesian basis, i.e. the load vector is given by
M,12 + c2 rM 1 + c2 aM 3 − N 3 = 0.    
R̄1 0
and £nally in the direction of the third base vector a 3 , i.e. i = 3, and k = 1, . . . , 3, R̄ = R̄i ei , with R̄2  =  0  .
R̄3 −qϕr
M,13 +M 1
Γ113
+M 2
+M Γ213 3
+ N = 0,Γ313 2
¡ ¢
M,13 + 0 + M 2 −c a + 0 + N 2 = 0,
2
First the equilibrium conditions of forces in the directions of the base vectors e i of the global
M,13 − c2 aM 2 + N 2 = 0. Cartesian coordinate system are established,
All together the coef£cient scheme of the equilibrium of moments in the basis of the moving X
trihedron is given by Fe1 = 0 ÃN̄ 1 + R̄1 = 0 ÃN̄ 1 = 0,
    X
M,11 − c2 rM 2 0 Fe2 = 0 ÃN̄ 2 + R̄2 = 0 ÃN̄ 2 = 0,
2 2 1 2 3 3
M,1 + c rM + c aM − N  = 0 . X
M,13 − c2 aM 2 + N 2 0 Fe3 = 0 ÃN̄ 3 + R̄3 = 0 ÃN̄ 3 = −R̄3 .

TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003 TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003
208 Chapter 6. Exercises 6.6. The Moving Trihedron, Derivatives and Space Curves 209

Than the equilibrium conditions of moments in the directions of the base vectors e i of the global and in matrix notation
Cartesian coordinate system w.r.t. the point T are established £ k¤ £ k¤ r £ ¤
δi = T.r [S.i ] Ã T.rk = [S.ir ]−1 .
X ³ ³ π´ ϕ´
Me(T1 ) = 0 Ã R̄3 −r sin ϕ − + rS sin + M̄ 1 = 0 Because both basis, i.e. ei , and ai , are orthonormal basis, the tensor T must describe an or-
³ ³ 2π ´ 2 ´
ϕ
thonormal rotation! The tensor of an orthonormal rotation is characterized by
ÃM̄ 1 = R̄3 r sin ϕ − − rS sin , T−1 = TT , resp. T = S−1 = ST ,
³ 2 ³ 2
X
(T ) 3 ϕ π ´´
Me 2 = 0 Ã − R̄ rS cos + r cos ϕ − + M̄ 2 = 0 i.e. in matrix notation
³ 2 ³ 2 ´´  
ϕ π −cr sin ϕ cr cos ϕ −ca
ÃM̄ 2 = R̄3 rS cos − r cos ϕ − ,
2 2 [T.ir ] = [S.ir ]−1 = [S.ir ]T = − cos ϕ − sin ϕ
 0 .
X
(T )
Me3 = 0 ÃM̄ = 0.3 −ca sin ϕ ca cos ϕ cr
With this the relations between the base vectors of the different basis could be given by
Finally the resulting equations of equilibrium are given by
ai = S.ir er , resp. er = T.rk ak ,
   3¡ ¡ ¢ ¢
0 R̄ ¡ r sin ϕ − π2 − ¡rS sin ϕ2¢¢ and e.g. in detail
ϕ π
N̄ =  0  , and M̄ = R̄3 rS cos 2 − r cos ϕ − 2  .   
−cr sin ϕ − cos ϕ −ca sin ϕ a1
−R̄3 0 £ ¤T
[er ] = T.rk [ak ] =  cr cos ϕ − sin ϕ ca cos ϕ  a2  ,
Now the problem is, that the equilibrium conditions are given in the global Cartesian coordinate −ca 0 cr a3
system, but the results should be descirbed in the basis of the moving trihedron. For this reason it or
is necessary to transform the results from above, i.e. N = N̄ i ei , and M = M̄ i ei , into N = N i ai , e1 = −cr sin ϕ a1 − cos ϕ a2 − ca sin ϕ a3
and M = M i ai ! The Cartesian basis ei should be transformed by a tensor S into the basis ai ,
e2 = cr cos ϕ a1 − sin ϕ a2 + ca cos ϕ a3
S = S rs er ⊗ es à ai = Sei = S rs δsi er = S.ir er , e3 = −ca a1 + cr a3 .
With this it is easy to compute the £nal results, i.e. to transform N = N̄ i ei , and M = M̄ i ei , into
i.e. in matrix notation N = N i ai , and M = M i ai . With the known transformation ei = T.ik ak the force vector could
  be represented by
−cr sin ϕ − cos ϕ −ca sin ϕ
[ai ] = [S.ir ]T [er ] , with [S.ir ] =  cr cos ϕ − sin ϕ ca cos ϕ  , N = N̄ i ei = N̄ i T.ik ak = N k ak .
−ca 0 cr Comparing only the coef£cients implies
N k = T.ik N̄ i ,
see also the de£nitions for the base vectors of the moving trihedron in the sections above! Then
the retransformation from the basis of the moving trihedron into the global Cartesian basis should and with this the coef£cients of the force vector N w.r.t. the basis of the moving trihedron in the
be given by S−1 = T, with sectional area at point T are given by
 1     1  3 
N −cr sin ϕ cr cos ϕ −ca 0 N R̄ ca
ei = Tai , with T = T.sr ar ⊗ as , N 2  =  − cos ϕ − sin ϕ 0   0  ⇒ N 2  =  0  .
ei = (T.sr ar ⊗ as ) ai = T.sr δis ar = T.ir ar . N3 −ca sin ϕ ca cos ϕ cr −R̄3 N3 −R̄3 cr

Comparing this transformation relations implies By the analogous comparison the coef£cients of the moment vector M w.r.t. the basis of the
moving trihedron in the sectional area at point T are given by
 1    1  1  ¡ ¢
ai = S.ir er = S.ir T.rm am | ·ak , M −cr sin ϕ cr cos ϕ −ca M̄ M cr ¡−M̄ 1 sin ϕ + M̄ 2 cos ϕ¢
2
M  =  − cos ϕ − sin ϕ 2 2 1 2
δik = S.ir T.rm δm
k
, 0  M̄  ⇒ M  =  −¡ M̄ cos ϕ + M̄ sin ϕ ¢ .
M3 −ca sin ϕ ca cos ϕ cr 0 M3 ca −M̄ 1 sin ϕ + M̄ 2 cos ϕ
δik = T.rk S.ir ,

TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003 TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003
210 Chapter 6. Exercises 6.7. Tensors, Stresses and Cylindrical Coordinates 211

6.7 Tensors, Stresses and Cylindrical Coordinates the stress tensor σ (, for the geometrically linear theory,) is given by
 
6.7.1 The Problem £ ¤ 6 0 2
ij ij
σ = σ gi ⊗ g j , with σ = 0 0 0  .
The given cylinderical shell is described by the parameter lines Θ i , with i = 1, 2, 3. The relations 2 0 5

The vector of position x in the Cartesian coordinate system is given by

x = x i ei ,

and the normal vector n in the Cartesian coordinate system at the point P is given by

g3 g1 + g 3
* : n= = n r er .
n |g1 + g3 |
P
g1 • Compute the covariant base vectors gi and the contravariant base vectors g i at the point P .
ª R
g2
¡ ¢
3 • Determine the coef£cients of the tensor σ w.r.t. the basis in mixed formulation, gi ⊗ gk ,
(8, 0, 4)Θ and w.r.t. the Cartesian basis, (ei ⊗ ek ).
Θ1 º e1 x
6
Θ3
* Θ2 • Work out the physical components from the contravariant stress tensor.
- - *
e3
(0, 0, 0)Θ (0, 2, 0)Θ e2 ¼ (8, 2, 0)Θ (8, 0, 0)Θ
4 • Determine the invariants,

2 3 3 2 [cm] Iσ = tr σ,
¾ -¾ -¾ -¾ -¼
1¡ ¢
Figure 6.17: The given cylindrical shell. IIσ = (tr σ)2 − tr (σ)2 ,
2
IIIσ = det σ,

between the Cartesian coordinates and the parameters, i.e. the curvilinear coordinates, are given for the three different representations of the stress tensor σ.
by the vector of position x,
³π ´ • Calculate the principal stresses and the principal stress directions.
¡ ¢
x1 = 5 − Θ2 sin Θ1 ,
a • Compute the speci£c deformation energy W spec = 12 σ : ε at the point P , with
x2 = −Θ3 , and
¡ ¢ ³π ´
x3 = − 5 − Θ2 cos Θ1 , 1
a ε= (gik − δik ) gi ⊗ gk .
100
where a = 8.0 is a constant length. At the point P de£ned by
• Determine the stress vector tn at the point P w.r.t. the sectional area given by the normal
µ ¶
8 2 vector n. Furthermore calculate the normal stress t⊥ , and the resulting shear stress tk for
P =P Θ1 = ; Θ 2 = ; Θ 3 = 2 , this direction.
10 10

TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003 TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003
212 Chapter 6. Exercises 6.7. Tensors, Stresses and Cylindrical Coordinates 213

6.7.2 Co- and Contravariant Base Vectors The contravariant base vectors g i are de£ned by
The vector of position x is given w.r.t. the Cartesian base vectors e i . The covariant base vectors gi = g ik gk ,
gi of the curvilinear coordinate system are de£ned by the £rst derivatives of the vector of position
w.r.t. the parameters Θi of the curvilinear coordinate lines in space, i.e. and for i = 1, . . . , 3 the relations between co- and contravariant base vectors are given by
i
∂x ∂x
gk = = ei . g 1 = b 2 g1 , g 2 = g2 , and g3 = g3 ,
∂Θk ∂Θk
With this de£nition the covariant base vectors are computed by and £nally with abbreviations
∂x ¡ ¢π ³π ´ ¡ ¢π ³π ´   
  
g1 = 1
= 5 − Θ2 cos Θ 1 e1 + 0 e 2 + 5 − Θ2 sin Θ 1 e3 , cos c − sin c 0
∂Θ a a a a g1 = b  0  , g2 =  0  , and g3 = −1 ,
∂x ³π ´ ³π ´
g2 = 2
= − sin Θ 1 e1 + 0 e2 + cos Θ 1 e3 , sin c cos c 0
∂Θ a a
∂x or in detail
g3 = = 0 e1 + (−1) e2 + 0 e3 ,
∂Θ3  ¡ π 1 ¢  ¡π ¢  
cos a Θ − sin Θ1 0
and £nally the covariant base vectors of the curvilinear coordinate system are given by 1 (5 − Θ2 ) π  2
a
3
 ¡ π 1 ¢  ¡ ¢   g = ¡0 ¢
 , g = ¡0π
 , and g = −1 .
cos a Θ − sin πa Θ1 0 a ¢
sin πa Θ1 cos Θ1 0
2 π 
¡ ¢ a
g1 = 5 − Θ 0  , g 2 =  0  , and g 3 = −1 ,
a ¡ ¢ ¡ ¢
sin πa Θ1 cos πa Θ1 0 At the given point P the co- and contravariant base vectors g i , and gi , are given by
or by      
 ¡ π ¢
cos 10
 ¡ π ¢
cos 10
cos c − sin c 0 3π  5
1 g1 = 0¡ ¢  , g1 =  0 ,
g1 = 0  , g2 =  0  , and g3 = −1 ,
 5 π 3π ¡π¢
b sin 10 sin 10
sin c cos c 0  ¡ π
¢ 
− sin 10
with the abbreviations 2
g2 = g =  0¡ ¢  , and
a 5 π 1 π cos 10 π
b= = , and c = Θ = .
(5 − Θ2 ) π 3π a 10  
0
In order to determine the contravariant base vectors of the curvilinear coordinate system, it is g3 = g3 = −1 .
necessary to multiply the covariant base vectors with the contravariant metric coef£cients. The 0
contravariant metric coef£cients g ik could be computed by the inverse of the covariant metric
coef£cients g ik ,
£ ¤−1 6.7.3 Coef£cients of the Various Stress Tensors
[gik ] = g ik .
So the £rst step is to compute the covariant metric coef£cients g ik , The stress tensor σ is given by the covariant basis gi of the curvilinear coordinate system, and
1  the contravariant coef£cients σ ij . The stress tensor w.r.t. to the mixed basis is determined by
b2
0 0
gik = gi · gk , i.e. [gik ] =  0 1 0 . σ = σ im gi ⊗ gm , and gm = gmk gk ,
0 0 1 σ = σ im gmk gi ⊗ gk = σ ik gi ⊗ gk ,
The relationship between the co- and contravariant metric coef£cients is used in its inverse form,
in order to compute the contravariant metric coef£cients, i.e. the coef£cient matrix is given by
 2    1 
£ ik ¤ £ ik ¤ b 0 0 £ ¤ £ ¤ 6 0 2 b2
0 0
−1
g = [gik ] , resp. g =  0 1 0 . σ ik = σ im
[gmk ] = 0 0 0  0 1 0 .
0 0 1 2 0 5 0 0 1

TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003 TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003
214 Chapter 6. Exercises 6.7. Tensors, Stresses and Cylindrical Coordinates 215

Solving this matrix product implies the coef£cient matrix [σ ik ] w.r.t. the basis gi ⊗ gk , i.e. the Solving this matrix product implies the coef£cient matrix of the stress tensor w.r.t. the Cartesian
stress tensor w.r.t. to the mixed basis is given by basis, i.e. the stress tensor w.r.t. the Cartesian basis is given by
6   6 
0 2 b2
cos2 c − 2b cos c b62 sin c cos c
£ i ¤ b2
i k
σ = σ k gi ⊗ g , with σ k = 0 0 0 ,
 σ = σ̃ik ei ⊗ ek , with [σ̃rs ] =  − 2b cos c 5 − 2b sin c  ,
2
0 5 b
6
2 sin c cos c − 2
b
sin c 6
b2
sin2 c
b2

and £nally at the point P the stress tensor w.r.t. the mixed basis is given by and £nally at the point P the stress tensor w.r.t. the Cartesian basis is given by
 54π2 π 54π 2

 54π2  cos2 10 − 6π π
cos 10 sin 10π π
cos 10
25
0 2 25
6π π
5 25
6π π
i k
£ i ¤ σ = σ̃ik ei ⊗ ek , with [σ̃rs ] =  − 5 cos 10 5 − 5 sin 10  .
σ = σ k gi ⊗ g , with σk =  0 0 0 . 54π 2 54π 2
18π 2
π π 6π
sin 10 cos 10 − 5 sin 10 π
sin2 10
π
25
0 5 25 25

The relationships between the Cartesian coordinate system and the curvilinear coordinate system 6.7.4 Physical Components of the Contravariant Stress Tensor
are described by
The physical components of a tensor are de£ned by
gi = Bei , and B = Bmn em ⊗ en , √
∗ g(k)(k)
gi = (Bmn em ⊗ en ) ei = Bmn δin em = Bmi em , τ ik = τ ik p ,
g (i)(i)
and because of the identity of co- and contravariant base vectors in the Cartesian coordinate ∗
see also the lecture notes. The physical components τ ik of a tensor τ = τ ik gi ⊗ gk consider,
system, that the base vectors of an arbitrary curvilinear coordinate system do not have to be unit vectors!
In Cartesian coordinate systems the base vectors ei do not in¤uence the physical value of the
gi = Bki ek = Bki ek . components of the coef£cient matrix of a tensor, because the base vectors are unit vectors and
orthogonal to each other. But in general coordinates the base vectors do in¤uence the physical
This equation represented by the coef£cient matrices is given by
value of the components of the coef£cient matrix, because they are in general no unit vectors,
1  and not orthogonal to each other. Here the contravariant stress tensor is given by
b
cos c − sin c 0
T £ k¤
[gi ] = [Bki ] e , with [Bki ] =  0 0 −1 , 
6 0 2

1
b
sin c cos c 0 σ = σ ij gi ⊗ gk , with
£ ¤
σ ik = 0 0 0 .
see also the de£nition of the covariant base vectors above. The stress tensor σ w.r.t. the Cartesian 2 0 5
basis is computed by In order to compute the physical components of the stress tensor σ, it is necessary to solve the
ik r s de£nition given above. The numerator and denominator of this de£nition are given by the square
σ = σ gi ⊗ g k , and gi = Bri e , resp. gk = Bsk e ,
roots of the co- and contravariant metric coef£cients g (i)(i) , and g (i)(i) , i.e.
σ = σ ik Bri Bsk er ⊗ es ,
√ 1 √ √
g11 = g22 = 1 g33 = 1,
and with the abbreviation p b p p
g 11 =b g 22 = 1 g 33 = 1.
ik
σ̃rs = Bri σ Bsk ,
Finally the coef£cient matrix of the physical components of the contravariant stress tensor σ =
the coef£cient matrix of the stress tensor w.r.t. the Cartesian basis is de£ned by σ ik gi ⊗ gk is given by
6 
1  1 1
 √ 0 2b
b
cos c − sin c 0 6 0 2 b
cos c 0 b
sin c ∗ ik ik
g(k)(k) h∗ i
ik
b2
£ ik ¤ T
[σ̃rs ] = [Bri ] σ [Bsk ] =  0 0 −1 0 0 0 − sin c 0 cos c  . σ =σ p , and σ =  0 0 0 .
1
g (i)(i) 2
0 5
b
sin c cos c 0 2 0 5 0 −1 0 b

TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003 TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003
216 Chapter 6. Exercises 6.7. Tensors, Stresses and Cylindrical Coordinates 217

6.7.5 Invariants or in just one step


The stress tensor could be described w.r.t. the three following basis, i.e. ¡ ¢
tr σ 2 = σ T : σ = σ ki gi ⊗ gk : (σ rs gr ⊗ gs ) = σ ki σ rs gir gks = σ kr σ rk .
• σ = σ ik gi ⊗ gk , w.r.t. the covariant basis of the curvilinear coordinate system,
Finally the second invariant w.r.t. the covariant basis of the curvilinear coordinate system
• σ = σ ik gi ⊗ gk , w.r.t. the mixed basis of the curvilinear coordinate system, and is given by

• σ = σ̃ik ei ⊗ ek , w.r.t. the basis of the Cartesian coordinate system. 1¡ ¢


IIσ = (tr σ)2 − tr σ 2
2"
The £rst invariant I σ of the stress tensor is de£ned by the trace of the stress tensor, i.e. µ ¶2 µ ¶# · ¸
1 6 36 8 1 36 60 36 8
= + 5 − + + 25 = + + 25 − − − 25 ,
Iσ = tr σ = σ : 1. 2 b2 b4 b2 2 b4 b2 b4 b2
• The £rst invariant w.r.t. the covariant basis of the curvilinear coordinate system is given by 26
IIσ = .
¡ ¢ ¡ ¢ b2
Iσ = tr σ = σ ik gi ⊗ gk : gml gl ⊗ gm = σ ik gml δil δkm = σ ik gki = σ ii ,
6 • Again £rst in order to determine tr σ 2 it is necessary to compute σ 2 , i.e.
Iσ = tr σ = σ ii = 2 + 5.
b ¡ ¢
σ 2 = σ ik gi ⊗ gk (σ rs gr ⊗ gs ) = σ ik σ rs δrk gi ⊗ gs = σ ik σ ks gi ⊗ gs ,
• The £rst invariant w.r.t. the mixed basis of the curvilinear coordinate system is given by
¡ ¢ ¡ ¢ and then
Iσ = tr σ = σ ik gi ⊗ gk : gml gl ⊗ gm = σ ik gml δil g km = σ ik δil δkl = σ ii , ¡ ¢ ¡ ¢
6 tr σ 2 = σ 2 : 1 = σ ik σ ks gi ⊗ gs : gml gl ⊗ gm = σ ik σ ks gml δil g sm = σ ik σ ki ,
Iσ = tr σ = σ ii = 2 + 5.
b
or in just one step
• The £rst invariant w.r.t. the mixed basis of the Cartesian coordinate system is given by ¡ ¢
tr σ 2 = σ T : σ = σki gi ⊗ gk : (σ rs gr ⊗ gs ) = σki σ rs gir g ks = σ sr σ rs .
Iσ = tr σ = (σ̃ik ei ⊗ ek ) : (δml em ⊗ el ) = σ̃ik δml δim δkl = σ̃ii ,
6 This intermediate result is the same like above, and the £rst invariants, i.e. the trace tr σ
Iσ = tr σ = σ̃ii = 2 + 5. and (tr σ)2 , too, are equal for the different basis, i.e. all further steps will be the same like
b
above. Combining all this £nally implies, that the second invariant w.r.t. the mixed basis
The second invariant IIσ of the stress tensor is de£ned by the half difference of the trace to the of the curvilinear coordinate system is given by
second of the stress tensor, and the trace of the stress tensor to the second, i.e.
26
1¡ ¢ IIσ = , too.
IIσ = (tr σ)2 − tr σ 2 . b2
2
• First in order to determine tr σ 2 it is necessary to compute σ 2 , i.e. • Again £rst in order to determine tr σ 2 it is necessary to compute σ 2 , i.e.
¡ ¢
σ 2 = σ ik gi ⊗ gk (σ rs gr ⊗ gs ) = σ ik σ rs gkr gi ⊗ gs , σ 2 = (σ̃ik ei ⊗ ek ) (σ̃rs er ⊗ es ) = σ̃ik σ̃rs δkr ei ⊗ es = σ̃ik σ̃ks ei ⊗ es ,

and then and then


¡ ¢ ¡ ¢
tr σ = σ : 1 = σ σ gkr gi ⊗ gs : gml gl ⊗ gm
2 2 ik rs
tr σ 2 = σ 2 : 1 = (σ̃ik σ̃ks ei ⊗ es ) : (δlm el ⊗ em ) = σ̃ik σ̃ks δlm δil δsm = σ̃ik σ̃ki ,
ik rs
=σ σ gkr gml δil δsm = ik rs
σ σ gkr gsi = σ ir σ ri
µ ¶2 µ ¶ or in just one step
6 2 36 8
tr σ 2 = +2 2 2 + 25 = + 2 + 25,
b2 b b4 b tr σ 2 = σ T : σ = (σ̃ik ei ⊗ ek ) : (σ̃rs er ⊗ es ) = σ̃ki σ̃rs δir δks = σ̃sr σ̃rs .

TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003 TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003
218 Chapter 6. Exercises 6.7. Tensors, Stresses and Cylindrical Coordinates 219

Solving this implies the same intermediate results, In order to solve this equation, £rst the new tensor υ T is computed by
µ ¶ µ ¶ µ ¶ ¡ ¢
36 4 36 2 4 36 υ T = σ T σ T = σ ki gi ⊗ gk (σ sr gr ⊗ gs ) = σ ki σ sr gkr gi ⊗ gs
tr σ 2 = 4 cos4 c + 2 2 cos2 c + 2 sin c cos 2
c + sin 2
c + 4 sin4 c + 25
b b b4 b2 b υ si gi ⊗ gs = σ ki σ sk gi ⊗ gs = σ sk σ ki gi ⊗ gs ,
36 ¡ 4 2 2 4
¢ 8 ¡ 2 2
¢
= 4 cos c + 2 sin c cos c + sin c + 2 cos c + sin c + 25 and the coef£cient matrix of this new tensor is given by
b b £ si ¤ £ is ¤T £ ¤
36 8 υ = υ = [σ sk ] σ ki ,
tr σ 2 = 4 + 2 + 25, 6    36 
b b 0 2 6 0 2 +4 0 12
+ 10
b2 b2 b2
i.e. the further steps are the same like above. And with this £nally the second invariant [υsi ] =  0 0 0 0 0 0 =  0 0 0 .
2 12 4
w.r.t. the basis of the Cartesian coordinate system is given by b2
0 5 2 0 5 b2
+ 10 0 b2
+ 25
26 In order to solve the scalar product υ T : σ, given by
IIσ = , too.
b2 ¡ ¢ ¡ ¢ ¡ ¢
υ T : σ = σ T σ T : σ = υ si gi ⊗ gs : σ lm gl ⊗ gm ,
The third invariant IIIσ of the stress tensor is de£ned by the determinant of the stress tensor, i.e. υ T : σ = υ si σ lm gil gsm = υ si gil σ ls = υ sl σ ls ,
1 1 ¡ ¢ 1 ¡ T T¢ £rst the coef£cient matrix of the tensor υ w.r.t. the mixed basis is computed by
IIIσ = det σ = (tr σ)3 − (tr σ) tr σ 2 + σ σ : σ. £ ¤
6 2 3 [υ sl ] = υ si [gil ] ,
• In order to compute the third invariant it is necessary to solve the three terms in the sum-  36  1   36 
b2
+ 4 0 12 b2
+ 10 b2
0 0 b4
+ b42 0 12b2
+ 10
mation of the de£nition of the third invariant. The £rst term is given by s
[υ l ] =  0 0 0   0 1 0 =  0 0 0 ,
12
µ ¶3 µ ¶
b2
+ 10 0 b42 + 25 0 0 1 12
b4
+ 10
b2
0 b42 + 25
1 1 6 1 216 540 450
(tr σ)3 = + 5 = + + + 125 and then the £nal result for the third term is given by
6 6 b2 6 b6 b4 b2
1 36 90 75 125 1 ¡ T T¢ 1 1 1
(tr σ)3 = 6 + 4 + 2 + , σ σ : σ = υ T : σ = σ kl σ ls σ sk = υ sl σ ls
6 b b b 6 3 3 ·µ 3¶ µ 3 ¶ µ ¶ µ ¶¸
and the second term is given by 1 36 4 6 12 2 12 10 4
= + + + 10 + 2 + + 5 + 25
µ ¶µ ¶ 3 b4 b2 b2 b2 b2 b4 b2 b2
1 ¡ ¢ 1 6 36 8 1 ¡ T T¢ 72 24 20 125
− (tr σ) tr σ 2 = − + 5 + + 25 σ σ :σ= 6 + 4 + 2 + .
2 2 b2 b4 b2 3 b b b 3
µ ¶
1 ¡ ¢ 108 114 95 125 Then the complete third invariant w.r.t. the covariant basis of the curvilinear coordinate
− (tr σ) tr σ 2 = − + + + .
2 b6 b4 b2 2 system is given by
The third term is not so easy to compute, because it does not include the trace, but a scalar 1 1 ¡ ¢ 1 ¡ T T¢
IIIσ = det σ = (tr σ)3 − (tr σ) tr σ 2 + σ σ :σ
product and a tensor product with the transpose of the stress tensor, i.e. 6
µ 2 ¶ µ3 ¶
36 90 75 125 108 114 95 125
1 ¡ T T¢ 1 = + + + − + + +
σ σ : σ = υT : σ , with υ T = σ T σ T = (σσ)T . b6 b4 b2 6 b6 b4 b2 2
3 3 µ
72 24 20 125

or in index notation + + 4 + 2 +
b6 b b 3
¡ T T¢ ¡ ¢ ¡ ¢ 1 1 1
σ σ : σ = σ ki gi ⊗ gk (σ sr gr ⊗ gs ) : σ lm gl ⊗ gm = 6 (36 − 108 + 72) + 4 (90 − 114 + 24) + 2 (75 − 95 + 20)
¡ ki sr ¢ ¡ lm ¢ ¡ ki s ¢ ¡ ¢ bµ b b
= σ σ gkr gi ⊗ gs : σ gl ⊗ gm = σ σ k gi ⊗ gs : σ lm gl ⊗ gm 125 125 125

+ − +
= σ ki σ sk σ lm gil gsm = σ kl σ sk σ ls , 6 2 3
¡ T T
¢
σ σ : σ = σ kl σ ls σ sk . IIIσ = det σ = 0.

TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003 TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003
220 Chapter 6. Exercises 6.7. Tensors, Stresses and Cylindrical Coordinates 221

• For the third invariant w.r.t. the mixed basis of the curvilinear coordinate system the £rst 6.7.6 Principal Stress and Principal Directions
and second term are already known, because all scalar quantities, like tr σ, are still the
Starting with the stress tensor w.r.t. the Cartesian basis, and a normal unit vector, i.e.
same, see also the £rst case of determining the third invariant. It is only necessary to have
a look at the third term given by σ = σ̃ik ei ⊗ ek , and n 0 = n r er = n r er ,
¡ T T¢ ¡ ¢ ¡ ¢
σ σ : σ = σ ik gk ⊗ gi (σ rs gs ⊗ gr ) : σ lm gl ⊗ gm
¡ i r s k ¢ ¡ l ¢ ¡ ¢ ¡ ¢ the eigenvalue problem is given by
= σ k σ s δi g ⊗ gr : σ m gl ⊗ gm = σ sk σ rs gk ⊗ gr : σ lm gl ⊗ gm
σn0 = λn0 ,
= σ sk σ rs σ lm δlk δrm = σ sl σ ms σ lm ,
¡ ¢ and the left-hand side is rewritten in index notation,
σ T σ T : σ = σ sl σ lm σ ms ,
i.e. this scalar product is the same like above. With all three terms of the summation given (σ̃ik ei ⊗ ek ) nr er = σ̃ik nr ei δkr = σ̃ir nr ei .
by the same scalar quantities like above, the third invariant w.r.t. the mixed basis of the
curvilinear coordinate system is given by The eigenvalue problem in index notation w.r.t. the Cartesian basis is given by

IIIσ = det σ = 0. σ̃ik nk ei = λnk ek | ·el ,


• The third case, i.e. the third invariant w.r.t. to the basis of the Cartesian coordinate system, σ̃ik nk δil = λnk δkl ,
is very easy to solve, because in a Cartesian coordinate system it is suf£cient to compute σ̃lk nk = λnk δlk ,
the determinant of the coef£cient matrix of the tensor! For example the determinant of the
coef£cient matrix σ̃ rs is expanded about the £rst row and £nally the characteristic equation is given by
¯ 6 ¯
¯ 2 cos2 c
¯ b 2 − 2b cos c b62 sin c cos c¯¯ (σ̃lk − λδlk ) nk = 0,
det σ = det [σ̃rs ] = ¯¯ − b cos c 5 − 2b sin c ¯¯
det (σ̃ik − λδik ) = 0.
¯ 62 sin c cos c − 2 sin c 6
b2
sin2 c ¯
µ b b
¶ µ ¶
6 30 2 4 2 12 12 The characteristic equation of the eigenvalue problem could be rewritten with the already known
= 2 cos2 c 2
sin c − 2 sin2 c + cos c − 3 sin2 c cos c + 3 sin2 c cos c
b b b b b b invariants, i.e.
µ ¶
6 4 30
+ 2 sin c cos c 2 sin c cos c − 2 sin c cos c det (σ̃ik − λδik ) = IIIσ − IIσ λ + Iσ λ2 − λ3 = 0,
b b b 3 2
156 2 156 2 λ − Iσ λ + IIσ λ − IIIσ = 0,
= 4 sin c cos c − 0 − 4 sin c cos2 c
2 µ
6

26
b b λ − 2 + 5 λ2 + 2 λ − 0 = 0,
3
det σ = det [σ̃rs ] = 0. b b
µ µ ¶ ¶
The third invariant w.r.t. the basis of the Cartesian coordinate system is given by 6 26
λ λ2 − 2 + 5 λ + 2 = 0,
b b
IIIσ = det σ = 0.
this implies the £rst eigenvalue, i.e. the £rst principal stress,
The £nal result is, that the invariants of every arbitrary tensor σ could be computed w.r.t. any
basis, and still keep the same, i.e. λ1 = 0.
6
Iσ = tr σ =σ:1 = 2 + 5,
b The other eigenvalues are computed by solving the quadratic equation
1¡ ¢ 1¡ ¢ 26
IIσ = (tr σ)2 − tr σ 2 = (σ : 1)2 − σ T : σ = 2, µ
6

26
2 2 b λ2 − 2 + 5 λ + 2 = λ2 − dλ + e = 0,
1 1 ¡ ¢ 1 ¡ T T¢ b b
IIIσ = det σ = (tr σ)3 − (tr σ) tr σ 2 + σ σ :σ
6 2 3 µ ¶ s µ ¶2 r
1 1 ¢ 1 ¡ T T¢ 1 6 1 6 26 1 d2
¡
= (σ : 1)3 − (σ : 1) σ T : σ + σ σ : σ = 0. λ2/3 = 2
+ 5 ± 2
+ 5 − 2
= d ± − e,
6 2 3 2 b 4 b b 2 4

TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003 TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003
222 Chapter 6. Exercises 6.7. Tensors, Stresses and Cylindrical Coordinates 223

this implies the second and third eigenvalue, i.e. the second and third principal stress, • The principal stress λ1 = 0 implies
r r
1 d2 1 d2 6 26 n13 = 0 ⇒ n11 = 0 ⇒ n12 = α ∈ R,
λ2 = d + − e , and λ3 = d − − e , with d = 2 + 5 , and e = 2 . n1 = n1 g1 + n2 g2 + n3 g3 = 0 + αg2 + 0 = αg2 ,
2 4 2 4 b b
 
− sin c
In order to compute the principal directions the stress tensor w.r.t. the curvilinear basis, and a
n1 = α  0  .
normal unit vector, i.e.
cos c
σ = σ ik gi ⊗ gk , and n = nr gr ,
q
are used, then the eigenvalue problem is given by • The principal stress λ2 = 12 d + d2
− e implies
4

σn = λn, 1
n13 = β ∈ R ⇒ n12 = 0 n11 = − b2 (5 − λ2 ) β = γβ,

2
and the left-hand side is rewritten in index notation, n2 = n1 g1 + n2 g2 + n3 g3 = βγg1 + 0 + βg3 ,
¡ ¢  
σn = σ ik gi ⊗ gk nr gr = σ ik nr gkr gi = σ ir nr gi . γb cos c
1
n2 = β  −1  , with γ = − b2 (5 − λ2 ) .
2
The eigenvalue problem in index notation w.r.t. the curvilinear basis is given by γb sin c
q
σ̃ ik nk gi = ni gi = nk δki gi , • The principal stress λ3 = 12 d − d2
− e implies
¡ ¢ 4
σ k − λδ k nk gi = 0,
i i
 
¡ i ¢ γb cos c
σ k − λδ ik nk = 0, 1
n3 = β  −1  , with γ = − b2 (5 − λ3 ) .
2
and in matrix notation γb sin c
6  
b2
− λi 0 2 ni1
 0 −λi 0  ni2  = 0. 6.7.7 Deformation Energy
2
b2
0 5 − λi ni3
The speci£c deformation energy is de£ned by
Combining the £rst and the last row of this system of equations yields an equation to determine
the coef£cient n i3 depending on the associated principal stress λi , i.e. 1 1
Wspec = σ : ε , with σ = σ ik gi ⊗ gk , and ε= (gik − δik ) gi ⊗ gk ,
· µ ¶ ¸ · µ ¶ ¸ 2 100
6 4 26 6 £ ¤
(5 − λi ) 2 − λi − 2 ni3 = 2 − λi 5 + 2 + λ2i ni3 = e − λi d + λ2i ni3 = 0. and solving this product yields
b b b b µ ¶
1 1 ¡ lm ¢ 1
The coef£cient n i2 could be computed by the second line of this system of equations, i.e. Wspec = σ : ε = σ gl ⊗ g m : (gik − δik ) gi ⊗ gk
2 2 100
1 lm i k 1 ik
−λi ni2 = 0, = σ (gik − δik ) δl δm = σ (gik − δik )
200 200
1 ik ¡ ¢
and then the coef£cient n i1 depending on the associated principal stress λi and the already known = σ gik − δik ,
200
coef£cient n i3 is given by 1 1 ¡ i ¢
2ni3 Wspec = σ : ε = σ i − σ ii ,
ni1 = − 6 . 2 200
b 2 − λi
because the Kronecker delta δik is given w.r.t. to the Cartesian basis, i.e.
The associated principal direction to the principal stresses are computed by inserting the values
λi in the equations above. δik = δi k = δ ik = δik .

TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003 TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003
224 Chapter 6. Exercises 6.7. Tensors, Stresses and Cylindrical Coordinates 225

With the trace of the stress tensor w.r.t. to the mixed basis of the curvilinear coordinate system, implies £nally the stress vector t n at the point P ,
and the trace of the stress tensor w.r.t. to the covariant basis of the curvilinear coordinate system ¡ 6 ¢   
given by b2
+ 2 cos c (6 + 2b2 ) cos c
1 2 1  −2b − 5b3  .
6 tn = √  − − 5b  = √
σ ii = 2 + 5 , and σ ii = 11, b2 + 1 ¡ 6 +b 2¢ sin c b2 b2 + 1 (6 + 2b2 ) sin c
b b2
the speci£c deformation energy is given by
The normal stress vector is de£ned by
µ ¶
1 1 6
Wspec = σ : ε = − 6 , t⊥ = σn , with σ = |t⊥ | = tn · n,
2 200 b2
and £nally at the point P , and the shear stress vector is de£ned by
µ ¶
1 54π 2 tk = t n − t ⊥ .
Wspec = −6 ≈ 0.0766.
200 25
The absolute value of the normal stress vector t⊥ is computed by
   
6.7.8 Normal and Shear Stress (6 + 2b2 ) cos c cos c
1 3  1
The normal vector n is de£ned by σ = tn · n = √  −2b − 5b √  −b 
g1 + g 3 b2 b2 + 1 (6 + 2b2 ) sin c b2 + 1 sin c
n= ,
|g1 + g3 | 5b4 + 4b4 + 6
σ = tn · n = .
and this implies with b2 (b2 + 1)
1 
cos c r This implies the normal stress vector
b 1
g1 + g3 =  −1  , and |g1 + g3 | = + 1,  
1 b2 cos c
b
sin c 5b4 + 4b2 + 6 
t⊥ = σn = √ −b  ,
b2 (b2 + 1) b2 + 1 sin c
£nally  
cos c
1  −b  = nr er = nr er . and the shear stress vector
n= √
b2 + 1 sin c    
(6 + 2b2 ) cos c 4 2 cos c
1 3 5b + 4b + 6
With the stress tensor σ w.r.t. the Cartesian basis, tk = t n − t ⊥ = √  −2b − 5b  − √  −b  ,
b2 b2 + 1 (6 + 2b2 ) sin c b2 (b2 + 1) b2 + 1 sin c
σ = σ̃ik ei ⊗ ek ,  
cos c
4 − 3b2 1
the stress vector tn at the point P is given by tk = √  .
(b2 + 1) b2 + 1 sinb c
tn = σn = σ̃ik ei ⊗ ek nr er = σ̃ik nr δkr ei = σ̃ik nk ei ,

and in index notation


tn = ti ei = σ̃ik nk ei ⇒ ti = σ̃ik nk .
The matrix multiplication of the coef£cient matrices,
 6   
b2
cos2 c − 2b cos c 6
b2
sin c cos c cos c
1
[ti ] = [σ̃ik ] [nk ] =  − 2b cos c 5 2
− b sin c  √  −b  ,
6
sin c cos c − 2b sin c 6
sin2 c b2 + 1 sin c
b2 b2

TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003 TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003
226 Chapter 6. Exercises

Appendix A

Formulary

A.1 Formulary Tensor Algebra


A.1.1 Basis

ei − orthonormal cartesian base vectors ∈ En


gi − covariant base vectors ∈ En
gi − contravariant base vectors ∈ En

A.1.2 Metric Coef£cients, Raising and Lowering of Indices


metric coef£cients raising and lowering of indices
gik = gi · gk gi = gik gk
g ik = gi · gk gi = g ik gk
δki i
= g · gk gi = δki gk
δki i
= e · ek ei = δki ek
(δik = ei · ek ) (ei = δik ek )

A.1.3 Vectors in a General Basis


v = v i gi = v i g i

A.1.4 Second Order Tensors in a General Basis

T = T ik gi ⊗ gk = Tik gi ⊗ gk
= T ik gi ⊗ gk = Ti k gi ⊗ gk

TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003 227
228 Appendix A. Formulary A.1. Formulary Tensor Algebra 229

A.1.5 Linear Mappings with Tensors A.1.9 Transpose of a Tensor


for the tensor A of rank 1 ¡ ¢
u · (T · v) = v · TT · u
with T = T ik gi ⊗ gk
A=u⊗v
A · w = (u ⊗ v) · w = (v · w) u and TT = T ik gk ⊗ gi = T ki gi ⊗ gk
¡ ¢ ¡ ¢
= ui gi ⊗ v k gk · wm gm = v k gk · wm gm ui gi (u ⊗ v)T = v ⊗ u
¡ k m ¢ i ¡ ¢T
= v w gk m · u g i = v k w k u i g i (A · B)T = BT · AT with AT =A

¡ ¢
for the general tensor T of rank n det T ik 6= 0 A.1.10 Computing the Tensor Components
¡ ¢
T = T ik gi ⊗ gk T ik = gi · T · gk ; Tik = gi · (T · gk ) ; Tki = gi · (T · gk )
¡ ¢
T · w = T ik gi · gk · wm gm = T ik wk gi T ik im kn
= g g Tmn ; Tki im
= g Tmk ; etc.

A.1.11 Orthogonal Tensor, Inverse of a Tensor


A.1.6 Unit Tensor (Identity, Metric Tensor)
orthonormal tensor

QT = Q−1 ; QT · Q = Q−1 · Q = 1 = Q · QT ; det Q = ±1


u=1·u ¡ k ¢−1
i
Qk = Qi ; Qmi · Q0mk = δki
mit 1 = gi ⊗ gi = gj ⊗ gj = δij gj ⊗ gi
v = Q · u → (Q · u) · (Q · u) = u · u ; i.e. v·v =u·u
= g ij gi ⊗ gj = gij gi ⊗ gj
¡ ¢
u = gi ⊗ gi · uk gk = uk g ik gi = uk gk = ui gi = u
A.1.12 Trace of a Tensor

A.1.7 Tensor Product tr (a ⊗ b) := a · b tr (a ⊗ b) = ai gi · bk gk = ai bi


resp.
¡ ¢
tr T = T : 1 = T ik gi ⊗ gk : (gm ⊗ gm ) = T ik gim δkm
= T ik gik = Tii
u=A·w w =B·v Ãu=A·B·v =C·v
und ¡ ¢ ¡ ¢
¡ ¢ tr (A · B) = A : BT or A · B = tr A · BT = tr BT · A
C = A · B = Aik gi ⊗ gk · (Bmn gm ⊗ gn )
tr (A · B) = tr (B · A) = B : AT
= Aik Bmn δkm gi ⊗ gn = Aik Bkn gi ⊗ gk ¡£ ¤ ¢ ¡ ¢
tr (A · B) = tr Aik gi ⊗ gk · [Bmn gm ⊗ gn ] = tr aik Bkn gi ⊗ gn
= Aik Bkn gi · gn = Aik Bki etc.
A.1.8 Scalar Product or Inner Product
A.1.13 Changing the Basis
transformation gi à gi ; gk à gk
α=A:B
¡ ¢
= Aik gi ⊗ gk : (B mn gm ⊗ gn ) = Aik B mn (gi · gm ) (gk · gn )
= Aik B mn gim gkn = Aik Bik ¡ ¢ ¡ ¢
gi = 1 · gi = gk ⊗ gk · gi = gk · gi gk = Aki gk

TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003 TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003
230 Appendix A. Formulary A.1. Formulary Tensor Algebra 231
¡ ¢
g i = A · gi with A = gk · gm gk ⊗ gm = Akm gk ⊗ gm etc.
¡ k ¢
gi = g · gi gk = Aki gk
B·B=1 or B·B=1
m i
g i = A · gi with k
A = (gk · gm ) g ⊗ g = Akm g ⊗ g m k m Bmi B k = δki etc. B m Bkm = δki
gi = (gk · gi ) gk = Aki gk Furthermore
k k
i i
¡ k
¢ i
¡ i
¢ k Ami Bmk = δik ; Bmk = A m ; Bmk = A m
g = 1 · g = g ⊗ g k · g = gk · g g = Bki gk

gi = B · gi with B = (gk · gm ) gk ⊗ gm = Bkm gk ⊗ gm A.1.14 Transformation of Vector Components


i
¡ i
¢ k i k
g = gk · g g = B k g
v = v i g i = v i gi = v i g i = v i g i
¡ ¢
gi = B · gi with B = gk · gm gk ⊗ gm = B km gk ⊗ gm The components of a vector transform with the following rules of transformation
¡ ¢
gi = gk · gi gk = B ki gk
vi = Aki vk = Aki vk ,
inverse relations gi à gi ; gk à gk v =i
Bki vk = B vk ki
,

¡ ¢ ¡ ¢ etc.
k
gi = 1 · g i = g k ⊗ g k · g i = g k · g i g k = A i g k
k
¡ ¢ vi = A i vk = Aki vk ,
k
gi = A · g i with A = g k · gm g k ⊗ g m = A m g k ⊗ g m i ki
¡ k ¢ vi = B k vk = B vk ,
k
gi = g · g i g k = A i g k
i.e. the coef£cients of the vector components transform while changing the coordinate systems
m k m k like the base vectors themselves.
gi = A · g i with A = (gm · gk ) g ⊗ g = Amk g ⊗ g
gi = (gk · gi ) gk = Aki gk
¡ ¢ ¡ ¢
A.1.15 Transformation Rules for Tensors
i
gi = 1 · gi = gk ⊗ gk · gi = gk · gi gk = B k gk
m T = T ik gi ⊗ gk = T ik gi ⊗ gk = Tik gi ⊗ gk = Ti k gi ⊗ gk
gi = B · gi with B = (gk · gm ) gk ⊗ gm = B k gk ⊗ gm
ik i k
¡ ¢ i = T gi ⊗ gk = T k gi ⊗ gk = T ik gi ⊗ gk = T i gi ⊗ gk
gi = gk · gi gk = B k gk
the transformation relations between base vectors imply
¡ ¢ km
gi = B · gi with B = gk · gm gk ⊗ gm = B gk ⊗ gm ik i k
¡ ¢ ki T = A m A n T mn ,
gi = gk · gi gk = B gk i i
T k = A m Ank T m
n ,
The following relations between the transformation tensors hold
T ik = Ami Ank Tmn ,
A·A=1 or A·A=1
k m i.e. the coef£cients of the tensor components transform like the tensor basis. The tensor basis
Ami A m = δik etc. A i Akm = δik transform like the base vectors.

TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003 TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003
232 Appendix A. Formulary A.2. Formulary Tensor Analysis 233

A.1.16 Eigenvalues of a Tensor in Euclidean Space A.2 Formulary Tensor Analysis

EWP : (T − λ1) = 0 ; A.2.1 Derivatives of Vectos and Tensors


¡ ¢
T ik − λδki xi = 0 scalars, vectors and tensors as functions of a vector of position

conditions for non-trivial results α = α (x) ; v = v (x) ; T = T (x)

!
A vector £eld v = v (x) is differentiable in x, if a linear mapping L (x) exists, such that
det (T − λ1) = 0 ; ¡ ¢
¡ ¢ v (x + y) = v (x) + L (x) y + O y2 , if |y| → 0.
det T ik − λδki xi = 0
The mapping L (x) is called the gradient or Frechet derivative v 0 (x), also represented by the
characteristic polynomial operator
f (λ) = I3 − λI2 + λ2 I1 − λ3 = 0 L (x) = grad v (x) .
If T = TT ; Tki ∈ R, then the eigenvectors are orthogonal and the eigenvalues are real. invariants Analogous for a scalar valued vector function α (x)
of a tensor ¡ ¢
α (x + y) = α (x) + grad α (x) · y + O y2
I1 = λ1 + λ2 + λ3 = tr T = T ik rules
1£ ¤ grad (αβ) = α grad β + β grad α
I2 = λ 1 λ 2 + λ 2 λ 3 + λ 3 λ 1 = (tr T)2 − tr T2
2
1£ i k ¤ grad (v · w) = (grad v)T · w + (grad w)T · v
= T i T k − T ik T ki grad (αv) = v ⊗ grad α + α grad v
2 ¡ ¢
I3 = λ1 λ2 λ3 = det T = det T ik grad (v ⊗ w) = [(grad v) ⊗ w] · grad w
The gradient of a scalar valued vector function leads to a vector valued vector function. The
gradient of a vector valued vector function leads analogous to a tensor valued vector function.
divergence of a vector
div = tr (grad v) = grad v : 1
divergence of a tensor
¡ ¢ ¡ ¢
α · div T = div TT · α = grad TT · α : 1
rules
div (αv) = v grad α + α grad v
¡ ¢
div (T · v) = v · div TT + TT : grad v
div (grad v)T = grad (div v)

A.2.2 Derivatives of Base Vectors


Chirtoffel tensors
Γ(k) := grad (gk ) ; Γ(k) = Γikm gi ⊗ gm
components
Γikm = Γi(k)m = gi · Γ(k) gm
cartesian orthogonal coordinate systems the Christoffel tensors vanish.

TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003 TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003
234 Appendix A. Formulary A.2. Formulary Tensor Analysis 235

A.2.3 Derivatives of Base Vectors in Components Notation A.2.6 Integral Theorems, Divergence Theorems
Z Z
u · nda = div udV
∂ (· · · )k ZA ZV
= (· · · )k,i with Γ(k) = grad (gk ) = gk,i ⊗ gi
∂θi ¡ s ¢ ui ni da = ui |i dV
gi,k = Γ(i) gk = Γil gs ⊗ gl gk = Γsik gs ; Z A
ZV
gi,k · gs = Γsik etc. i
g,k = −Γisk gs T · nda = div TdV
A
1 Z ZV
Γikl = gls Γsik etc. Γ = (gkl,i + gil,k − gik,l )
2 Ti k nk gi da = Ti k |k gi dV
ei,k = 0 A V
with n normal vector of the surface element
A surface
V volume
A.2.4 Components Notation of Vector Derivatives

∂v ∂ (v i gi )
grad v (x) = k
⊗ gk = ⊗ gk
∂θ ∂θk
∂v i ∂gi
= k gi ⊗ g k + v i k ⊗ g k
∂θ ∂θ
| {z }
Γ(i)
i
= v,k gi ⊗ gk + v i Γ(i)
i
= v,k gi ⊗ gk + v i gi,k ⊗ gk
¡ i ¢
grad v (x) = v,k + v s Γisk gi ⊗ gk = v i |k gi ⊗ gk
div v (x) = tr (grad v) = v,ii + v s Γisi = v i |i

A.2.5 Components Notation of Tensor Derivatives

∂T k ∂ (T ij gi ⊗ gj ) k
div T (x) = g = g
∂θk ∂θk µ ¶ µ ¶
∂gi ∂gj
= T,kij (gi ⊗ gj ) gk + T ij k
⊗ gj · gk + T ij gi ⊗ k · gk
∂θ ∂θ
¡ ¢
= T,kik gi + T ik Γsik gs + T ij gi ⊗ Γsjk gs · gk
¡ ¢
= T,kik + T mk Γimk + T ij Γkjk gi = T ik |k gi
¡ k ¢ i
= T i,k − T km Γm m k k
ik + T i Γkm g = T i |k g
i

TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003 TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003
236 Appendix A. Formulary

Appendix B

Nomenclature

Notation Description
α, β, γ, . . . scalar quantities in R
a, b, c, . . . column matrices or vectors in Rn
aT , bT , cT , . . . row matrices or vectors in Rn
A, B, C, . . . matrices in Rn ⊗ Rn
a, b, c, . . . vectors or £rst order tensors in E n
A,
3 B, 3 C,3 ... second order tensors in En ⊗ En
A, B, C, . . . third order tensors in En ⊗ En ⊗ En
A, B, C, . . . fourth order tensors in En ⊗ En ⊗ En ⊗ En

Notation Description
tr the trace operator of a tensor or a matrix
det the determinant operator of a tensor or a matrix
sym the symmetric part of a tensor or a matrix
skew the antisymmetric or skew part of a tensor or a matrix
dev the deviator part of a tensor or a matrix
grad = ∇ the gradient operator
div the divergence operator
rot the rotation operator
∆ the laplacian or the Laplace operator

TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003 237
238 Appendix B. Nomenclature

Bibliography

[1] Ralph Abraham, Jerrold E. Marsden, and Tudor Ratiu. Manifolds, Tensor Analysis and
Applications. Applied Mathematical Sciences. Springer-Verlag, Berlin, Heidelberg, New
York, second edition, 1988.
[2] Albrecht Beutelspacher. Lineare Algebra. Vieweg Verlag, Braunschweig, Wiesbaden,
1998.
Notation Description [3] Reint de Boer. Vektor- und Tensorrechnung für Ingenieure. Springer-Verlag, Berlin, Hei-
R the set of the real numbers delberg, New York, 1982.
R3 the set of real-valued triples
E3 the 3-dimensional Euclidean vector space [4] Gerd Fischer. Lineare Algebra. Vieweg Verlag, Braunschweig, Wiesbaden, 1997.
E3 ⊗ E 3 the space of second order tensors over the Euclidean vector space
[5] Jimmie Gilbert and Linda Gilbert. Linear Algebra and Matrix Theory. Academic Press,
{e1 , e2 , e3 } 3-dimensional Cartesian basis
San Diego, 1995.
{g1 , g2 , g3 } 3-dimensional arbitrary covariant basis
{g1 , g2 , g3 } 3-dimensional arbitrary contravariant basis [6] Paul R. Halmos. Finite-Dimensional Vector Spaces. Undergraduate Texts in Mathematics.
gij covariant metric coef£cients Springer-Verlag, Berlin, Heidelberg, New York, 1974.
g ij contravariant metric coef£cients
g = g ij gi ⊗ gj metric tensor [7] Hans Karl Iben. Tensorrechnung. Mathematik für Ingenieure und Naturwissenschaftler.
Teubner-Verlag, Stuttgart, Leipzig, 1999.
[8] Klaus Jänich. Lineare Algebra. Springer-Verlag, Berlin, Heidelberg, New York, 1998.
[9] Wilhelm Klingenberg. Lineare Algebra und Geometrie. Springer-Verlag, Berlin, Heidel-
berg, New York, second edition, 1992.
[10] Allan D. Kraus. Matrices for Engineers. Springer-Verlag, Berlin, Heidelberg, New York,
1987.
[11] Paul C. Matthews. Vector Calculus. Undergraduate Mathematics Series. Springer-Verlag,
Berlin, Heidelberg, New York, 1998.
[12] James G. Simmonds. A Brief on Tensor Analysis. Undergraduate Texts in Mathematics.
Springer-Verlag, Berlin, Heidelberg, New York, second edition, 1994.
[13] Erwin Stein. Unterlagen zur Vorlesung Mathematik V für konstr. Ingenieure – Matrizen-
und Tensorrechnung SS 94. Institut für Baumechanik und Numerische Mechanik, Univer-
sität Hannover, 1994.

TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003 239
240 Bibliography

[14] Rudolf Zurmühl. Matrizen und ihre technischen Anwendungen. Springer-Verlag, Berlin,
Heidelberg, New York, fourth edition, 1964.

Glossary English – German

L1-norm - Integralnorm, L1-Norm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19


L2-norm - L2-Norm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
l1-norm - Summennorm, l1-Norm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
l2-norm, Euclidian norm - l2-Norm, euklidische Norm . . . . . . . . . . . . . . . . . . . . . . . . . . 19
n-tuple - n-Tupel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
p-norm - Maximumsnorm, p-Norm . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19

absolute norm - Gesamtnorm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22


absolute value - Betrag . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
absolute value of a tensor - Betrag eines Tensors . . . . . . . . . . . . . . . . . . . . . . . . . . 111, 113
additive - additiv . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
additive identity - additionsneutrales Element . . . . . . . . . . . . . . . . . . . . . . . 10, 42
additive inverse - inverses Element der Addition . . . . . . . . . . . . . . . . . . . . 10, 42
af£ne vector - af£ner Vektor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
af£ne vector space - af£ner Vektorraum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28, 54
antisymmetric - schiefsymmetrisch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
antisymmetric matrix - scheifsymmetrische Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . 41
antisymmetric part - antisymmetrischer Anteil . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
antisymmetric part of a tensor - antisymmetrischer Anteil eines Tensors . . . . . . . . . . . . . . 115
area vector - Flächenvektor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
associative - assoziativ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
associative rule - Assoziativgesetz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
associative under matrix addition -
assoziativ bzgl. Matrizenaddition . . . . . . . . . . . . . . . . . . . . . 42

base vectors - Basisvektoren . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31, 87


basis - Basis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31

TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003 241
242 Glossary English – German Glossary English – German 243

basis of the vector space - Basis eines Vektorraums . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15 complex conjugate eigenvalues - konjugiert komplexe Eigenwerte . . . . . . . . . . . . . . . . . . . . 124
bijective - bijektiv . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8 complex numbers - komplexe Zahlen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
bilinear - bilinear . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25 components - Komponenten . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31, 65
bilinear form - Bilinearform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26 components of the Christoffel symbol -
Komponenten des Christoffel-Symbols . . . . . . . . . . . . . . 142
binormal unit - Binormaleneinheitsvektor . . . . . . . . . . . . . . . . . . . . . . . . . . 137
components of the permutation tensor -
binormal unit vector - Binormaleneinheitsvektor . . . . . . . . . . . . . . . . . . . . . . . . . . 137 Komponenten des Permutationstensors . . . . . . . . . . . . . . . . 87
Cartesian base vectors - kartesische Basisvektoren . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88 composition - Komposition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34, 54, 106
Cartesian basis - kartesische Basis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149 congruence transformation - Kongruenztransformation, kontragrediente Transformation
Cartesian components of a permutation tensor - 56, 63
kartesische Komponenten des Permutationstensor . . . . . . 87 congruent - kongruent . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56, 63
Cartesian coordinates - kartesische Koordinaten . . . . . . . . . . . . . . . . . . . . . 78, 82, 144 continuum - Kontinuum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
Cauchy stress tensor - Cauchy-Spannungstensor . . . . . . . . . . . . . . . . . . . . . . . 96, 120 contravariant ε symbol - kontravariantes ε-Symbol . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
Cauchy’s inequality - Schwarzsche oder Cauchy-Schwarzsche Ungleichung . . 21 contravariant base vectors - kontravariante Basisvektoren . . . . . . . . . . . . . . . . . . . . 81, 139
Cayley-Hamilton Theorem - Cayley-Hamilton Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . 71 contravariant base vectors of the natural basis -
characteristic equation - charakteristische Gleichung . . . . . . . . . . . . . . . . . . . . . . . . . . 65 kontravariante Basisvektoren der natürlichen Basis . . . . 141
characteristic matrix - charakteristische Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70 contravariant coordinates - kontravariante Koordinaten, Koef£zienten . . . . . . . . . 80, 84
contravariant metric coef£cients -
characteristic polynomial - charakteristisches Polynom . . . . . . . . . . . . . . . . . . 56, 65, 123 kontravariante Metrikkoef£zienten . . . . . . . . . . . . . . . . 82, 83
Christoffel symbol - Christoffel-Symbol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142 coordinates - Koordinaten . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
cofactor - Kofaktor, algebraisches Komplement . . . . . . . . . . . . . . . . . 51 covariant ε symbol - kovariantes ε-Symbol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
column - Spalte . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40 covariant base vectors - kovariante Basisvektoren . . . . . . . . . . . . . . . . . . . . . . . . 80, 138
column index - Spaltenindex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40 covariant base vectors of the natural basis -
column matrix - Spaltenmatrix. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .28, 40, 46 kovariante Basisvektoren der natürlichen Basis . . . . . . . 140
column vector - Spaltenvektor . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28, 40, 46, 59 covariant coordinates - kovariante Koordinaten, Koef£zienten . . . . . . . . . . . . . 81, 84
combination - Kombination . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9, 34, 54 covariant derivative - kovariante Ableitung . . . . . . . . . . . . . . . . . . . . . . . . . . 146, 149
commutative - kommutativ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43 covariant metric coef£cients - kovariante Metrikkoef£zienten . . . . . . . . . . . . . . . . . . . . 80, 83
commutative matrix - kommutative Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43 cross product - Kreuzprodukt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87, 90, 96
commutative rule - Kommutativgesetz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10 curl - Rotation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
commutative under matrix addition - curvature - Krümmung . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
kommutativ bzgl. Matrizenaddition . . . . . . . . . . . . . . . . . . . 42 curvature of a curve - Krümmung einer Kurve . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
compatibility of vector and matrix norms -
curved surface - Raum¤äche, gekrümmte Ober¤äche . . . . . . . . . . . . . . . . . 138
Verträglichkeit von Vektor- und Matrix-Norm . . . . . . . . . . 22
curvilinear coordinate system - krummliniges Koordinatensystem . . . . . . . . . . . . . . . . . . . 139
compatible - verträglich . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
curvilinear coordinates - krummlinige Koordinaten . . . . . . . . . . . . . . . . . . . . . . 139, 144
complete fourth order tensor - vollständiger Tensor vierter Stufe . . . . . . . . . . . . . . . . . . . . 129
complete second order tensor - vollständige Tensor zweiter Stufe . . . . . . . . . . . . . . . . . . . . . 99 de£nite metric - de£nite Metrik. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .16
complete third order tensor - vollständiger Tensor dritter Stufe . . . . . . . . . . . . . . . . . . . . 129 de£nite norm - de£nite Norm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18

TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003 TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003
244 Glossary English – German Glossary English – German 245

deformation energy - Formänderungsenergie . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129 dot product - Punktprodukt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85


deformation gradient - Deformationsgradient . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118 dual space - Dualraum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36, 97
derivative of a scalar - Ableitung einer skalaren Größe . . . . . . . . . . . . . . . . . . . . . 133 dual vector space - dualer Vektoraum, Dualraum . . . . . . . . . . . . . . . . . . . . . . . . . 36
derivative of a tensor - Ableitung eines Tensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134 dummy index - stummer Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
derivative of a vector - Ableitung eines Vektors . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133 dyadic product - dyadisches Produkt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94–96
derivative w.r.t. a scalar variable -
Ableitung nach einer skalaren Größe . . . . . . . . . . . . . . . . 133 eigenvalue - Eigenwert . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65, 120
derivatives - Ableitungen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133 eigenvalue problem - Eigenwertproblem . . . . . . . . . . . . . . . . . . . . . . 22, 65, 122, 123
derivatives of base vectors - Ableitungen von Basisvektoren . . . . . . . . . . . . . . . . . 141, 145 eigenvalues - Eigenwerte . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22, 56

determinant - Determinante . . . 50, 65, 89
determinant expansion by minors - Determinantenentwicklungssatz mit Unterdeterminanten . . . 51
determinant of a tensor - Determinante eines Tensors . . . 112
determinant of the contravariant metric coefficients - Determinante der kontravarianten Metrikkoeffizienten . . . 83
determinant of the covariant metric coefficients - Determinante der kovarianten Metrikkoeffizienten . . . 83
determinant of the Jacobian matrix - Determinante der Jacobimatrix . . . 140
deviator matrix - Deviatormatrix . . . 46
deviator part of a tensor - Deviator eines Tensors . . . 113
diagonal matrix - Diagonalmatrix . . . 41, 43
differential element of area - differentielles Flächenelement . . . 139
dimension - Dimension . . . 13, 14
direct method - direkte Methode . . . 68
direct product - direktes Produkt . . . 94
directions of principal stress - Hauptspannungsrichtungen . . . 120
discrete metric - diskrete Metrik . . . 17
distance - Abstand . . . 17
distributive - distributiv . . . 42
distributive law - Distributivgesetz . . . 10
distributive w.r.t. addition - Distributivgesetz . . . 10
divergence of a tensor field - Divergenz eines Tensorfeldes . . . 147
divergence of a vector field - Divergenz eines Vektorfeldes . . . 147
divergence theorem - Divergenztheorem . . . 156
domain - Definitionsbereich . . . 8
eigenvector - Eigenvektor . . . 65, 120
eigenvector matrix - Eigenvektormatrix, Modalmatrix . . . 70
elastic - elastisch . . . 129
elasticity tensor - Elastizitätstensor . . . 129
elasticity theory - Elastizitätstheorie . . . 129
elements - Elemente . . . 6
empty set - leere Menge . . . 7
equilibrium condition of moments - Momentengleichgewichtsbedingung . . . 120
equilibrium conditions - Gleichgewichtsbedingungen . . . 96
equilibrium system of external forces - Gleichgewicht der äußeren Kräfte . . . 96
equilibrium system of forces - Kräftegleichgewicht . . . 96
Euclidean matrix norm - Euklidische Matrixnorm . . . 60
Euclidean norm - euklidische Norm . . . 22, 30, 85
Euclidean space - Euklidischer Raum . . . 17
Euclidean vector - euklidische Vektoren . . . 143
Euclidean vector space - euklidischer Vektorraum . . . 26, 29, 143
even permutation - gerade Permutation . . . 50
exact differential - vollständiges Differential . . . 133, 135
field - Feld . . . 143
field - Körper . . . 10
finite - endlich . . . 13
finite element method - Finite-Element-Methode . . . 57
first order tensor - Tensor erster Stufe . . . 127
fourth order tensor - Tensor vierter Stufe . . . 129

Frechet derivative - Frechet Ableitung . . . 143
free indices - freier Index . . . 78
function - Funktion . . . 8
fundamental tensor - Fundamentaltensor . . . 150
Gauss transformation - Gaußsche Transformation . . . 59
Gauss’s theorem - Gauss’scher Integralsatz . . . 155, 158
general eigenvalue problem - allgemeines Eigenwertproblem . . . 69
general permutation symbol - allgemeines Permutationssymbol . . . 92
gradient - Gradient . . . 143
gradient of a vector of position - Gradient eines Ortsvektors . . . 144
higher order tensor - Tensor höherer Stufe . . . 127
homeomorphic - homöomorph . . . 30, 97
homeomorphism - Homöomorphismus . . . 30
homogeneous - homogen . . . 32
homogeneous linear equation system - homogenes lineares Gleichungssystem . . . 65
homogeneous norm - homogene Norm . . . 18
homomorphism - Homomorphismus . . . 32
Hooke’s law - Hookesches Gesetz . . . 129
Hölder sum inequality - Höldersche Ungleichung . . . 21
identities for scalar products of tensors - Rechenregeln für Skalarprodukte von Tensoren . . . 110
identities for tensor products - Rechenregeln für Tensorprodukte . . . 106
identity element w.r.t. addition - neutrales Element der Addition . . . 10
identity element w.r.t. scalar multiplication - neutrales Element der Multiplikation . . . 10
identity matrix - Einheitsmatrix . . . 41, 45
identity matrix - Einheitsmatrix, Identität . . . 80
identity tensor - Einheitstensor . . . 112
image set - Bildbereich . . . 8
infinitesimal - infinitesimal . . . 96
infinitesimal tetrahedron - infinitesimaler Tetraeder . . . 96
injective - injektiv . . . 8
inner product - inneres Produkt . . . 25, 85
inner product of tensors - inneres Produkt von Tensoren . . . 110
inner product space - innerer Produktraum . . . 26, 27, 29
integers - ganze Zahlen . . . 7
integral theorem - Integralsatz . . . 156
intersection - Schnittmenge . . . 7
invariance - Invarianz . . . 57
invariant - Invariante . . . 120, 123, 148
invariant - invariant . . . 57
inverse - Inverse . . . 8, 34
inverse of a matrix - inverse Matrix . . . 48
inverse of a tensor - inverser Tensor . . . 115
inverse relation - inverse Beziehung . . . 103
inverse transformation - inverse Transformation . . . 101, 103
inverse w.r.t. addition - inverses Element der Addition . . . 10
inverse w.r.t. multiplication - inverses Element der Multiplikation . . . 10
inversion - Umkehrung . . . 48
invertible - invertierbar . . . 48
isomorphic - isomorph . . . 29, 35
isomorphism - Isomorphismus . . . 35
isotropic - isotrop . . . 129
isotropic tensor - isotroper Tensor . . . 119
iterative process - Iterationsvorschrift . . . 68
Jacobian - Jacobi-Determinante . . . 140
Kronecker delta - Kronecker-Delta . . . 52, 79, 119
l-infinity-norm, maximum-norm - Maximumnorm, ∞-Norm . . . 19
Laplace operator - Laplace-Operator . . . 150
laplacian of a scalar field - Laplace-Operator eines Skalarfeldes . . . 150
laplacian of a tensor field - Laplace-Operator eines Tensorfeldes . . . 151
laplacian of a vector field - Laplace-Operator eines Vektorfeldes . . . 150
left-hand Cauchy strain tensor - linker Cauchy-Strecktensor . . . 117
line element - Linienelement . . . 139

linear - linear . . . 32
linear algebra - lineare Algebra . . . 3
linear combination - Linearkombination . . . 15, 49, 71
linear dependence - lineare Abhängigkeit . . . 23, 30, 62
linear equation system - lineares Gleichungssystem . . . 48
linear form - Linearform . . . 36
linear independence - lineare Unabhängigkeit . . . 23, 30
linear manifold - lineare Mannigfaltigkeit . . . 15
linear mapping - lineare Abbildung . . . 32, 54, 97, 105, 121
linear operator - linearer Operator . . . 32
linear space - linearer Raum . . . 12
linear subspace - linearer Unterraum . . . 15
linear transformation - lineare Transformation . . . 32
linear vector space - linearer Vektorraum . . . 12
linearity - Linearität . . . 32, 34
linearly dependent - linear abhängig . . . 15, 23, 49
linearly independent - linear unabhängig . . . 15, 23, 48, 59, 66
lowering an index - Senken eines Index . . . 83
main diagonal - Hauptdiagonale . . . 41, 65
map - Abbildung . . . 8
mapping - Abbildung . . . 8
matrix - Matrix . . . 40
matrix calculus - Matrizenalgebra . . . 28
matrix multiplication - Matrizenmultiplikation . . . 42, 54
matrix norm - Matrix-Norm . . . 21, 22
matrix transpose - transponierte Matrix . . . 41
maximum absolute column sum norm - Spaltennorm . . . 22
maximum absolute row sum norm - Zeilennorm . . . 22
maximum-norm - Maximumsnorm, p-Norm . . . 20
mean value - Mittelwert . . . 153
metric - Metrik . . . 16
metric coefficients - Metrikkoeffizienten . . . 138
metric space - metrischer Raum . . . 17
metric tensor of covariant coefficients - Metriktensor mit kovarianten Koeffizienten . . . 102
mixed components - gemischte Komponenten . . . 99
mixed formulation of a second order tensor - gemischte Formulierung eines Tensors zweiter Stufe . . . 99
moment equilibrium condition - Momentengleichgewichtsbedingung . . . 120
moving trihedron - begleitendes Dreibein . . . 137
multiple roots - Mehrfachnullstellen . . . 70
multiplicative identity - multiplikationsneutrales Element . . . 45
multiplicative inverse - inverses Element der Multiplikation . . . 10
n-tuple - n-Tupel . . . 35
nabla operator - Nabla-Operator . . . 146
natural basis - natürliche Basis . . . 140
natural numbers - natürliche Zahlen . . . 6
naturals - natürliche Zahlen . . . 7
negative definite - negativ definit . . . 62
Newton’s relation - Vietasche Wurzelsätze . . . 66
non empty set - nicht leere Menge . . . 13
non-commutative - nicht-kommutativ . . . 45
noncommutative - nicht kommutativ . . . 69, 106
nonsingular - regulär, nicht singulär . . . 48, 59, 66
nonsingular square matrix - reguläre quadratische Matrix . . . 55
nonsymmetric - unsymmetrisch, nicht symmetrisch . . . 69
nontrivial solution - nicht triviale Lösung . . . 65
norm - Norm . . . 18, 65
norm of a tensor - Norm eines Tensors . . . 111
normal basis - normale Basis . . . 103
normal unit vector - Normaleneinheitsvektor . . . 121, 136, 138
normal vector - Normalenvektor . . . 96, 135
normed space - normierter Raum . . . 18
null mapping - Nullabbildung . . . 33
odd permutation - ungerade Permutation . . . 50

one - Einselement . . . 10
operation - Operation . . . 9
operation addition - Additionsoperation . . . 10
operation multiplication - Multiplikationsoperation . . . 10
order of a matrix - Ordnung einer Matrix . . . 40
origin - Ursprung, Nullelement . . . 12
orthogonal - orthogonal . . . 66
orthogonal matrix - orthogonale Matrix . . . 57
orthogonal tensor - orthogonaler Tensor . . . 116
orthogonal transformation - orthogonale Transformation . . . 57
orthonormal basis - orthonormale Basis . . . 144
outer product - äußeres Produkt . . . 87
overlined basis - überstrichene Basis . . . 103
parallelepiped - Parallelepiped . . . 88
partial derivatives - partielle Ableitungen . . . 134
partial derivatives of base vectors - partielle Ableitungen von Basisvektoren . . . 145
permutation symbol - Permutationssymbol . . . 87, 112, 128
permutation tensor - Permutationstensor . . . 128
permutations - Permutationen . . . 50
point of origin - Koordinatenursprung, -nullpunkt . . . 28
Poisson’s ratio - Querkontraktionszahl . . . 129
polar decomposition - polare Zerlegung . . . 117
polynomial factorization - Polynomzerlegung . . . 66
polynomial of n-th degree - Polynom n-ten Grades . . . 65
position vector - Ortsvektor . . . 135, 152
positive definite - positiv definit . . . 25, 61, 62, 111
positive metric - positive Metrik . . . 16
positive norm - positive Norm . . . 18
post-multiplication - Nachmultiplikation . . . 45
potential character - Potentialeigenschaft . . . 129
power series - Potenzreihe . . . 71
pre-multiplication - Vormultiplikation . . . 45
principal axes - Hauptachsen . . . 120
principal axes problem - Hauptachsenproblem . . . 65
principal axis - Hauptachse . . . 65
principal stress directions - Hauptspannungsrichtungen . . . 122
principal stresses - Hauptspannungen . . . 122
product - Produkt . . . 10
proper orthogonal tensor - eigentlich orthogonaler Tensor . . . 116
quadratic form - quadratische Form . . . 26, 57, 62, 124
quadratic value of the norm - Normquadrate . . . 60
raising an index - Heben eines Index . . . 83
range - Bildbereich . . . 8
range - Urbild . . . 8
rank - Rang . . . 48
rational numbers - rationale Zahlen . . . 7
Rayleigh quotient - Rayleigh-Quotient . . . 67, 68
real numbers - reelle Zahlen . . . 7
rectangular matrix - Rechteckmatrix . . . 40
reduction of rank - Rangabfall . . . 66
Riesz representation theorem - Riesz Abbildungssatz . . . 36
right-hand Cauchy strain tensor - rechter Cauchy-Strecktensor . . . 117
roots - Nullstellen . . . 65
rotated coordinate system - gedrehtes Koordinatensystem . . . 119
rotation matrix - Drehmatrix . . . 58
rotation of a vector field - Rotation eines Vektorfeldes . . . 150
rotation transformation - Drehtransformation . . . 58
rotator - Rotor . . . 116
row - Zeile . . . 40
row index - Zeilenindex . . . 40
row matrix - Zeilenmatrix . . . 40, 45
row vector - Zeilenvektor . . . 40, 45
scalar field - Skalarfeld . . . 143
scalar function - Skalarfunktion . . . 133
scalar invariant - skalare Invariante . . . 149

scalar multiplication - skalare Multiplikation . . . 9, 12, 42
scalar multiplication identity - multiplikationsneutrales Element . . . 10
scalar product - Skalarprodukt . . . 9, 25, 85, 96
scalar product of tensors - Skalarprodukt von Tensoren . . . 110
scalar product of two dyads - Skalarprodukt zweier Dyadenprodukte . . . 111
scalar triple product - Spatprodukt . . . 88, 90, 152
scalar-valued function of multiple variables - skalarwertige Funktion mehrerer Veränderlicher . . . 134
scalar-valued scalar function - skalarwertige Skalarfunktion . . . 133
scalar-valued vector function - skalarwertige Vektorfunktion . . . 143
Schwarz inequality - Schwarzsche Ungleichung . . . 26, 111
second derivative - zweite Ableitung . . . 133
second order tensor - Tensor zweiter Stufe . . . 96, 97, 127
second order tensor product - Produkt von Tensoren zweiter Stufe . . . 105
section surface - Schnittfläche . . . 121
semidefinite - semidefinit . . . 62
Serret-Frenet equations - Frenetsche Formeln . . . 137
set - Menge . . . 6
set theory - Mengenlehre . . . 6
shear stresses - Schubspannungen . . . 121
similar - ähnlich, kogredient . . . 55, 69
similarity transformation - Ähnlichkeitstransformation, kogrediente Transformation . . . 55
simple fourth order tensor - einfacher Tensor vierter Stufe . . . 129
simple second order tensor - einfacher Tensor zweiter Stufe . . . 94, 99
simple third order tensor - einfacher Tensor dritter Stufe . . . 129
skew part of a tensor - schief- oder antisymmetrischer Anteil eines Tensors . . . 115
space - Raum . . . 12
space curve - Raumkurve . . . 135
space of continuous functions - Raum der stetigen Funktionen . . . 14
space of square matrices - Raum der quadratischen Matrizen . . . 14
span - Hülle . . . 15
special eigenvalue problem - spezielles Eigenwertproblem . . . 65
spectral norm - Spektralnorm, Hilbert-Norm . . . 22
square - quadratisch . . . 40
square matrix - quadratische Matrix . . . 40
Stokes’ theorem - Stokescher Integralsatz, Integralsatz für ein Kreuzprodukt . . . 157
strain tensor - Verzerrungstensor, Dehnungstensor . . . 129
stress state - Spannungszustand . . . 96
stress tensor - Spannungstensor . . . 96, 129
stress vector - Spannungsvektor . . . 96
subscript index - untenstehender Index . . . 78
subset - Untermenge . . . 7
summation convention - Summenkonvention . . . 78
superscript index - obenstehender Index . . . 78
superset - Obermenge . . . 7
supremum - obere Schranke . . . 22
surface - Oberfläche . . . 152
surface element - Oberflächenelement . . . 152
surface integral - Oberflächenintegral . . . 152
surjective - surjektiv . . . 8
symbols - Symbole . . . 6
symmetric - symmetrisch . . . 25, 41
symmetric matrix - symmetrische Matrix . . . 41
symmetric metric - symmetrische Metrik . . . 16
symmetric part - symmetrischer Anteil . . . 44
symmetric part of a tensor - symmetrischer Anteil eines Tensors . . . 115
symmetric tensor - symmetrischer Tensor . . . 124
tangent unit vector - Tangenteneinheitsvektor . . . 135
tangent vector - Tangentenvektor . . . 135
Taylor series - Taylor-Reihe . . . 133, 153
tensor - Tensor . . . 96
tensor axioms - Axiome für Tensoren . . . 98
tensor field - Tensorfeld . . . 143
tensor product - Tensorprodukt . . . 105, 106
tensor product of two dyads - Tensorprodukt zweier Dyadenprodukte . . . 106
tensor space - Tensorraum . . . 94

tensor with contravariant base vectors and covariant coordinates - Tensor mit kontravarianten Basisvektoren und kovarianten Koeffizienten . . . 100
tensor with covariant base vectors and contravariant coordinates - Tensor mit kovarianten Basisvektoren und kontravarianten Koeffizienten . . . 99
tensor-valued function of multiple variables - tensorwertige Funktion mehrerer Veränderlicher . . . 134
tensor-valued scalar function - tensorwertige Skalarfunktion . . . 133
tensor-valued vector function - tensorwertige Vektorfunktion . . . 143
third order fundamental tensor - Fundamentaltensor dritter Stufe . . . 128
third order tensor - Tensor dritter Stufe . . . 127
topology - Topologie . . . 30
torsion of a curve - Torsion einer Kurve . . . 137
total differential - vollständiges Differential . . . 133
trace of a matrix - Spur einer Matrix . . . 43
trace of a tensor - Spur eines Tensors . . . 112
transformation matrix - Transformationsmatrix . . . 55
transformation of base vectors - Transformation der Basisvektoren . . . 101
transformation of the metric coefficients - Transformation der Metrikkoeffizienten . . . 84
transformation relations - Transformationsformeln . . . 103
transformation tensor - Transformationstensor . . . 101
transformed contravariant base vector - transformierter kontravarianter Basisvektor . . . 103
transformed covariant base vector - transformierter kovarianter Basisvektor . . . 103
transpose of a matrix - transponierte Matrix . . . 41
transpose of a matrix product - transponiertes Matrizenprodukt . . . 44
transpose of a tensor - transponierter Tensor . . . 114
triangle inequality - Dreiecksungleichung . . . 16, 19
trivial solution - triviale Lösung . . . 65
union - Vereinigungsmenge . . . 7
unit matrix - Einheitsmatrix . . . 80
unitary space - unitärer Raum . . . 27
unitary vector space - unitärer Vektorraum . . . 29
usual scalar product - übliches Skalarprodukt . . . 25
vector - Vektor . . . 12, 28, 127
vector field - Vektorfeld . . . 143
vector function - Vektorfunktion . . . 135
vector norm - Vektor-Norm . . . 18, 22
vector of associated direction - Richtungsvektoren . . . 120
vector of position - Ortsvektoren . . . 143
vector product - Vektorprodukt . . . 87
vector space - Vektorraum . . . 12, 49
vector space of linear mappings - Vektorraum der linearen Abbildungen . . . 33
vector-valued function - vektorwertige Funktion . . . 138
vector-valued function of multiple variables - vektorwertige Funktion mehrerer Veränderlicher . . . 134
vector-valued scalar function - vektorwertige Skalarfunktion . . . 133
vector-valued vector function - vektorwertige Vektorfunktion . . . 143
visual space - Anschauungsraum . . . 97
volume - Volumen . . . 152
volume element - Volumenelement . . . 152
volume integral - Volumenintegral . . . 152
volumetric matrix - Kugelmatrix . . . 46
volumetric part of a tensor - Kugelanteil eines Tensors . . . 113
von Mises iteration - von Mises Iteration . . . 68
whole numbers - natürliche Zahlen . . . 7
Young’s modulus - Elastizitätsmodul . . . 129
zero element - Nullelement . . . 10
zero vector - Nullvektor . . . 12
zeros - Nullstellen . . . 65


Glossary German – English

L2-Norm - L2-norm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
l2-Norm, euklidische Norm - l2-norm, Euclidean norm . . . 19
n-Tupel - n-tuple . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28

Abbildung - map . . . 8
Abbildung - mapping . . . 8
Ableitung einer skalaren Größe - derivative of a scalar . . . 133
Ableitung eines Tensors - derivative of a tensor . . . 134
Ableitung eines Vektors - derivative of a vector . . . 133
Ableitung nach einer skalaren Größe - derivative w.r.t. a scalar variable . . . 133
Ableitungen - derivatives . . . 133
Ableitungen von Basisvektoren - derivatives of base vectors . . . 141, 145
Abstand - distance . . . 17
additionsneutrales Element - additive identity . . . 10, 42
Additionsoperation - operation addition . . . 10
additiv - additive . . . 32
ähnlich, kogredient - similar . . . 55, 69
Ähnlichkeitstransformation, kogrediente Transformation - similarity transformation . . . 55
äußeres Produkt - outer product . . . 87
affiner Vektor - affine vector . . . 28
affiner Vektorraum - affine vector space . . . 28, 54
allgemeines Eigenwertproblem - general eigenvalue problem . . . 69
allgemeines Permutationssymbol - general permutation symbol . . . 92
Anschauungsraum - visual space . . . 97

antisymmetrischer Anteil - antisymmetric part . . . 44
antisymmetrischer Anteil eines Tensors - antisymmetric part of a tensor . . . 115
assoziativ - associative . . . 42
assoziativ bzgl. Matrizenaddition - associative under matrix addition . . . 42
Assoziativgesetz - associative rule . . . 10
Axiome für Tensoren - tensor axioms . . . 98
Basis - basis . . . 31
Basis eines Vektorraums - basis of the vector space . . . 15
Basisvektoren - base vectors . . . 31, 87
begleitendes Dreibein - moving trihedron . . . 137
Betrag - absolute value . . . 85
Betrag eines Tensors - absolute value of a tensor . . . 111, 113
bijektiv - bijective . . . 8
Bildbereich - image set . . . 8
Bildbereich - range . . . 8
bilinear - bilinear . . . 25
Bilinearform - bilinear form . . . 26
Binormaleneinheitsvektor - binormal unit vector . . . 137
Cauchy-Spannungstensor - Cauchy stress tensor . . . 96, 120
Cayley-Hamilton Theorem - Cayley-Hamilton theorem . . . 71
charakteristische Gleichung - characteristic equation . . . 65
charakteristische Matrix - characteristic matrix . . . 70
charakteristisches Polynom - characteristic polynomial . . . 56, 65, 123
Christoffel-Symbol - Christoffel symbol . . . 142
definite Metrik - definite metric . . . 16
definite Norm - definite norm . . . 18
Definitionsbereich - domain . . . 8
Deformationsgradient - deformation gradient . . . 118
Determinante - determinant . . . 50, 65, 89
Determinante der Jacobimatrix - determinant of the Jacobian matrix . . . 140
Determinante der kontravarianten Metrikkoeffizienten - determinant of the contravariant metric coefficients . . . 83
Determinante der kovarianten Metrikkoeffizienten - determinant of the covariant metric coefficients . . . 83
Determinante eines Tensors - determinant of a tensor . . . 112
Determinantenentwicklungssatz mit Unterdeterminanten - determinant expansion by minors . . . 51
Deviator eines Tensors - deviator part of a tensor . . . 113
Deviatormatrix - deviator matrix . . . 46
Diagonalmatrix - diagonal matrix . . . 41, 43
differentielles Flächenelement - differential element of area . . . 139
Dimension - dimension . . . 13, 14
direkte Methode - direct method . . . 68
direktes Produkt - direct product . . . 94
diskrete Metrik - discrete metric . . . 17
distributiv - distributive . . . 42
Distributivgesetz - distributive law . . . 10
Distributivgesetz - distributive w.r.t. addition . . . 10
Divergenz eines Tensorfeldes - divergence of a tensor field . . . 147
Divergenz eines Vektorfeldes - divergence of a vector field . . . 147
Divergenztheorem - divergence theorem . . . 156
Drehmatrix - rotation matrix . . . 58
Drehtransformation - rotation transformation . . . 58
Dreiecksungleichung - triangle inequality . . . 16, 19
dualer Vektorraum, Dualraum - dual vector space . . . 36
Dualraum - dual space . . . 36, 97
dyadisches Produkt - dyadic product . . . 94–96
eigentlich orthogonaler Tensor - proper orthogonal tensor . . . 116
Eigenvektor - eigenvector . . . 65, 120
Eigenvektormatrix, Modalmatrix - eigenvector matrix . . . 70
Eigenwert - eigenvalue . . . 65, 120
Eigenwerte - eigenvalues . . . 22, 56
Eigenwertproblem - eigenvalue problem . . . 22, 65, 122, 123
einfacher Tensor dritter Stufe - simple third order tensor . . . 129

einfacher Tensor vierter Stufe - simple fourth order tensor . . . 129
einfacher Tensor zweiter Stufe - simple second order tensor . . . 94, 99
Einheitsmatrix - identity matrix . . . 41, 45
Einheitsmatrix - unit matrix . . . 80
Einheitsmatrix, Identität - identity matrix . . . 80
Einheitstensor - identity tensor . . . 112
Einselement - one . . . 10
elastisch - elastic . . . 129
Elastizitätsmodul - Young’s modulus . . . 129
Elastizitätstensor - elasticity tensor . . . 129
Elastizitätstheorie - elasticity theory . . . 129
Elemente - elements . . . 6
endlich - finite . . . 13
Euklidische Matrixnorm - Euclidean matrix norm . . . 60
euklidische Norm - Euclidean norm . . . 22, 30, 85
euklidische Vektoren - Euclidean vector . . . 143
Euklidischer Raum - Euclidean space . . . 17
euklidischer Vektorraum - Euclidean vector space . . . 26, 29, 143
Feld - field . . . 143
Finite-Element-Methode - finite element method . . . 57
Flächenvektor - area vector . . . 152
Formänderungsenergie - deformation energy . . . 129
Frechet Ableitung - Frechet derivative . . . 143
freier Index - free indices . . . 78
Frenetsche Formeln - Serret-Frenet equations . . . 137
Fundamentaltensor - fundamental tensor . . . 150
Fundamentaltensor dritter Stufe - third order fundamental tensor . . . 128
Funktion - function . . . 8
ganze Zahlen - integers . . . 7
Gauss’scher Integralsatz - Gauss’s theorem . . . 155, 158
Gaußsche Transformation - Gauss transformation . . . 59
gedrehtes Koordinatensystem - rotated coordinate system . . . 119
gemischte Formulierung eines Tensors zweiter Stufe - mixed formulation of a second order tensor . . . 99
gemischte Komponenten - mixed components . . . 99
gerade Permutation - even permutation . . . 50
Gesamtnorm - absolute norm . . . 22
Gleichgewicht der äußeren Kräfte - equilibrium system of external forces . . . 96
Gleichgewichtsbedingungen - equilibrium conditions . . . 96
Gradient - gradient . . . 143
Gradient eines Ortsvektors - gradient of a vector of position . . . 144
Hauptachse - principal axis . . . 65
Hauptachsen - principal axes . . . 120
Hauptachsenproblem - principal axes problem . . . 65
Hauptdiagonale - main diagonal . . . 41, 65
Hauptspannungen - principal stresses . . . 122
Hauptspannungsrichtungen - directions of principal stress . . . 120
Hauptspannungsrichtungen - principal stress directions . . . 122
Heben eines Index - raising an index . . . 83
homogen - homogeneous . . . 32
homogene Norm - homogeneous norm . . . 18
homogenes lineares Gleichungssystem - homogeneous linear equation system . . . 65
Homomorphismus - homomorphism . . . 32
homöomorph - homeomorphic . . . 30, 97
Homöomorphismus - homeomorphism . . . 30
Hookesches Gesetz - Hooke’s law . . . 129
Höldersche Ungleichung - Hölder sum inequality . . . 21
Hülle - span . . . 15
infinitesimal - infinitesimal . . . 96
infinitesimaler Tetraeder - infinitesimal tetrahedron . . . 96
injektiv - injective . . . 8
innerer Produktraum - inner product space . . . 26, 27, 29
inneres Produkt - inner product . . . 25, 85

inneres Produkt von Tensoren - inner product of tensors . . . 110
Integralnorm, L1-Norm - L1-norm . . . 19
Integralsatz - integral theorem . . . 156
invariant - invariant . . . 57
Invariante - invariant . . . 120, 123, 148
Invarianz - invariance . . . 57
Inverse - inverse . . . 8, 34
inverse Beziehung - inverse relation . . . 103
inverse Matrix - inverse of a matrix . . . 48
inverse Transformation - inverse transformation . . . 101, 103
inverser Tensor - inverse of a tensor . . . 115
inverses Element der Addition - additive inverse . . . 10, 42
inverses Element der Addition - inverse w.r.t. addition . . . 10
inverses Element der Multiplikation - inverse w.r.t. multiplication . . . 10
inverses Element der Multiplikation - multiplicative inverse . . . 10
invertierbar - invertible . . . 48
isomorph - isomorphic . . . 29, 35
Isomorphismus - isomorphism . . . 35
isotrop - isotropic . . . 129
isotroper Tensor - isotropic tensor . . . 119
Iterationsvorschrift - iterative process . . . 68
Jacobi-Determinante - Jacobian . . . 140
kartesische Basis - Cartesian basis . . . 149
kartesische Basisvektoren - Cartesian base vectors . . . 88
kartesische Komponenten des Permutationstensors - Cartesian components of a permutation tensor . . . 87
kartesische Koordinaten - Cartesian coordinates . . . 78, 82, 144
Kofaktor, algebraisches Komplement - cofactor . . . 51
Kombination - combination . . . 9, 34, 54
kommutativ - commutative . . . 43
kommutativ bzgl. Matrizenaddition - commutative under matrix addition . . . 42
kommutative Matrix - commutative matrix . . . 43
Kommutativgesetz - commutative rule . . . 10
komplexe Zahlen - complex numbers . . . 7
Komponenten - components . . . 31, 65
Komponenten des Christoffel-Symbols - components of the Christoffel symbol . . . 142
Komponenten des Permutationstensors - components of the permutation tensor . . . 87
Komposition - composition . . . 34, 54, 106
kongruent - congruent . . . 56, 63
Kongruenztransformation, kontragrediente Transformation - congruence transformation . . . 56, 63
konjugiert komplexe Eigenwerte - complex conjugate eigenvalues . . . 124
Kontinuum - continuum . . . 96
kontravariante Basisvektoren - contravariant base vectors . . . 81, 139
kontravariante Basisvektoren der natürlichen Basis - contravariant base vectors of the natural basis . . . 141
kontravariante Koordinaten, Koeffizienten - contravariant coordinates . . . 80, 84
kontravariante Metrikkoeffizienten - contravariant metric coefficients . . . 82, 83
kontravariantes ε-Symbol - contravariant ε symbol . . . 93
Koordinaten - coordinates . . . 31
Koordinatenursprung, -nullpunkt - point of origin . . . 28
kovariante Ableitung - covariant derivative . . . 146, 149
kovariante Basisvektoren - covariant base vectors . . . 80, 138
kovariante Basisvektoren der natürlichen Basis - covariant base vectors of the natural basis . . . 140
kovariante Koordinaten, Koeffizienten - covariant coordinates . . . 81, 84
kovariante Metrikkoeffizienten - covariant metric coefficients . . . 80, 83
kovariantes ε-Symbol - covariant ε symbol . . . 92
Kreuzprodukt - cross product . . . 87, 90, 96
Kronecker-Delta - Kronecker delta . . . 52, 79, 119
krummlinige Koordinaten - curvilinear coordinates . . . 139, 144
krummliniges Koordinatensystem - curvilinear coordinate system . . . 139

Kräftegleichgewicht - equilibrium system of forces, 96
Krümmung - curvature, 135
Krümmung einer Kurve - curvature of a curve, 136
Kugelanteil eines Tensors - volumetric part of a tensor, 113
Kugelmatrix - volumetric matrix, 46
Körper - field, 10
Laplace-Operator - Laplace operator, 150
Laplace-Operator eines Skalarfeldes - laplacian of a scalar field, 150
Laplace-Operator eines Tensorfeldes - laplacian of a tensor field, 151
Laplace-Operator eines Vektorfeldes - laplacian of a vector field, 150
leere Menge - empty set, 7
linear - linear, 32
linear abhängig - linearly dependent, 15, 23, 49
linear unabhängig - linearly independent, 15, 23, 48, 59, 66
lineare Abbildung - linear mapping, 32, 54, 97, 105, 121
lineare Abhängigkeit - linear dependence, 23, 30, 62
lineare Algebra - linear algebra, 3
lineare Mannigfaltigkeit - linear manifold, 15
lineare Transformation - linear transformation, 32
lineare Unabhängigkeit - linear independence, 23, 30
linearer Operator - linear operator, 32
linearer Raum - linear space, 12
linearer Unterraum - linear subspace, 15
linearer Vektorraum - linear vector space, 12
lineares Gleichungssystem - linear equation system, 48
Linearform - linear form, 36
Linearität - linearity, 32, 34
Linearkombination - linear combination, 15, 49, 71
Linienelement - line element, 139
linker Cauchy-Strecktensor - left-hand Cauchy strain tensor, 117
Matrix - matrix, 40
Matrix-Norm - matrix norm, 21, 22
Matrizenalgebra - matrix calculus, 28
Matrizenmultiplikation - matrix multiplication, 42, 54
Maximumnorm, ∞-Norm - l-infinity-norm, maximum-norm, 19
Maximumsnorm, p-Norm - p-norm, 19
Maximumsnorm, p-Norm - maximum-norm, 20
Mehrfachnullstellen - multiple roots, 70
Menge - set, 6
Mengenlehre - set theory, 6
Metrik - metric, 16
Metrikkoeffizienten - metric coefficients, 138
Metriktensor mit kovarianten Koeffizienten - metric tensor of covariant coefficients, 102
metrischer Raum - metric space, 17
von Mises Iteration - von Mises iteration, 68
Mittelwert - mean value, 153
Momentengleichgewichtsbedingung - equilibrium condition of moments, 120
Momentengleichgewichtsbedingung - moment equilibrium condition, 120
multiplikationsneutrales Element - multiplicative identity, 45
multiplikationsneutrales Element - scalar multiplication identity, 10
Multiplikationsoperation - operation multiplication, 10
n-Tupel - n-tuple, 35
Nabla-Operator - nabla operator, 146
Nachmultiplikation - post-multiplication, 45
natürliche Basis - natural basis, 140
natürliche Zahlen - natural numbers, 6
natürliche Zahlen - naturals, 7
natürliche Zahlen - whole numbers, 7
negativ definit - negative definite, 62
neutrales Element der Addition - identity element w.r.t. addition, 10
neutrales Element der Multiplikation - identity element w.r.t. scalar multiplication, 10
nicht kommutativ - noncommutative, 69, 106
nicht leere Menge - non empty set, 13
nicht triviale Lösung - nontrivial solution, 65
nicht-kommutativ - non-commutative, 45
Norm - norm, 18, 65
Norm eines Tensors - norm of a tensor, 111
normale Basis - normal basis, 103
Normaleneinheitsvektor - normal unit, 136
Normaleneinheitsvektor - normal unit vector, 121, 136, 138
Normalenvektor - normal vector, 96, 135
normierter Raum - normed space, 18
Normquadrate - quadratic value of the norm, 60
Nullabbildung - null mapping, 33
Nullelement - zero element, 10
Nullstellen - roots, 65
Nullstellen - zeros, 65
Nullvektor - zero vector, 12
obenstehender Index - superscript index, 78
obere Schranke - supremum, 22
Oberfläche - surface, 152
Oberflächenelement - surface element, 152
Oberflächenintegral - surface integral, 152
Obermenge - superset, 7
Operation - operation, 9
Ordnung einer Matrix - order of a matrix, 40
orthogonal - orthogonal, 66
orthogonale Matrix - orthogonal matrix, 57
orthogonale Transformation - orthogonal transformation, 57
orthogonaler Tensor - orthogonal tensor, 116
orthonormale Basis - orthonormal basis, 144
Ortsvektor - position vector, 135, 152
Ortsvektoren - vector of position, 143
Parallelepiped - parallelepiped, 88
partielle Ableitungen - partial derivatives, 134
partielle Ableitungen von Basisvektoren - partial derivatives of base vectors, 145
Permutationen - permutations, 50
Permutationssymbol - permutation symbol, 87, 112, 128
Permutationstensor - permutation tensor, 128
polare Zerlegung - polar decomposition, 117
Polynom n-ten Grades - polynomial of n-th degree, 65
Polynomzerlegung - polynomial factorization, 66
positiv definit - positive definite, 25, 61, 62, 111
positive Metrik - positive metric, 16
positive Norm - positive norm, 18
Potentialeigenschaft - potential character, 129
Potenzreihe - power series, 71
Produkt - product, 10
Produkt von Tensoren zweiter Stufe - second order tensor product, 105
Punktprodukt - dot product, 85
quadratisch - square, 40
quadratische Form - quadratic form, 26, 57, 62, 124
quadratische Matrix - square matrix, 40
Querkontraktionszahl - Poisson's ratio, 129
Rang - rank, 48
Rangabfall - reduction of rank, 66
rationale Zahlen - rational numbers, 7
Raum - space, 12
Raum der quadratischen Matrizen - space of square matrices, 14
Raum der stetigen Funktionen - space of continuous functions, 14
Raumfläche, gekrümmte Oberfläche - curved surface, 138
Raumkurve - space curve, 135
Rayleigh-Quotient - Rayleigh quotient, 67, 68
Rechenregeln für Skalarprodukte von Tensoren - identities for scalar products of tensors, 110
Rechenregeln für Tensorprodukte - identities for tensor products, 106
Rechteckmatrix - rectangular matrix, 40
rechter Cauchy-Strecktensor - right-hand Cauchy strain tensor, 117
reelle Zahlen - real numbers, 7
regulär, nicht singulär - nonsingular, 48, 59, 66
reguläre quadratische Matrix - nonsingular square matrix, 55
Richtungsvektoren - vector of associated direction, 120
Riesz Abbildungssatz - Riesz representation theorem, 36
Rotation - curl, 150
Rotation eines Vektorfeldes - rotation of a vector field, 150
Rotor - rotator, 116
schief- oder antisymmetrischer Anteil eines Tensors - skew part of a tensor, 115
schiefsymmetrisch - antisymmetric, 41
schiefsymmetrische Matrix - antisymmetric matrix, 41
Schnittfläche - section surface, 121
Schnittmenge - intersection, 7
Schubspannungen - shear stresses, 121
Schwarzsche oder Cauchy-Schwarzsche Ungleichung - Cauchy's inequality, 21
Schwarzsche Ungleichung - Schwarz inequality, 26, 111
semidefinit - semidefinite, 62
Senken eines Index - lowering an index, 83
skalare Invariante - scalar invariant, 149
skalare Multiplikation - scalar multiplication, 9, 12, 42
Skalarfeld - scalar field, 143
Skalarfunktion - scalar function, 133
Skalarprodukt - scalar product, 9, 25, 85, 96
Skalarprodukt von Tensoren - scalar product of tensors, 110
Skalarprodukt zweier Dyadenprodukte - scalar product of two dyads, 111
skalarwertige Funktion mehrerer Veränderlicher - scalar-valued function of multiple variables, 134
skalarwertige Skalarfunktion - scalar-valued scalar function, 133
skalarwertige Vektorfunktion - scalar-valued vector function, 143
Spalte - column, 40
Spaltenindex - column index, 40
Spaltenmatrix - column matrix, 28, 40, 46
Spaltennorm - maximum absolute column sum norm, 22
Spaltenvektor - column vector, 28, 40, 46, 59
Spannungstensor - stress tensor, 96, 129
Spannungsvektor - stress vector, 96
Spannungszustand - stress state, 96
Spatprodukt - scalar triple product, 88, 90, 152
Spektralnorm, Hilbert-Norm - spectral norm, 22
spezielles Eigenwertproblem - special eigenvalue problem, 65
Spur einer Matrix - trace of a matrix, 43
Spur eines Tensors - trace of a tensor, 112
Stokescher Integralsatz, Integralsatz für ein Kreuzprodukt - Stokes' theorem, 157
stummer Index - dummy index, 78
Summenkonvention - summation convention, 78
Summennorm, l1-Norm - l1-norm, 19
surjektiv - surjective, 8
Symbole - symbols, 6
symmetrisch - symmetric, 25, 41
symmetrische Matrix - symmetric matrix, 41
symmetrische Metrik - symmetric metric, 16
symmetrischer Anteil - symmetric part, 44
symmetrischer Anteil eines Tensors - symmetric part of a tensor, 115
symmetrischer Tensor - symmetric tensor, 124
Tangenteneinheitsvektor - tangent unit, 135
Tangenteneinheitsvektor - tangent unit vector, 135
Tangentenvektor - tangent vector, 135
Taylor-Reihe - Taylor series, 133, 153
Tensor - tensor, 96
Tensor dritter Stufe - third order tensor, 127
Tensor erster Stufe - first order tensor, 127
Tensor höherer Stufe - higher order tensor, 127
Tensor mit kontravarianten Basisvektoren und kovarianten Koeffizienten - tensor with contravariant base vectors and covariant coordinates, 100
Tensor mit kovarianten Basisvektoren und kontravarianten Koeffizienten - tensor with covariant base vectors and contravariant coordinates, 99
Tensor vierter Stufe - fourth order tensor, 129
Tensor zweiter Stufe - second order tensor, 96, 97, 127
Tensorfeld - tensor field, 143
Tensorprodukt - tensor product, 105, 106
Tensorprodukt zweier Dyadenprodukte - tensor product of two dyads, 106
Tensorraum - tensor space, 94
tensorwertige Funktion mehrerer Veränderlicher - tensor-valued function of multiple variables, 134
tensorwertige Skalarfunktion - tensor-valued scalar function, 133
tensorwertige Vektorfunktion - tensor-valued vector function, 143
Topologie - topology, 30
Torsion einer Kurve - torsion of a curve, 137
Transformation der Basisvektoren - transformation of base vectors, 101
Transformation der Metrikkoeffizienten - transformation of the metric coefficients, 84
Transformationsformeln - transformation relations, 103
Transformationsmatrix - transformation matrix, 55
Transformationstensor - transformation tensor, 101
transformierter kontravarianter Basisvektor - transformed contravariant base vector, 103
transformierter kovarianter Basisvektor - transformed covariant base vector, 103
transponierte Matrix - matrix transpose, 41
transponierte Matrix - transpose of a matrix, 41
transponierter Tensor - transpose of a tensor, 114
transponiertes Matrizenprodukt - transpose of a matrix product, 44
triviale Lösung - trivial solution, 65
überstrichene Basis - overlined basis, 103
übliches Skalarprodukt - usual scalar product, 25
Umkehrung - inversion, 48
ungerade Permutation - odd permutation, 50
unitärer Raum - unitary space, 27
unitärer Vektorraum - unitary vector space, 29
unsymmetrisch, nicht symmetrisch - nonsymmetric, 69
untenstehender Index - subscript index, 78
Untermenge - subset, 7
Urbild - range, 8
Ursprung, Nullelement - origin, 12
Vektor - vector, 12, 28, 127
Vektor-Norm - vector norm, 18, 22
Vektorfeld - vector field, 143
Vektorfunktion - vector function, 135
Vektorprodukt - vector product, 87
Vektorraum - vector space, 12, 49
Vektorraum der linearen Abbildungen - vector space of linear mappings, 33
vektorwertige Funktion - vector-valued function, 138
vektorwertige Funktion mehrerer Veränderlicher - vector-valued function of multiple variables, 134
vektorwertige Skalarfunktion - vector-valued scalar function, 133
vektorwertige Vektorfunktion - vector-valued vector function, 143
Vereinigungsmenge - union, 7
verträglich - compatible, 22
Verträglichkeit von Vektor- und Matrix-Norm - compatibility of vector and matrix norms, 22
Verzerrungstensor, Dehnungstensor - strain tensor, 129
Vietasche Wurzelsätze - Newton's relation, 66
vollständiger Tensor zweiter Stufe - complete second order tensor, 99
vollständiger Tensor dritter Stufe - complete third order tensor, 129
vollständiger Tensor vierter Stufe - complete fourth order tensor, 129
vollständiges Differential - exact differential, 133, 135
vollständiges Differential - total differential, 133
Volumen - volume, 152
Volumenelement - volume element, 152
Volumenintegral - volume integral, 152
Vormultiplikation - pre-multiplication, 45
Zeile - row, 40
Zeilenindex - row index, 40
Zeilenmatrix - row matrix, 40, 45
Zeilennorm - maximum absolute row sum norm, 22
Zeilenvektor - row vector, 40, 45
zweite Ableitung - second derivative, 133

Index

L1-norm, 19
L2-norm, 19
l1-norm, 19
l2-norm, Euclidean norm, 19
n-tuple, 28
p-norm, 19
absolute norm, 22
absolute value, 85
absolute value of a tensor, 111, 113
addition, 12, 13
additive, 32
additive identity, 10, 12, 42
additive inverse, 10, 12, 42
affine vector, 28
affine vector space, 28, 54
antisymmetric, 41
antisymmetric matrix, 41
antisymmetric part, 44
antisymmetric part of a tensor, 115
area vector, 152
associative, 42
associative rule, 10, 12
associative under matrix addition, 42
base vectors, 31, 87
basis, 31
basis of the vector space, 15
bijective, 8
bilinear, 25
bilinear form, 26
binormal unit, 137
binormal unit vector, 137
Cantor, 6
Cartesian base vectors, 88
Cartesian basis, 149
Cartesian components of a permutation tensor, 87
Cartesian coordinates, 78, 82, 144
Cauchy, 96
Cauchy stress tensor, 96, 120
Cauchy's inequality, 21
Cayley-Hamilton Theorem, 71
characteristic equation, 65
characteristic matrix, 70
characteristic polynomial, 56, 65, 123
Christoffel symbol, 142
cofactor, 51
column, 40
column index, 40
column matrix, 28, 40, 46
column vector, 28, 40, 46, 59
combination, 9, 34, 54
commutative, 43
commutative matrix, 43
commutative rule, 10, 12
commutative under matrix addition, 42
compatibility of vector and matrix norms, 22
compatible, 22
complete fourth order tensor, 129
complete second order tensor, 99
complete third order tensor, 129
complex conjugate eigenvalues, 124
complex numbers, 7
components, 31, 65
components of the Christoffel symbol, 142
components of the permutation tensor, 87
composition, 34, 54, 106
congruence transformation, 56, 63
congruent, 56, 63
continuum, 96
contravariant ε symbol, 93
contravariant base vectors, 81, 139
contravariant base vectors of the natural basis, 141
contravariant coordinates, 80, 84
contravariant metric coefficients, 82, 83
coordinates, 31
covariant ε symbol, 92
covariant base vectors, 80, 138
covariant base vectors of the natural basis, 140
covariant coordinates, 81, 84
covariant derivative, 146, 149
covariant metric coefficients, 80, 83
cross product, 87, 90, 96
curl, 150
curvature, 135
curvature of a curve, 136
curved surface, 138
curvilinear coordinate system, 139
curvilinear coordinates, 139, 144
definite metric, 16
definite norm, 18
deformation energy, 129
deformation gradient, 118
derivative of a scalar, 133
derivative of a tensor, 134
derivative of a vector, 133
derivative w.r.t. a scalar variable, 133
derivatives, 133
derivatives of base vectors, 141, 145
determinant, 50, 65, 89
determinant expansion by minors, 51
determinant of a tensor, 112
determinant of the contravariant metric coefficients, 83
determinant of the Jacobian matrix, 140
deviator matrix, 46
deviator part of a tensor, 113
diagonal matrix, 41, 43
differential element of area, 139
dimension, 13, 14
direct method, 68
direct product, 94
directions of principal stress, 120
discrete metric, 17
distance, 17
distributive, 42
distributive law, 10, 13
distributive w.r.t. addition, 10
distributive w.r.t. scalar addition, 13
distributive w.r.t. vector addition, 13
divergence of a tensor field, 147
divergence of a vector field, 147
divergence theorem, 156
domain, 8
dot product, 85
dual space, 36, 97
dual vector space, 36
dummy index, 78
eigenvalue, 65, 120
eigenvalue problem, 22, 65, 122, 123
eigenvalues, 22, 56
eigenvector, 65, 120
eigenvector matrix, 70
Einstein, 78
elastic, 129
elasticity tensor, 129
elasticity theory, 129
elements, 6
empty set, 7
equilibrium condition of moments, 120
equilibrium conditions, 96
equilibrium system of external forces, 96
equilibrium system of forces, 96
Euclidean matrix norm, 60
Euclidean norm, 22, 30, 85
Euclidean space, 17
Euclidean vector, 143
Euclidean vector space, 26, 29, 143
even permutation, 50
exact differential, 133, 135
field, 10, 12, 143
finite, 13
finite element method, 57
first order tensor, 127
fourth order tensor, 129
Frechet derivative, 143
free indices, 78
function, 8
fundamental tensor, 150
Gauss, 60
Gauss transformation, 59
Gauss's theorem, 155, 158
general eigenvalue problem, 69
general permutation symbol, 92
gradient, 143
gradient of a vector of position, 144
higher order tensor, 127
homeomorphic, 30, 97
homeomorphism, 30
homogeneous, 32
homogeneous linear equation system, 65
homogeneous norm, 18
homomorphism, 32
Hooke's law, 129
Hölder sum inequality, 21
identities for scalar products of tensors, 110
identities for tensor products, 106
identity, 12
identity element w.r.t. addition, 10
identity element w.r.t. scalar multiplication, 10, 12
identity matrix, 41, 45, 80
identity tensor, 112
image set, 8
infinitesimal, 96
infinitesimal tetrahedron, 96
injective, 8
inner product, 25, 85
inner product of tensors, 110
inner product space, 26, 27, 29
integers, 7
integral theorem, 156
intersection, 7
invariance, 57
invariant, 57, 120, 123, 148
inverse, 8, 34
inverse of a matrix, 48
inverse of a tensor, 115
inverse relation, 103
inverse transformation, 101, 103
inverse w.r.t. addition, 10, 12
inverse w.r.t. multiplication, 10
inversion, 48
invertible, 48
isomorphic, 29, 35
isomorphism, 35
isotropic, 129
isotropic tensor, 119
iterative process, 68
Jacobian, 140
Kronecker delta, 52, 79, 119
l-infinity-norm, maximum-norm, 19
Laplace operator, 150
laplacian of a scalar field, 150
laplacian of a tensor field, 151
laplacian of a vector field, 150
left-hand Cauchy strain tensor, 117
Leibnitz, 50
line element, 139
linear, 32
linear algebra, 3
linear combination, 15, 49, 71
linear dependence, 23, 30, 62
linear equation system, 48
linear form, 36
linear independence, 23, 30
linear manifold, 15
linear mapping, 32, 54, 97, 105, 121
linear operator, 32
linear space, 12
linear subspace, 15
linear transformation, 32
linear vector space, 12
linearity, 32, 34
linearly dependent, 15, 23, 49
linearly independent, 15, 23, 48, 59, 66
lowering an index, 83
main diagonal, 41, 65
map, 8
mapping, 8
matrix, 40
matrix calculus, 28
matrix multiplication, 42, 54
matrix norm, 21, 22
matrix transpose, 41
maximum absolute column sum norm, 22
maximum absolute row sum norm, 22
maximum-norm, 20
mean value, 153
metric, 16
metric coefficients, 138
metric space, 17
metric tensor of covariant coefficients, 102
mixed components, 99
mixed formulation of a second order tensor, 99
moment equilibrium condition, 120
moving trihedron, 137
multiple roots, 70
multiplicative identity, 45
multiplicative inverse, 10
n-tuple, 35
nabla operator, 146
natural basis, 140
natural numbers, 6, 7
naturals, 7
negative definite, 62
Newton's relation, 66
non empty set, 13
non-commutative, 45
noncommutative, 69, 106
nonsingular, 48, 59, 66
nonsingular square matrix, 55
nonsymmetric, 69
nontrivial solution, 65
norm, 18, 65
norm of a tensor, 111
normal basis, 103
normal unit, 136
normal unit vector, 121, 136, 138
normal vector, 96, 135
normed space, 18
null mapping, 33
odd permutation, 50
one, 10
operation, 9
operation addition, 10
operation multiplication, 10
order of a matrix, 40
origin, 12
orthogonal, 66
orthogonal matrix, 57
orthogonal tensor, 116
orthogonal transformation, 57
orthonormal basis, 144
outer product, 87
overlined basis, 103
parallelepiped, 88
partial derivatives, 134
partial derivatives of base vectors, 145
permutation symbol, 87, 112, 128
permutation tensor, 128
permutations, 50
point of origin, 28
Poisson's ratio, 129
polar decomposition, 117
polynomial factorization, 66
polynomial of n-th degree, 65
position vector, 135, 152
positive definite, 25, 61, 62, 111
positive metric, 16
positive norm, 18
post-multiplication, 45
potential character, 129
power series, 71
pre-multiplication, 45
principal axes, 120
principal axes problem, 65
principal axis, 65
principal stress directions, 122
principal stresses, 122
product, 10, 12
proper orthogonal tensor, 116
quadratic form, 26, 57, 62, 124
quadratic value of the norm, 60
raising an index, 83
range, 8
rank, 48
rational numbers, 7
Rayleigh quotient, 67, 68
real numbers, 7
rectangular matrix, 40
reduction of rank, 66
Riesz representation theorem, 36
right-hand Cauchy strain tensor, 117
roots, 65
rotated coordinate system, 119
rotation matrix, 58
rotation of a vector field, 150
rotation transformation, 58
rotator, 116
row, 40
row index, 40
row matrix, 40, 45
row vector, 40, 45
scalar field, 143
scalar function, 133
scalar invariant, 149
scalar multiplication, 12, 13, 42
scalar multiplication identity, 10
scalar multiplicative identity, 12
scalar product, 9, 25, 85, 96
scalar product of tensors, 110
scalar product of two dyads, 111
scalar triple product, 88, 90, 152
scalar-valued function of multiple variables, 134
scalar-valued scalar function, 133
scalar-valued vector function, 143
Schwarz inequality, 26, 111
second derivative, 133
second order tensor, 96, 97, 127
second order tensor product, 105
section surface, 121
semidefinite, 62
Serret-Frenet equations, 137
set, 6
set theory, 6
shear stresses, 121
similar, 55, 69
similarity transformation, 55
simple fourth order tensor, 129
simple second order tensor, 94, 99
simple third order tensor, 129
skew part of a tensor, 115
space, 12
space curve, 135
space of continuous functions, 14
space of square matrices, 14
span, 15
special eigenvalue problem, 65
spectral norm, 22
square, 40
square matrix, 40
Stokes' theorem, 157
strain tensor, 129
stress state, 96
stress tensor, 96, 129
stress vector, 96
subscript index, 78
subset, 7
summation convention, 78
superscript index, 78
superset, 7
supremum, 22
surface, 152
surface element, 152
surface integral, 152
surjective, 8
symbols, 6
symmetric, 25, 41
symmetric matrix, 41
symmetric metric, 16
symmetric part, 44
symmetric part of a tensor, 115
symmetric tensor, 124
tangent unit, 135
tangent unit vector, 135
tangent vector, 135
Taylor series, 133, 153
tensor, 96
tensor axioms, 98
tensor field, 143
tensor product, 105, 106
tensor product of two dyads, 106
tensor space, 94
tensor with contravariant base vectors and covariant coordinates, 100
tensor with covariant base vectors and contravariant coordinates, 99
tensor-valued function of multiple variables, 134
tensor-valued scalar function, 133
tensor-valued vector function, 143
third order fundamental tensor, 128
third order tensor, 127
topology, 30
torsion of a curve, 137
total differential, 133
trace of a matrix, 43
trace of a tensor, 112
transformation matrix, 55
transformation of base vectors, 101
transformation of the metric coefficients, 84
transformation relations, 103
transformation tensor, 101
transformed contravariant base vector, 103
transformed covariant base vector, 103
transpose of a matrix, 41
transpose of a matrix product, 44
transpose of a tensor, 114
triangle inequality, 16, 18, 19
trivial solution, 65
union, 7
unit matrix, 80
unitary space, 27
unitary vector space, 29
usual scalar product, 25
vector, 12, 28, 127
vector field, 143
vector function, 135
vector norm, 18, 22
vector of associated direction, 120
vector of position, 143
vector product, 87
vector space, 12, 49
vector space of linear mappings, 33
vector-valued function, 138
vector-valued function of multiple variables, 134
vector-valued scalar function, 133
vector-valued vector function, 143
visual space, 97
volume, 152
volume element, 152
volume integral, 152
volumetric matrix, 46
volumetric part of a tensor, 113
von Mises, 68
von Mises iteration, 68
whole numbers, 7
Young's modulus, 129
zero element, 10
zero vector, 12
zeros, 65