Introduction to
Continuum Mechanics
—
Vector and Tensor Calculus
Lecture Notes

© 2000 Prof. Dr.-Ing. Franz-Joseph Barthold, M.Sc.
and
Dipl.-Ing. Jörg Stieghan, SFI

CSE – Computational Sciences in Engineering
Technische Universität Braunschweig
Bültenweg 17, 38106 Braunschweig

All rights reserved, in particular the right of translation into foreign languages. Without the authors' permission it is not permitted to reproduce this booklet, in whole or in part, by photomechanical means (photocopy, microcopy) or to store it in electronic media.
Preface

Contents

Contents . . . VII
List of Figures . . . IX
List of Tables . . . XI
1 Introduction . . . 1
3 Matrix Calculus . . . 37
3.1 Definitions . . . 40
3.2 Some Basic Identities of Matrix Calculus . . . 42
3.3 Inverse of a Square Matrix . . . 48
3.4 Linear Mappings of an Affine Vector Space . . . 54
3.5 Quadratic Forms . . . 62
3.6 Matrix Eigenvalue Problem . . . 65
6 Exercises . . . 159
6.1 Application of Matrix Calculus on Bars and Plane Trusses . . . 162
6.2 Calculating a Structure with the Eigenvalue Problem . . . 174
6.3 Fundamentals of Tensors in Index Notation . . . 182
6.4 Various Products of Second Order Tensors . . . 190
6.5 Deformation Mappings . . . 194
6.6 The Moving Trihedron, Derivatives and Space Curves . . . 198
6.7 Tensors, Stresses and Cylindrical Coordinates . . . 210
A Formulary . . . 227
A.1 Formulary Tensor Algebra . . . 227
A.2 Formulary Tensor Analysis . . . 233
B Nomenclature . . . 237
References . . . 239
Glossary English – German . . . 241
Glossary German – English . . . 257

List of Figures

2.1 Triangle inequality . . . 16
2.2 Hölder sum inequality . . . 21
2.3 Vector space R² . . . 28
2.4 Affine vector space R²_affine . . . 28
2.5 The scalar product in a 2-dimensional Euclidean vector space . . . 30
3.1 Matrix multiplication . . . 43
3.2 Matrix multiplication for a composition of matrices . . . 55
3.3 Orthogonal transformation . . . 58
4.1 Example of co- and contravariant base vectors in E² . . . 81
4.2 Special case of a Cartesian basis . . . 82
4.3 Projection of a vector v on the direction of the vector u . . . 86
4.4 Resulting stress vector . . . 96
4.5 Resulting stress vector . . . 97
4.6 The polar decomposition . . . 117
4.7 An example of the physical components of a second order tensor . . . 119
4.8 Principal axis problem with Cartesian coordinates . . . 120
Chapter 1

Introduction

Chapter 2

Basics on Linear Algebra

For example, about vector spaces see Halmos [6] and Abraham, Marsden, and Ratiu [1]; in German, de Boer [3] and Stein et al. [13]. In German, about linear algebra, see Jänich [8], Fischer [4], Fischer [9], and Beutelspacher [2].

TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003
• ∀ : The following condition(s) should hold for all mentioned elements.

• =⇒ : This arrow means that the term on the left-hand side implies the term on the right-hand side.

Sets can be given by

• an enumeration of its elements, e.g.

M1 = {1, 2, 3} . (2.1.1)

The set M1 consists of the elements 1, 2, 3.

N = {1, 2, 3, . . .} . (2.1.2)

The set N includes all integers greater than or equal to one; it is also called the set of natural numbers.

• the description of the attributes of its elements, e.g.

M2 = {m | (m ∈ M1) ∨ (−m ∈ M1)} = {1, 2, 3, −1, −2, −3} . (2.1.3)

The set M2 includes all elements m with the attribute that m is an element of the set M1, or that −m is an element of the set M1. In this example these elements are just 1, 2, 3 and −1, −2, −3.

Example: The set of natural numbers. The set of natural numbers, or just the naturals, N, sometimes also called the whole numbers, is defined by

N = {1, 2, 3, . . .} . (2.1.9)

Unfortunately, zero "0" is sometimes also included in the list of natural numbers; then the set is given by

N0 = {0, 1, 2, 3, . . .} . (2.1.10)

Example: The set of integers. The set of integers Z is given by

Z = {z | (z = 0) ∨ (z ∈ N) ∨ (−z ∈ N)} . (2.1.11)

Example: The set of rational numbers. The set of rational numbers Q is described by

Q = {z/n | (z ∈ Z) ∧ (n ∈ N)} . (2.1.12)

Example: The set of real numbers. The set of real numbers is defined by

R = {. . .} . (2.1.13)

Example: The set of complex numbers. The set of complex numbers is given by

C = {α + βi | (α, β ∈ R) ∧ (i = √−1)} . (2.1.14)

¹ Georg Cantor (1845–1918).
² The expression "if and only if" is often abbreviated with "iff".
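The closure properties behind these definitions can be illustrated with Python's exact number types; this is a small sketch (not part of the original notes), with all values chosen arbitrarily.

```python
from fractions import Fraction

# Rational numbers: z/n with z an integer and n a natural number, cf. (2.1.12).
q1 = Fraction(3, 4)      # 3/4 ∈ Q
q2 = Fraction(-2, 5)     # -2/5 ∈ Q
q_sum = q1 + q2          # the sum of two rationals is again rational
print(q_sum)             # 7/20

# Complex numbers: α + βi with α, β ∈ R and i² = -1, cf. (2.1.14).
i = complex(0, 1)
print(i * i)             # (-1+0j)
```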
2.2 Mappings

Let A and B be sets. Then a mapping, or just a map, of A on B is a function f that assigns to every a ∈ A one unique f(a) ∈ B,

f : A −→ B , a ↦ f(a) . (2.2.1)

The set A is called the domain of the function f and the set B the range of the function f.

2.2.2 Injective, Surjective and Bijective

Let V and W be non-empty sets. A mapping f between the two sets V and W assigns to every x ∈ V a unique y ∈ W, which is also denoted by f(x) and called the image of x (under f). The set V is the domain and W is the range, also called the image set, of f. The usual notation of a mapping (represented by the three parts: the rule of assignment f, the domain V and the range W) is given by

f : V −→ W , x ↦ f(x) . (2.2.2)

For every mapping f : V → W with the subsets A ⊂ V and B ⊂ W the following definitions hold:

f(A) := {f(x) ∈ W : x ∈ A} , the image of A, and (2.2.3)
f⁻¹(B) := {x ∈ V : f(x) ∈ B} , the preimage of B. (2.2.4)

The mappings id_V : V → V and id_W : W → W are the identity mappings in V and W. Furthermore, f must be surjective in order to extend the existence of the mapping f⁻¹ from f(V) ⊂ W to the whole set W. Then f : V → W is bijective if and only if a mapping g : W → V with g ∘ f = id_V and f ∘ g = id_W exists. In this case g = f⁻¹ is the inverse.

2.2.3 Definition of an Operation

An operation or a combination, symbolized by ⋄, over a set M is a mapping that maps two arbitrary elements of M onto one element of M:

⋄ : M × M −→ M , (m, n) ↦ m ⋄ n . (2.2.11)

2.2.4 Examples of Operations

Example: The addition of natural numbers. The addition over the natural numbers N is an operation, because for every m ∈ N and every n ∈ N the sum (m + n) ∈ N is again a natural number.

Example: The subtraction of integers. The subtraction over the integers Z is an operation, because for every a ∈ Z and every b ∈ Z the difference (a − b) ∈ Z is again an integer.

Example: The addition of continuous functions. Let C^k be the set of the k-times continuously differentiable functions. The addition over C^k is an operation, because for every function f(x) ∈ C^k and every function g(x) ∈ C^k the sum (f + g)(x) = f(x) + g(x) is again a k-times continuously differentiable function.
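The image and preimage definitions (2.2.3) and (2.2.4) translate directly into set comprehensions; the following Python sketch is illustrative only, and the helper names `image` and `preimage` are not from the text.

```python
def image(f, A):
    """f(A) := {f(x) : x in A} — the image of the subset A, cf. (2.2.3)."""
    return {f(x) for x in A}

def preimage(f, B, domain):
    """f^{-1}(B) := {x : f(x) in B} — the preimage of B, cf. (2.2.4)."""
    return {x for x in domain if f(x) in B}

f = lambda x: x * x              # f : Z -> Z, x -> x^2 (neither injective nor surjective)
domain = range(-3, 4)

print(image(f, {-2, -1, 0}))           # {0, 1, 4}
print(preimage(f, {1, 4}, domain))     # {-2, -1, 1, 2}
```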
2.4 Linear Spaces

2.4.1 Definition of a Linear Space

Let F be a field. A linear space, vector space or linear vector space V over the field F is a set with an addition defined by

+ : V × V −→ V , (x, y) ↦ x + y ∀x, y ∈ V , (2.4.1)

a scalar multiplication given by

· : F × V −→ V , (α, x) ↦ αx ∀α ∈ F ; ∀x ∈ V , (2.4.2)

and satisfies the following axioms. The elements x, y, etc. of V are called vectors. To every pair x and y of vectors in the space V there corresponds a vector x + y, called the sum of x and y, in such a way that:

1. Axiom of Linear Spaces. The addition is associative,

x + (y + z) = (x + y) + z ∀x, y, z ∈ V . (S1)

2. Axiom of Linear Spaces. The addition is commutative,

x + y = y + x ∀x, y ∈ V . (S2)

3. Axiom of Linear Spaces. There exists a unique vector 0 ∈ V, called zero vector or the origin of the space V, such that

x + 0 = x = 0 + x ∀x ∈ V . (S3)

4. Axiom of Linear Spaces. To every vector x ∈ V there corresponds a unique vector −x, called the additive inverse, such that

x + (−x) = 0 ∀x ∈ V . (S4)

To every pair α and x, where α is a scalar quantity and x a vector in V, there corresponds a vector αx, called the product of α and x, in such a way that:

5. Axiom of Linear Spaces. The multiplication by scalar quantities is associative,

α (βx) = (αβ) x ∀α, β ∈ F ; ∀x ∈ V . (S5)

6. Axiom of Linear Spaces. There exists a unique non-zero scalar 1 ∈ F, called identity or identity element w.r.t. the scalar multiplication on the space V, such that the scalar multiplicative identity is given by

x1 = x = 1x ∀x ∈ V . (S6)

7. Axiom of Linear Spaces. The scalar multiplication is distributive w.r.t. the vector addition, such that the distributive law is given by

α (x + y) = αx + αy ∀α ∈ F ; ∀x, y ∈ V . (S7)

8. Axiom of Linear Spaces. The multiplication by a vector is distributive w.r.t. the scalar addition, such that the distributive law is given by

(α + β) x = αx + βx ∀α, β ∈ F ; ∀x ∈ V . (S8)

Some simple conclusions are given by

0 · x = 0 ∀x ∈ V ; 0 ∈ F , (2.4.3)
(−1) x = −x ∀x ∈ V ; −1 ∈ F , (2.4.4)
α · 0 = 0 α ∈ F . (2.4.5)

Remarks:

• Starting with the usual 3-dimensional vector space, these axioms describe a generalized definition of a vector space as a set of arbitrary elements x ∈ V. The classic example is the usual 3-dimensional Euclidean vector space E³ with the vectors x, y.

• The definition says nothing about the character of the elements x ∈ V of the vector space.

• The definition implies only the existence of an addition of two elements of V and the existence of a scalar multiplication, which both do not lead to results outside the vector space V, and that the axioms of a vector space (S1)-(S8) hold.

• The definition only implies that the vector space V is a non-empty set, but says nothing about "how large" it is.

• Here F = R, i.e. only vector spaces over the field of real numbers R are examined; vector spaces over the field of complex numbers C are not considered.

• The dimension dim V of the vector space V should be finite, i.e. dim V = n for an arbitrary n ∈ N, the set of natural numbers.
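For V = Rⁿ the axioms and the simple conclusions can be spot-checked numerically. The following NumPy sketch (illustrative only, with arbitrarily chosen values) checks the distributive laws (S7), (S8) and the conclusions (2.4.3), (2.4.4); of course such a check is no substitute for a proof.

```python
import numpy as np

x = np.array([1.0, -2.0, 3.0])
y = np.array([0.5, 4.0, -1.0])
alpha, beta = 2.0, -3.0

# (S7): alpha (x + y) = alpha x + alpha y
assert np.allclose(alpha * (x + y), alpha * x + alpha * y)
# (S8): (alpha + beta) x = alpha x + beta x
assert np.allclose((alpha + beta) * x, alpha * x + beta * x)
# (2.4.3): 0 · x = 0, and (2.4.4): (-1) x = -x
assert np.allclose(0.0 * x, np.zeros(3))
assert np.allclose(-1.0 * x, -x)
```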
2.4.2 Examples of Linear Spaces

Example: The space of n-tuples. The space Rⁿ of dimension n with the usual addition

x + y = [x₁ + y₁, . . . , xₙ + yₙ] ,

and the usual scalar multiplication

αx = [αx₁, . . . , αxₙ] ,

is a linear space over R.

Example: Spaces of functions. With the pointwise addition and scalar multiplication

(f + g)(x) = f(x) + g(x) ,
(αf)(x) = αf(x) ,

sets of functions closed under these operations form linear spaces, too.

2.4.3 Linear Subspace and Linear Manifold

Let V be a linear space over the field F. A subset W ⊆ V is called a linear subspace or a linear manifold of V if the set is not empty, W ≠ ∅, and every linear combination is again a vector of the linear subspace,

ax + by ∈ W ∀x, y ∈ W ; ∀a, b ∈ F . (2.4.10)
2.5 Metric Spaces

Figure 2.1: Triangle inequality (the distances ρ(x, y), ρ(y, z) and ρ(x, z) between the points x, y and z).
2.6 Normed Spaces

4. Axiom of Norms. The norm satisfies the triangle inequality,

‖x + y‖ ≤ ‖x‖ + ‖y‖ ∀x, y ∈ V . (N4)

Among the usual examples is the L1-norm of a function,

‖x‖ = ∫_Ω |x| dΩ . (2.6.10)
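The norm axioms, in particular the triangle inequality (N4), can be checked numerically for the usual p-norms with `numpy.linalg.norm`; a sketch with arbitrarily chosen vectors:

```python
import numpy as np

x = np.array([3.0, -4.0, 1.0])
y = np.array([-1.0, 2.0, 5.0])

for p in (1, 2, np.inf):
    nx = np.linalg.norm(x, p)
    ny = np.linalg.norm(y, p)
    nxy = np.linalg.norm(x + y, p)
    assert nxy <= nx + ny + 1e-12                             # (N4) triangle inequality
    assert np.isclose(np.linalg.norm(2.0 * x, p), 2.0 * nx)   # absolute homogeneity
    assert nx > 0                                             # definiteness for x != 0
```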
2.6.4 Hölder Sum Inequality and Cauchy's Inequality

Let p and q be two scalar quantities whose relationship is defined by

1/p + 1/q = 1 , with p > 1 , q > 1 . (2.6.15)

In the first quadrant of a coordinate system the graph y = x^(p−1) and the straight lines x = ξ and y = η with ξ > 0 and η > 0 are displayed. The area enclosed by these two straight lines, the curve and the axes of the coordinate system is at least the area of the rectangle given by ξη,

ξη ≤ ξ^p / p + η^q / q . (2.6.16)

For the real or complex quantities x_j and y_j, which are not all equal to zero, ξ and η can be chosen as

ξ = |x_j| / (Σ_j |x_j|^p)^(1/p) , and η = |y_j| / (Σ_j |y_j|^q)^(1/q) . (2.6.17)

Inserting the relations of equation (2.6.17) in (2.6.16), and summing the terms with the index j, implies

Σ_j |x_j| |y_j| / [ (Σ_j |x_j|^p)^(1/p) (Σ_j |y_j|^q)^(1/q) ] ≤ Σ_j |x_j|^p / (p Σ_j |x_j|^p) + Σ_j |y_j|^q / (q Σ_j |y_j|^q) = 1/p + 1/q = 1 . (2.6.18)

For the special case with p = q = 2 the Hölder sum inequality, see equation (2.6.19), transforms into Cauchy's inequality,

Σ_j |x_j y_j| ≤ (Σ_j |x_j|²)^(1/2) (Σ_j |y_j|²)^(1/2) . (2.6.20)

2.6.5 Matrix Norms

In the same way as the vector norm, the norm of a matrix A is introduced. This matrix norm is written ‖A‖. The characteristics of the matrix norm are given below, starting with the zero matrix 0 and the condition A ≠ 0,

‖A‖ > 0 , (2.6.21)

and with an arbitrary scalar quantity α,

‖αA‖ = |α| ‖A‖ , (2.6.22)
‖A + B‖ ≤ ‖A‖ + ‖B‖ , (2.6.23)
‖A B‖ ≤ ‖A‖ ‖B‖ . (2.6.24)

In addition, for matrix norms, and in contrast to vector norms, the last axiom (2.6.24) holds. If this condition holds, the norm is called multiplicative.
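The Hölder sum inequality and its special case, Cauchy's inequality (2.6.20), can be verified numerically for a concrete pair of vectors; a NumPy sketch with arbitrarily chosen data and exponents:

```python
import numpy as np

x = np.array([1.0, -2.0, 3.0, 0.5])
y = np.array([2.0, 1.0, -1.0, 4.0])
p, q = 3.0, 1.5                       # conjugate exponents: 1/p + 1/q = 1

lhs = np.sum(np.abs(x * y))
rhs = np.sum(np.abs(x) ** p) ** (1 / p) * np.sum(np.abs(y) ** q) ** (1 / q)
assert lhs <= rhs + 1e-12             # Hölder sum inequality

# p = q = 2 gives Cauchy's inequality (2.6.20)
cauchy_rhs = np.linalg.norm(x, 2) * np.linalg.norm(y, 2)
assert lhs <= cauchy_rhs + 1e-12
```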
Some usual norms which satisfy the conditions (2.6.21)-(2.6.24) are given below. With n being the number of rows of the matrix A, the absolute norm is given by

‖A‖_M = M(A) = n max |a_ik| . (2.6.25)

The maximum absolute row sum norm is given by

‖A‖_R = R(A) = max_i Σ_{k=1}^{n} |a_ik| . (2.6.26)

The maximum absolute column sum norm is given by

‖A‖_C = C(A) = max_k Σ_{i=1}^{n} |a_ik| . (2.6.27)

Vector norm               Compatible matrix norm    Description
‖x‖ = max |x_i|           ‖A‖_M = M(A)              absolute norm
                          ‖A‖_R = R(A)              maximum absolute row sum norm
‖x‖ = Σ |x_i|             ‖A‖_M = M(A)              absolute norm
                          ‖A‖_C = C(A)              maximum absolute column sum norm
‖x‖ = (Σ |x_i|²)^(1/2)    ‖A‖_M = M(A)              absolute norm
                          ‖A‖_N = N(A)              Euclidean norm
                          ‖A‖_H = H(A)              spectral norm

Table 2.1: Compatibility of norms.
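The three matrix norms (2.6.25)-(2.6.27), and the compatibility ‖Ax‖ ≤ ‖A‖ ‖x‖ of the pairings from Table 2.1, can be sketched with NumPy; all values are chosen arbitrarily for illustration.

```python
import numpy as np

A = np.array([[1.0, -2.0],
              [3.0,  0.5]])
n = A.shape[0]

M_norm = n * np.max(np.abs(A))                  # absolute norm (2.6.25)
R_norm = np.max(np.sum(np.abs(A), axis=1))      # maximum absolute row sum norm (2.6.26)
C_norm = np.max(np.sum(np.abs(A), axis=0))      # maximum absolute column sum norm (2.6.27)

x = np.array([2.0, -1.0])
# Compatibility (Table 2.1): the row sum norm pairs with the max-norm ...
assert np.max(np.abs(A @ x)) <= R_norm * np.max(np.abs(x)) + 1e-12
# ... and the column sum norm pairs with the sum-norm.
assert np.sum(np.abs(A @ x)) <= C_norm * np.sum(np.abs(x)) + 1e-12
print(M_norm, R_norm, C_norm)    # 6.0 3.5 4.0
```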
The n linearly independent vectors a_i with i = 1, . . . , n span an n-dimensional vector space. This set of n linearly independent vectors can be used as a basis of this vector space, in order to describe another vector a_{n+1} in this space,

a_{n+1} = Σ_{k=1}^{n} β^k a_k , and a_{n+1} ∈ Rⁿ . (2.6.37)

2.7 Inner Product Spaces

2.7.1 Definition of a Scalar Product

Let V be a linear space over the field of real numbers R. A scalar product or inner product is a mapping

⟨· , ·⟩ : V × V −→ R , (x, y) ↦ ⟨x, y⟩ . (2.7.1)

The scalar product satisfies the following relations for all vectors x, y, z ∈ V and all scalar quantities α, β ∈ R:

1. Axiom of Inner Products. The scalar product is bilinear.

Theorem 2.1. The inner product induces a norm, and with this a metric, too. The scalar product defines a scalar-valued function ‖x‖ = ⟨x, x⟩^(1/2), which satisfies the axioms of a norm.

For example, in a 2-dimensional Euclidean vector space,

⟨x, y⟩ = x₁y₁ + x₂y₂ . (2.7.3)
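Theorem 2.1 — the norm induced by the inner product — can be illustrated in R² with the scalar product (2.7.3); a small NumPy sketch with arbitrarily chosen vectors:

```python
import numpy as np

x = np.array([3.0, 4.0])
y = np.array([1.0, -2.0])

# The scalar product in R^2, equation (2.7.3)
sp = x[0] * y[0] + x[1] * y[1]
assert np.isclose(sp, np.dot(x, y))

# Theorem 2.1: ||x|| = <x, x>^(1/2) is a norm — here the Euclidean norm
norm_x = np.sqrt(np.dot(x, x))
assert np.isclose(norm_x, 5.0)                   # sqrt(9 + 16)
assert np.isclose(norm_x, np.linalg.norm(x, 2))
```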
2.8 Affine Vector Space and the Euclidean Vector Space

2.8.1 Definition of an Affine Vector Space

In matrix calculus an n-tuple a ∈ Rⁿ over the field of real numbers R is studied, i.e.

a_i ∈ R , and i = 1, . . . , n . (2.8.1)

One of these n-tuples, represented by a column matrix (also called a column vector or just a vector), can describe an affine vector if a point of origin in a geometric sense and a displacement of origin are established. A set W is called an affine vector space over the vector space V ⊂ Rⁿ if a mapping

W × W −→ V , (2.8.2)
Rⁿ_affine −→ Rⁿ , (2.8.3)

assigns to every pair of points P and Q ∈ W ⊂ Rⁿ_affine a vector PQ ∈ V (the vector pointing from P to Q), and the mapping also satisfies the following conditions:

• For every constant P the assignment

Π_P : Q −→ PQ (2.8.4)

is a bijective mapping, i.e. the inverse Π_P⁻¹ exists.

• Every P, Q and R ∈ W satisfy

PQ + QR = PR , (2.8.5)

and

Π_P : W −→ V , with Π_P(Q) = PQ , and Q ∈ W . (2.8.6)

For all P, Q and R ∈ W ⊂ Rⁿ_affine the axioms of a linear space (S1)-(S4) hold for the addition,

a + b = c ←→ a_i + b_i = c_i , with i = 1, . . . , n , (2.8.7)

and (S5)-(S8) for the scalar multiplication,

αa = a* ←→ αa_i = a_i* . (2.8.8)

Figure 2.3: Vector space R² (the points P, Q, R and the vectors a = PQ, b = QR, c = PR = a + b).

• Two normed spaces V and W over the same field are isomorphic if and only if there exists a linear mapping f from V to W such that the following inequality holds for two constants m and M in every point x ∈ V,

m ‖x‖ ≤ ‖f(x)‖ ≤ M ‖x‖ . (2.8.11)
Figure 2.5: The scalar product in a 2-dimensional Euclidean vector space (the vectors u and v enclosing the angle φ).

• Every two real n-dimensional normed spaces are isomorphic, for example two subspaces of the vector space Rⁿ.

Below, in most cases the Euclidean norm with p = 2 is used to describe the relationships between the elements of the affine (normed) vector space x ∈ Rⁿ_affine and the elements of the Euclidean vector space v ∈ Eⁿ. With this condition the relations between a norm, as in section 2.6, and an inner product are given by

‖x‖² = x · x , (2.8.12)

and

‖x‖ = ‖x‖₂ = (Σ_i x_i²)^(1/2) . (2.8.13)

In this case it is possible to define a bijective mapping between the n-dimensional affine vector space and the Euclidean vector space. This bijectivity is called, in topology, a homeomorphism, and the spaces are called homeomorphic. If two spaces are homeomorphic, then in both spaces the same axioms hold.

a₁v₁ + a₂v₂ + . . . + aₙvₙ = 0 . (2.8.14)

• The set of all linear combinations of the vectors v₁, v₂, . . . , vₙ spans a subspace. The dimension of this subspace is equal to the number of vectors which span the largest linearly independent space; the dimension of this subspace is at most n.

• Every n + 1 vectors of the Euclidean vector space v ∈ Eⁿ with the dimension n must be linearly dependent, i.e. the vector v = v_{n+1} can be described by a linear combination of the vectors v₁, v₂, . . . , vₙ,

λv + a₁v₁ + a₂v₂ + . . . + aₙvₙ = 0 , (2.8.15)

v = −(1/λ) (a₁v₁ + a₂v₂ + . . . + aₙvₙ) . (2.8.16)

• The vectors z_i given by

z_i = −(1/λ) a_i v_i , with i = 1, . . . , n , (2.8.17)

are called the components of the vector v in the Euclidean vector space Eⁿ.

• Every n linearly independent vectors v_i of dimension n in the Euclidean vector space Eⁿ are called a basis of Eⁿ. The vectors g_i = v_i are called the base vectors of Eⁿ,

v = v¹g₁ + v²g₂ + . . . + vⁿgₙ = Σ_{i=1}^{n} vⁱ g_i , with vⁱ = −a_i / λ . (2.8.18)

The vⁱ g_i are called the components, and the vⁱ the coordinates, of the vector v w.r.t. the basis g_i. Sometimes the scalar quantities vⁱ are also called the components of the vector v w.r.t. the basis g_i.
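Determining the coordinates vⁱ of a vector w.r.t. a basis g_i, as in equation (2.8.18), amounts to solving a linear system. A NumPy sketch with an arbitrarily chosen (non-orthogonal) basis of E²:

```python
import numpy as np

# Two linearly independent base vectors g_1, g_2 in E^2 (chosen for illustration)
g1 = np.array([1.0, 0.0])
g2 = np.array([1.0, 1.0])
G = np.column_stack([g1, g2])    # base vectors as columns

v = np.array([3.0, 2.0])

# Coordinates v^i of v w.r.t. the basis g_i:  v = v^1 g_1 + v^2 g_2
coords = np.linalg.solve(G, v)
print(coords)                    # [1. 2.]
assert np.allclose(coords[0] * g1 + coords[1] * g2, v)
```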
2.9 Linear Mappings and the Vector Space of Linear Mappings

2.9.1 Definition of a Linear Mapping

Let V and W be two vector spaces over the field F. A mapping f : V → W from elements of the vector space V to elements of the vector space W is linear, and called a linear mapping, if for all x, y ∈ V and for all α ∈ F the following axioms hold:

1. Axiom of Linear Mappings (Additivity w.r.t. the vector addition). The mapping f is additive w.r.t. the vector addition,

f(x + y) = f(x) + f(y) ∀x, y ∈ V . (L1)

2. Axiom of Linear Mappings (Homogeneity of linear mappings). The mapping f is homogeneous w.r.t. the scalar multiplication,

f(αx) = αf(x) ∀α ∈ F ; ∀x ∈ V . (L2)

Remarks:

• The linearity of the mapping f : V → W results from f being additive (L1) and homogeneous (L2).

• Because the action of the mapping f is defined only on elements of the vector space V, it is necessary that the sum vector x + y ∈ V (for every x, y ∈ V) and the scaled vector αx ∈ V (for every α ∈ F) are elements of the vector space V, too. With this postulate the set V must be a vector space.

• With the same arguments for the images f(x), f(y) and f(x + y), and also for the images f(αx) and αf(x) in W, the set W must be a vector space.

3. Axiom of Linear Mappings (Definition of the addition of linear mappings). The sum of two linear mappings f₁ : V → W and f₂ : V → W should be a linear mapping (f₁ + f₂) : V → W, too. For an arbitrary vector x ∈ V the pointwise addition is given by

(f₁ + f₂)(x) := f₁(x) + f₂(x) ∀x ∈ V , (L3)

for all linear mappings f₁, f₂ from V to W. The sum f₁ + f₂ is linear, because both mappings f₁ and f₂ are linear.

4. Axiom of Linear Mappings (Definition of the scalar multiplication of linear mappings). Furthermore, a product of a scalar quantity α ∈ R and a linear mapping f : V → W is defined by

(αf)(x) := αf(x) ∀α ∈ R ; ∀x ∈ V . (L4)

If the mapping f is linear, it follows immediately that the mapping (αf) is linear, too.

5. Axiom of Linear Mappings (Satisfaction of the axioms of a linear vector space). The definitions (L3) and (L4) satisfy all linear vector space axioms given by (S1)-(S8). This is easy to prove by checking the equations (S1)-(S8). If V and W are two vector spaces over the field F, then the set L of all linear mappings f : V → W from V to W,

L(V, W) , is a linear vector space. (L5)

The identity element w.r.t. the addition in the vector space L(V, W) is the null mapping 0, which sends every element from V to the zero vector 0 ∈ W.

2.9.3 The Basis of the Vector Space of Linear Mappings
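The axioms (L1) and (L2) can be spot-checked for the standard example of a linear mapping, f(x) = A x with a fixed matrix A; the following NumPy sketch is illustrative only, with arbitrarily chosen values.

```python
import numpy as np

A = np.array([[2.0, 0.0, 1.0],
              [0.0, -1.0, 3.0]])

def f(x):
    # f : R^3 -> R^2, a linear mapping represented by the matrix A
    return A @ x

x = np.array([1.0, 2.0, -1.0])
y = np.array([0.0, 3.0, 5.0])
alpha = -2.5

assert np.allclose(f(x + y), f(x) + f(y))        # (L1) additivity
assert np.allclose(f(alpha * x), alpha * f(x))   # (L2) homogeneity

# (L3): the pointwise sum of linear mappings is again linear
g = lambda z: f(z) + f(z)                        # (f + f)(x)
assert np.allclose(g(x), 2.0 * f(x))
```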
2.9.4 Definition of a Composition of Linear Mappings

Till now only an addition of linear mappings and a multiplication with a scalar quantity are defined. The next step is to define a "multiplication" of two linear mappings; this combination of two functions to form a new single function is called a composition. Let f₁ : V → W be a linear mapping and furthermore let f₂ : X → Y be linear, too. If the image set W of the linear mapping f₁ is also the domain of the linear mapping f₂, i.e. W = X, then the composition f₂ ∘ f₁ : V → Y is defined by

(f₂ ∘ f₁)(x) = f₂(f₁(x)) ∀x ∈ V . (2.9.1)

Because of the linearity of the mappings f₁ and f₂ the composition f₂ ∘ f₁ is also linear.

Remarks:

• The composition f₁ ∘ f₂ is also written as f₁ f₂ and it is sometimes called the product of f₁ and f₂.

• If these products exist (i.e. the domains and image sets of the linear mappings match as in the definition), then the following identities hold:

f₁ (f₂ f₃) = (f₁ f₂) f₃ (2.9.2)
f₁ (f₂ + f₃) = f₁ f₂ + f₁ f₃ (2.9.3)
(f₁ + f₂) f₃ = f₁ f₃ + f₂ f₃ (2.9.4)
α (f₁ f₂) = (αf₁) f₂ = f₁ (αf₂) (2.9.5)

• If all sets are equal, V = W = X = Y, then these products exist, i.e. all the linear mappings map the vector space V onto itself,

f ∈ L(V, V) =: L(V) . (2.9.6)

In this case, with f₁ ∈ L(V, V) and f₂ ∈ L(V, V), the composition f₁ ∘ f₂ ∈ L(V, V) is a linear mapping from the vector space V to itself, too.

2.9.5 The Attributes of a Linear Mapping

• Let V and W be vector spaces over the field F and L(V, W) the vector space of all linear mappings f : V → W. Because L(V, W) is a vector space, the addition and the multiplication with a scalar for all elements of L, i.e. all linear mappings f : V → W, is again a linear mapping from V to W.

• An arbitrary composition of linear mappings, if it exists, is again a linear mapping from one vector space to another vector space. If the mappings f : V → V form a space in itself, then every composition of these mappings exists and is again linear, i.e. the mapping is again an element of L(V, V).

• The existence of an inverse, i.e. a reverse linear mapping from W to V, denoted by f⁻¹ : W → V, is discussed in the following section.

2.9.6 The Representation of a Linear Mapping by a Matrix

Let x and y be two arbitrary elements of the linear vector space V given by

x = Σ_{i=1}^{n} xⁱ e_i , and y = Σ_{i=1}^{n} yⁱ e_i . (2.9.7)

Let L be a linear mapping from V in itself,

L = α_ij φ^ij . (2.9.8)

Then

y = L(x) , (2.9.9)

yⁱ e_i = (α_kl φ^kl)(x^j e_j) = α_kl φ^kl (x^j e_j) = α_kl x^j φ^kl(e_j) . (2.9.10)

2.9.7 The Isomorphism of Vector Spaces

The term "bijectivity" and the attributes of a bijective linear mapping f : V → W imply the following definition. A bijective linear mapping f : V → W is also called an isomorphism of the vector spaces V and W. The spaces V and W are said to be isomorphic. Every vector x ∈ V, with dim V = n, corresponds to an n-tuple x ∈ Rⁿ, the space of all n-tuples,

x = xⁱ e_i ←→ x = [x¹, . . . , xⁿ]ᵀ , with x ∈ V , dim V = n , and x ∈ Rⁿ . (2.9.11)
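Once bases are fixed, every linear mapping corresponds to a matrix (section 2.9.6), and the composition of mappings corresponds to the matrix product. A NumPy sketch with arbitrarily chosen matrices:

```python
import numpy as np

A1 = np.array([[1.0, 2.0],
               [0.0, 1.0]])    # matrix of f1 : R^2 -> R^2
A2 = np.array([[0.0, -1.0],
               [1.0,  0.0]])   # matrix of f2 : R^2 -> R^2

f1 = lambda x: A1 @ x
f2 = lambda x: A2 @ x

x = np.array([3.0, -2.0])

# The composition (f1 ∘ f2)(x) = f1(f2(x)) is represented by the product A1 A2
assert np.allclose(f1(f2(x)), (A1 @ A2) @ x)

# Associativity (2.9.2) on the matrix level
A3 = np.array([[2.0, 0.0],
               [0.0, 3.0]])
assert np.allclose((A1 @ A2) @ A3, A1 @ (A2 @ A3))
```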
The relations between the continuous linear functionals f : Rⁿ → R and the scalar products ⟨· , ·⟩ defined in Rⁿ are given by the Riesz representation theorem:

Theorem 2.2 (Riesz representation theorem). Every continuous linear functional f : Rⁿ → R can be represented by

f(x) = ⟨x, u⟩ ∀x ∈ Rⁿ , (2.10.2)

and the vector u is uniquely defined by f(x).

Chapter 3

Matrix Calculus

For example, see Gilbert [5] and Kraus [10]; in German, Stein et al. [13] and Zurmühl [14].
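Theorem 2.2 can be illustrated in Rⁿ: the representing vector u of a linear functional f is recovered by evaluating f on the standard basis vectors, u_i = f(e_i). A sketch with an arbitrarily chosen functional:

```python
import numpy as np

# An example of a continuous linear functional f : R^3 -> R
def f(x):
    return 2.0 * x[0] - x[1] + 4.0 * x[2]

# Riesz: f(x) = <x, u>, where the components of u are u_i = f(e_i)
u = np.array([f(e) for e in np.eye(3)])    # u = (2, -1, 4)

x = np.array([1.0, 5.0, -2.0])
assert np.isclose(f(x), np.dot(x, u))
```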
The index i is the row index and k is the column index. This matrix is called an m × n-matrix. The order of a matrix is given by the number of rows and columns.

3.1.1 Rectangular Matrix

Something like in equation (3.1.1) is called a rectangular matrix. An m × 1-matrix is called a column matrix or a column vector a, given by

a = [a₁, a₂, . . . , a_m]ᵀ . (3.1.3)

3.1.6 Identity Matrix

The identity matrix is a diagonal matrix given by

1 = [1_ik] , with 1_ik = 0 iff i ≠ k , and 1_ik = 1 iff i = k . (3.1.7)

The transpose of a matrix is a kind of reflection at the main diagonal.

3.1.9 Antisymmetric Matrix

A square matrix is called antisymmetric if the following equation is satisfied,

Aᵀ = −A . (3.1.10)
The product of two matrices A and B is defined by the matrix multiplication

A(l×m) B(m×n) = C(l×n) , (3.2.5)

C_ik = Σ_{ν=1}^{m} A_iν B_νk . (3.2.6)

It is important to notice the condition that the number of columns of the first matrix equals the number of rows of the second matrix, see the index m in equation (3.2.5). Matrix multiplication is associative,

(A B) C = A (B C) , (3.2.7)

and matrix multiplication is also distributive,

A (B + C) = A B + A C .

The trace is additive,

tr (A + B) = tr A + tr B . (3.2.11)

Computing the trace of a matrix product is commutative,

tr (A B) = tr (B A) , (3.2.12)

but still the matrix multiplication in general is not commutative, see equation (3.2.9),

A B ≠ B A . (3.2.13)

The trace of an identity matrix of dimension n is given by tr 1 = n.
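The trace identities (3.2.11)-(3.2.13) can be checked numerically; in particular, tr(A B) = tr(B A) holds even though A B ≠ B A in general. A NumPy sketch with arbitrarily chosen matrices:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[0.0, 1.0],
              [-1.0, 2.0]])

assert np.isclose(np.trace(A + B), np.trace(A) + np.trace(B))   # (3.2.11)
assert np.isclose(np.trace(A @ B), np.trace(B @ A))             # (3.2.12)
assert not np.allclose(A @ B, B @ A)                            # (3.2.13)
assert np.trace(np.eye(4)) == 4.0                               # tr 1 = n
```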
3.2.5 Symmetric and Antisymmetric Square Matrices

Every square matrix M can be described as the sum of a symmetric part S and an antisymmetric part A,

    M_(n×n) = S_(n×n) + A_(n×n) .                                   (3.2.15)

The symmetric part is defined as in equation (3.1.9),

    S = S^T , i.e. S_ik = S_ki .                                    (3.2.16)

The antisymmetric part is defined as in equation (3.1.10),

    A = -A^T , i.e. A_ik = -A_ki , and A_ii = 0 .                   (3.2.17)

For example an antisymmetric matrix looks like this,

    A = [  0   1   5 ]
        [ -1   0  -2 ]
        [ -5   2   0 ] .

The symmetric and antisymmetric parts of a square matrix are given by

    M = 1/2 (M + M^T) + 1/2 (M - M^T) = S + A .                     (3.2.18)

The transposes of the symmetric and the antisymmetric part of a square matrix are given by

    S^T = 1/2 (M + M^T)^T = 1/2 (M^T + M) = S , and                 (3.2.19)
    A^T = 1/2 (M - M^T)^T = 1/2 (M^T - M) = -A .                    (3.2.20)

3.2.6 Transpose of a Matrix Product

The transpose of a matrix product of two matrices is defined by

    (A B)^T = B^T A^T , and                                         (3.2.21)
    (A^T B^T)^T = (B^T)^T (A^T)^T = B A ,                           (3.2.22)

and for more than two matrices

    (A B C)^T = C^T B^T A^T , etc.                                  (3.2.23)

The proof starts with the l×n-matrix C, which is given by the two matrices A and B,

    C_(l×n) = A_(l×m) B_(m×n) ; C_ik = Σ_{ν=1}^{m} A_iν B_νk .

The transpose of the matrix C is given by

    C^T = [C_ki] , with C_ki = Σ_{ν=1}^{m} A_kν B_νi = Σ_{ν=1}^{m} B^T_iν A^T_νk ,

and finally in symbol notation

    C^T = (A B)^T = B^T A^T .

3.2.7 Multiplication with the Identity Matrix

The identity matrix is the multiplicative identity w.r.t. the matrix multiplication,

    A 1 = 1 A = A .                                                 (3.2.24)

3.2.8 Multiplication with a Diagonal Matrix

A diagonal matrix D is given by

    D = [D_ik] = [ D_11  0     ...  0    ]
                 [ 0     D_22  ...  0    ]
                 [ ...              ...  ]
                 [ 0     0     ...  D_nn ]_(n×n) .                  (3.2.25)

Because the matrix multiplication is non-commutative, there exist two possibilities to compute the product of two matrices. The first possibility is the multiplication with the diagonal matrix from the left-hand side; this is called the pre-multiplication,

    D A = [ D_11 a^1 ; D_22 a^2 ; ... ; D_nn a^n ] , with A = [ a^1 ; a^2 ; ... ; a^n ] .   (3.2.26)

Each row of the matrix A, described by a so-called row vector a^i or a row matrix

    a^i = [ a_i1  a_i2  ...  a_in ] ,                               (3.2.27)

is multiplied with the matching diagonal element D_ii. The result is the matrix D A in equation (3.2.26). The second possibility is the multiplication with the diagonal matrix from the right-hand side; this is called the post-multiplication,

    A D = [ a_1 D_11  a_2 D_22  ...  a_n D_nn ] , with A = [ a_1 , a_2 , ... , a_n ] .      (3.2.28)
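The decomposition (3.2.18) and the product rule (3.2.21) are easy to check numerically. The following sketch uses NumPy; the matrix values are arbitrary examples, not taken from the text:

```python
import numpy as np

# Arbitrary example matrix (not from the text).
M = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])

# Symmetric and antisymmetric parts, equation (3.2.18).
S = 0.5 * (M + M.T)
A = 0.5 * (M - M.T)

assert np.allclose(S, S.T)                 # (3.2.16)
assert np.allclose(A, -A.T)                # (3.2.17)
assert np.allclose(M, S + A)               # (3.2.15)
assert np.allclose(np.diag(A), 0.0)        # zero diagonal of the antisymmetric part

# Transpose of a product, equation (3.2.21).
B = np.array([[0.0, 1.0], [2.0, 3.0], [4.0, 5.0]])
assert np.allclose((M @ B).T, B.T @ M.T)
```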
Each column of the matrix A, described by a so-called column vector a_i or a column matrix

    a_i = [ a_1i ; a_2i ; ... ; a_ni ] ,                            (3.2.29)

is multiplied with the matching diagonal element D_ii. The result is the matrix A D in equation (3.2.28).

3.2.9 Exchanging Columns and Rows of a Matrix

Exchanging the i-th and the j-th row of the matrix A is realized by the pre-multiplication with the matrix T,

    T_(n×n) A_(n×n) = Â_(n×n) .                                     (3.2.30)

The matrix T is the identity matrix with its i-th and j-th rows exchanged, i.e. T_ii = T_jj = 0, T_ij = T_ji = 1, and all other elements as in the identity matrix. The product T A equals the matrix A with the rows a^i and a^j exchanged. The matrix T is the same as its inverse, T = T^{-1}. With another matrix T̃ the i-th and the j-th row are exchanged, too; T̃ has the elements T̃_ii = T̃_jj = 0, T̃_ij = -1 and T̃_ji = 1, and it satisfies T̃^{-1} = T̃^T. Furthermore the old j-th row is multiplied by -1. Finally the post-multiplication with such a matrix T̃ exchanges the columns i and j of a matrix.

The ball part of a symmetric matrix S is given by

    V_ii = (1/n) Σ_{i=1}^{n} S_ii = (1/n) tr S , or V = (1/n) tr S .    (3.2.32)

The deviator part is the difference between the matrix S and the volumetric part,

    R_ii = S_ii - V_ii , R = S - (1/n)(tr S) 1 ,                    (3.2.33)

and the non-diagonal elements of the deviator are the elements of the former matrix S,

    R_ik = S_ik , i ≠ k , and R = R^T .                             (3.2.34)

The diagonal elements of the volumetric part are all equal,

    V = [V δ_ik] = diag(V, V, ..., V) .                             (3.2.35)
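Both operations of this page can be checked numerically. The sketch below (NumPy, arbitrary example data) builds the exchange matrix T of equation (3.2.30) and the volumetric/deviatoric split of equations (3.2.32)-(3.2.34):

```python
import numpy as np

n = 4
# Exchange matrix T swapping rows i and j, cf. equation (3.2.30).
i, j = 1, 3
T = np.eye(n)
T[[i, j]] = T[[j, i]]          # permute rows i and j of the identity

A = np.arange(16.0).reshape(n, n)
A_hat = T @ A                   # pre-multiplication exchanges rows i and j
assert np.allclose(A_hat[i], A[j]) and np.allclose(A_hat[j], A[i])
assert np.allclose(T @ T, np.eye(n))   # T equals its own inverse, T = T^{-1}

# Volumetric ("ball") and deviatoric parts of a symmetric matrix S.
S = 0.5 * (A + A.T)
V = (np.trace(S) / n) * np.eye(n)      # (3.2.32)
R = S - V                               # (3.2.33)
assert np.isclose(np.trace(R), 0.0)     # the deviator is trace-free
# Off-diagonal elements of R equal those of S, cf. (3.2.34).
assert np.allclose(R - np.diag(np.diag(R)), S - np.diag(np.diag(S)))
```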
3.3 Inverse of a Square Matrix

3.3.1 Definition of the Inverse

A linear equation system is given by

    A_(n×n) x_(n×1) = y_(n×1) ; A = [A_ik] .                        (3.3.1)

The inversion of this system of equations introduces the inverse of a matrix, A^{-1},

    x = A^{-1} y ; A^{-1} := X = [X_ik] .                           (3.3.2)

The pre-multiplication of A x = y with the inverse A^{-1} implies

    A^{-1} A x = A^{-1} y → x = A^{-1} y , and A^{-1} A = 1 .       (3.3.3)

Finally the inverse of a matrix is defined by the following relations between a matrix A and its inverse A^{-1},

    (A^{-1})^{-1} = A ,                                             (3.3.4)
    A^{-1} A = A A^{-1} ,                                           (3.3.5)
    [A_ik][X_ki] = 1 .                                              (3.3.6)

The solution of the linear equation system exists if and only if the inverse A^{-1} exists. The inverse A^{-1} of a square matrix A exists if the matrix is nonsingular (invertible), i.e. det A ≠ 0; equivalently, the difference d = n - r between the number of columns resp. rows and the rank must be zero, i.e. the rank r of the matrix A_(n×n) must be equal to the number n of columns or rows (r = n). The rank of a rectangular matrix A_(m×n) is defined as the largest number of linearly independent rows (at most m) or columns (at most n); the smaller of the two values m and n is the largest possible value of the rank.

3.3.2 Important Identities of Determinants

1. The determinant stays the same if a row (or a column) is added to another row (or column).

2. The determinant equals zero if the expanded row (or column) is exchanged by another row (or column). In this case two rows (or columns) are the same, i.e. these rows (or columns) are linearly dependent.

3. This is the generalization of the first and second rule. The determinant equals zero if the rows (or columns) of the matrix are linearly dependent. In this case it is possible to produce a row (or column) with all elements equal to zero, and if the determinant is expanded about this row (or column) the determinant itself equals zero.

4. By exchanging two rows (or columns) the sign of the determinant changes.

5. Multiplication with a scalar quantity is defined by

    det(λ A_(n×n)) = λ^n det A_(n×n) ; λ ∈ R .                      (3.3.7)

6. The determinant of a product of two matrices is given by

    det(A B) = det(B A) = det A det B .                             (3.3.8)

3.3.3 Derivation of the Elements of the Inverse of a Matrix

The n column vectors (n-tuples) a_k (k = 1, ..., n) of the matrix A, with a_k ∈ R^n, are linearly independent, i.e. the sum Σ_{ν=1}^{n} α_ν a_ν vanishes only if all α_ν are equal to zero,

    A_(n×n) = [ a_1 a_2 ... a_k ... a_n ] ; a_k = [ A_1k ; A_2k ; ... ; A_nk ] .   (3.3.9)

The a_k span an n-dimensional vector space. Then every other vector, e.g. the (n+1)-th vector a_{n+1} = r ∈ R^n, can be described by a unique linear combination of the vectors a_k, i.e. the vector r ∈ R^n is linearly dependent on the n vectors a_k ∈ R^n. For that reason the linear equation system

    A_(n×n) x_(n×1) = r_(n×1) ; r ≠ 0 ; r ∈ R^n ; x ∈ R^n           (3.3.10)

has a unique solution,

    A^{-1} := X , A X = 1 .                                         (3.3.11)

To compute the inverse X from the equation A X = 1 it is necessary to solve the linear equation system n times, with the unit vector 1_j (j = 1, ..., n) on the right-hand side. The j-th equation system is given by

    A X_j = 1_j , i.e. [ a_1 a_2 ... a_k ... a_n ] [ X_1j ; X_2j ; ... ; X_kj ; ... ; X_nj ] = [ 0 ; 0 ; ... ; 1 ; ... ; 0 ] ,   (3.3.12)

with the inverse represented by its column vectors,

    X = A^{-1} = [ X_1 X_2 ... X_j ... X_n ] ,                      (3.3.13)
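Equation (3.3.12) can be used directly as an algorithm: solve n linear systems, one per unit vector. A minimal sketch in NumPy (the matrix is the 3×3 example used later in section 3.3.4):

```python
import numpy as np

A = np.array([[1.0, 4.0, 0.0],
              [2.0, 1.0, 1.0],
              [-1.0, 0.0, 2.0]])   # example matrix of section 3.3.4

n = A.shape[0]
X = np.zeros((n, n))
for j in range(n):                  # solve A X_j = 1_j, equation (3.3.12)
    e_j = np.zeros(n)
    e_j[j] = 1.0
    X[:, j] = np.linalg.solve(A, e_j)

assert np.allclose(A @ X, np.eye(n))       # A X = 1, equation (3.3.11)
assert np.allclose(X, np.linalg.inv(A))
```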
and the identity matrix also represented by its column vectors,

    1 = [ 1_1 1_2 ... 1_j ... 1_n ] ,                               (3.3.14)

and finally the identity matrix itself,

    1 = [ 1 0 ... 0 ; 0 1 ... 0 ; ... ; 0 0 ... 1 ] .               (3.3.15)

The solutions represented by the vectors X_j can also be computed with determinants.

3.3.4 Computing the Elements of the Inverse with Determinants

The determinant det A_(n×n) of a square matrix with A_ik ∈ R is a real number, defined by Leibniz like this,

    det A_(n×n) = Σ (-1)^I A_1j A_2k A_3l ... ,                     (3.3.16)

where the indices j, k, l, ... are rearranged in all permutations of the numbers 1, 2, ..., n and I is the total number of inversions of the permutation. The determinant det A is established as the sum of all (n!) terms. In every case there exists the same number of positive and negative terms. For example the determinant det A of a 3×3-matrix

    A_(3×3) = A = [ A_11 A_12 A_13 ; A_21 A_22 A_23 ; A_31 A_32 A_33 ]   (3.3.17)

is computed as follows. An even permutation of the numbers 1, 2, 3 is a sequence like

    1 → 2 → 3 , or 2 → 3 → 1 , or 3 → 1 → 2 ,                       (3.3.18)

and an odd permutation is a sequence like

    3 → 2 → 1 , or 2 → 1 → 3 , or 1 → 3 → 2 .                       (3.3.19)

For this example with n = 3 equation (3.3.16) becomes

    det A = A_11 A_22 A_33 + A_12 A_23 A_31 + A_13 A_21 A_32
          - A_31 A_22 A_13 - A_32 A_23 A_11 - A_33 A_21 A_12
        = A_11 (A_22 A_33 - A_32 A_23)
          - A_12 (A_21 A_33 - A_31 A_23)
          + A_13 (A_21 A_32 - A_31 A_22) .                          (3.3.20)

This result is the same as the result of the determinant expansion by minors. In this example the determinant is expanded about its first row. In general the determinant is expanded about the i-th row like this,

    det A_(n×n) = Σ_{j=1}^{n} A_ij det A*_ij (-1)^{i+j} = Σ_{j=1}^{n} A_ij Â_ij .   (3.3.21)

A*_ij is the submatrix created by eliminating the i-th row and the j-th column of A. The factor Â_ij = (-1)^{i+j} det A*_ij is the so-called cofactor of the element A_ij. For this factor again the determinant expansion can be used.

Example: Simple 3×3-matrix. The matrix

    A = [ 1 4 0 ; 2 1 1 ; -1 0 2 ]

is expanded about the first row,

    det A = 1 · (-1)^{1+1} · det [ 1 1 ; 0 2 ]
          + 4 · (-1)^{1+2} · det [ 2 1 ; -1 2 ]
          + 0 · (-1)^{1+3} · det [ 2 1 ; -1 0 ] ,

and finally the result is

    det A = 1 · 2 - 4 · 5 + 0 · 1 = -18 .

In order to compute the inverse X of a matrix, the determinant det A is calculated by expanding about the i-th row of the matrix A. The matrix A_(n×n) is assumed to have linearly independent columns, i.e. det A ≠ 0. Equation (3.3.21) implies

    Σ_{j=1}^{n} A_ij Â_ij = det A = 1 · det A .                     (3.3.22)

The second rule about determinants implies that exchanging the expanded row i by the row k leads to a linearly dependent matrix,

    Σ_{j=1}^{n} A_kj Â_ij = 0 = 0 · det A if i ≠ k ,                (3.3.23)
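The cofactor expansion (3.3.21) translates directly into a recursive routine. The sketch below always expands about the first row and reproduces the hand computation of the example:

```python
import numpy as np

def det_laplace(A):
    """Determinant by expansion about the first row, cf. equation (3.3.21)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        # Submatrix A*_{1j}: eliminate row 0 and column j.
        sub = np.delete(np.delete(A, 0, axis=0), j, axis=1)
        total += A[0, j] * (-1) ** j * det_laplace(sub)   # cofactor sum
    return total

A = [[1, 4, 0], [2, 1, 1], [-1, 0, 2]]      # example matrix of the text
assert np.isclose(det_laplace(A), -18.0)     # matches the hand computation
assert np.isclose(det_laplace(A), np.linalg.det(A))
```

Recursion is of course only suitable for small n; the cost grows like n!, which is exactly why the text calls the characteristic-equation approach "very complicated" for large matrices.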
The elements X_jk of the inverse X = A^{-1} are defined by

    Σ_{j=1}^{n} A_ij X_jk = δ_ik ; A X = 1 ,                        (3.3.27)

and comparing (3.3.26) with (3.3.27) implies

    X_jk = Â_kj / det A ; [X_jk] = A^{-1} .                         (3.3.29)

3. The order of inversion and transposition can be exchanged,

    (A^{-1})^T = (A^T)^{-1} .                                       (3.3.33)

Proof. The inverse is defined by A A^{-1} = 1, and transposing both sides gives

    (A A^{-1})^T = 1^T = 1 = (A^{-1})^T A^T ,

i.e. (A^{-1})^T is the inverse of A^T.

4. If the matrix A is symmetric, then the inverse matrix A^{-1} is symmetric, too,

    A = A^T → A^{-1} = (A^{-1})^T .                                 (3.3.34)
3.4 Linear Mappings of Affine Vector Spaces

Comparing this with equation (3.4.20) implies

    T^T y = T^T A T x̄ ,

and finally

    ȳ = Ā x̄ ,                                                      (3.4.21)
    Ā = T^T A T .                                                   (3.4.22)

The matrix product Ā = T^T A T is called the congruence transformation of the matrix A. The matrices A and Ā are called congruent matrices.

The vectors x and y are transformed in the similar way and in the congruent way with the so-called orthogonal matrix T = Q, det Q ≠ 0,

    x = Q x̄ , y = Q ȳ ⇒ ȳ = Q^{-1} y → similar transformation,     (3.4.29)
                          ȳ = Q^T y → congruent transformation.     (3.4.30)

The transformation matrices are called orthogonal if they fulfill the relations

    Q^{-1} = Q^T or Q Q^T = 1 .                                     (3.4.31)

For the orthogonal matrices the following identities hold.
• If a matrix is orthogonal, its inverse equals the transpose of the matrix.

• The product of orthogonal matrices is again orthogonal.

The most important usage of these rotation matrices is the rotation transformation of coordinates. For example the rotation transformation in R² is given by

    y = Q ȳ ,                                                       (3.4.33)
    [ y_1 ; y_2 ] = [ cos α  -sin α ; sin α  cos α ] [ ȳ_1 ; ȳ_2 ] .   (3.4.34)

(Figure: rotation of the coordinate axes y_1, y_2 into the axes ȳ_1, ȳ_2 by the angle α.)

The inversion of equation (3.4.33) with the aid of determinants is solved step by step, starting with computing the determinant,

    det Q = cos²α + sin²α = 1 ,                                     (3.4.38)

then using the general form of the equation to compute the elements of the inverse of the matrix,

    Q̂_ki = (-1)^{k+i} det Q*_ki ,                                   (3.4.39)

for the different elements,

    X_11 = Q_22 / det Q = cos α ,                                   (3.4.40)
    X_12 = (-1)^3 Q_12 = (-1)^3 (-sin α) = +sin α ,                 (3.4.41)
    X_21 = (-1)^3 Q_21 = (-1)^3 (sin α) = -sin α ,                  (3.4.42)
    X_22 = Q_11 = cos α ,                                           (3.4.43)

and finally

    X = Q^{-1} = [ cos α  sin α ; -sin α  cos α ] .                 (3.4.44)

3.4.7 The Gauss Transformation

Let A_(m×n) be a real valued matrix, A_ik ∈ R. If m > n, then the matrix A is nonsingular w.r.t. the columns, i.e. the column vectors are linearly independent. The Gauss transformation is defined by

    B = A^T A , with B ∈ R^{n×n} , A^T ∈ R^{n×m} , and A ∈ R^{m×n} .   (3.4.46)
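The rotation matrix (3.4.34) makes the orthogonality relations (3.4.31) and the inverse (3.4.44) concrete; a quick numerical check (arbitrary angle):

```python
import numpy as np

alpha = 0.3                         # example angle (arbitrary)
Q = np.array([[np.cos(alpha), -np.sin(alpha)],
              [np.sin(alpha),  np.cos(alpha)]])   # rotation matrix, (3.4.34)

assert np.isclose(np.linalg.det(Q), 1.0)          # det Q = 1, (3.4.38)
assert np.allclose(Q @ Q.T, np.eye(2))            # orthogonality, (3.4.31)
assert np.allclose(np.linalg.inv(Q), Q.T)         # Q^{-1} = Q^T, cf. (3.4.44)
```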
The matrix B is nonsingular,

    det B ≠ 0 .                                                     (3.4.49)

This product was introduced by Gauss in order to compute the so-called normal equations. The matrix A is given by its column vectors,

    A = [ a_1 a_2 ... a_n ] , a_i = i-th column vector of A ,       (3.4.50)

and the Gauss transformation becomes

    B = A^T A                                                       (3.4.51)
      = [ a_1^T ; a_2^T ; ... ; a_n^T ] [ a_1 a_2 ... a_n ]
      = [ a_1^T a_1  a_1^T a_2  ...  a_1^T a_n ;
          a_2^T a_1  a_2^T a_2  ...  ...       ;
          ...                                 ;
          a_n^T a_1  ...        ...  a_n^T a_n ]_(n×n) .            (3.4.52)

An element B_ik of the product matrix is the scalar product of the i-th column vector with the k-th column vector of A,

    B_ik = a_i^T a_k .                                              (3.4.53)

The diagonal elements are the quadratic values of the norms of the column vectors, and these values are always positive (a_i ≠ 0). The trace of the product A^T A, i.e. the sum of all A_ik², is the quadratic value of a matrix norm, the so-called Euclidean matrix norm N(A),

    N(A) = sqrt( tr(A^T A) ) = sqrt( Σ_{i,k} A_ik² ) .              (3.4.54)

For an example matrix with the squared column norms a_1^T a_1 = 13 and a_2^T a_2 = 14 this yields

    N(A) = sqrt(13 + 14) = 3 sqrt(3) = sqrt( tr(A^T A) ) .

The matrix B = A^T A is positive definite.
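The properties of the Gauss transformation are easy to verify numerically; the sketch below uses an arbitrary 3×2 example matrix with linearly independent columns:

```python
import numpy as np

# Example m×n matrix with m > n and linearly independent columns (arbitrary values).
A = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [2.0, 3.0]])

B = A.T @ A                                   # Gauss transformation, (3.4.46)
assert B.shape == (2, 2)
assert np.allclose(B, B.T)                    # B is symmetric
assert np.linalg.det(B) != 0                  # B is nonsingular, (3.4.49)

# Diagonal entries are the squared column norms, (3.4.53).
assert np.allclose(np.diag(B), [A[:, i] @ A[:, i] for i in range(2)])

# Euclidean (Frobenius) matrix norm, (3.4.54).
N = np.sqrt(np.trace(A.T @ A))
assert np.isclose(N, np.sqrt((A ** 2).sum()))

# Positive definiteness of B: x^T B x > 0 for x != 0.
x = np.array([0.7, -1.2])
assert x @ B @ x > 0
```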
3.5 Quadratic Forms

Let A be a square matrix with

    det A ≠ 0 , and A = A^T .                                       (3.5.2)

Then the product

    α := x^T y = x^T A x                                            (3.5.3)

is a real number, α ∈ R, and is called the quadratic form of A. The following conditions hold,

    α = α^T , scalar quantities are invariant w.r.t. transposition, and   (3.5.4)
    α = x^T A x = α^T = x^T A^T x , because A = A^T ,               (3.5.5)

i.e. the matrix A must be symmetric. The scalar quantity α, and then the matrix A, too, are called positive definite (or negative definite) if the following conditions hold,

    α = x^T A x > 0 (resp. < 0) for every x ≠ 0 , and α = 0 iff x = 0 .   (3.5.6)

It is necessary that the determinant does not equal zero, det A ≠ 0, i.e. the matrix A must be nonsingular. If there exists a vector x ≠ 0 such that α = 0, then the form α = x^T A x is called semidefinite. In this case the matrix A is singular, i.e. det A = 0, and the homogeneous system of equations,

    A x = 0 → x^T A x = 0 , iff x ≠ 0 , and det A = 0 ,             (3.5.7)

or resp.

    a_1 x_1 + a_2 x_2 + ... + a_n x_n = 0 ,                         (3.5.8)

has nontrivial solutions, because of the linear dependence of the columns of the matrix A. The condition x^T A x = 0 can only hold for a vector x ≠ 0 if the determinant of the matrix A equals zero, det A = 0.

3.5.2 Congruence Transformation of a Matrix

Let α = x^T A x be a quadratic form, given by

    α = x^T A x , and A^T = A ,                                     (3.5.9)

and let B = T^T A T, where T is a real nonsingular matrix. Then the matrices B and A are called congruent to each other,

    A ∼ B .                                                         (3.5.13)

The congruence transformation preserves the symmetry of the matrix A, because the following equation holds,

    B = B^T .                                                       (3.5.14)

3.5.3 Derivatives of a Quadratic Form

The quadratic form

    α = x^T A x , with A^T = A ,                                    (3.5.15)

is to be partially derived w.r.t. the components of the vector x. The result forms the column matrix ∂α/∂x. The partial derivatives of x and x^T w.r.t. the component x_i are

    ∂x/∂x_i = [ 0 ; ... ; 1 ; ... ; 0 ] = e_i , the i-th unit vector,   (3.5.16)
    ∂x^T/∂x_i = [ 0 0 ... 1 ... 0 ] = e_i^T .                       (3.5.17)

With equations (3.5.16) and (3.5.17) the derivative of the quadratic form is given by

    ∂α/∂x_i = e_i^T A x + x^T A e_i .                               (3.5.18)

With the symmetry of the matrix A,

    A = A^T ,                                                       (3.5.19)

the second part of equation (3.5.18) is rewritten as

    (x^T A e_i)^T = e_i^T A^T x = e_i^T A x ,                       (3.5.20)
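Combining (3.5.18) and (3.5.20) gives ∂α/∂x_i = 2 e_i^T A x for a symmetric matrix A, i.e. ∂α/∂x = 2 A x. A finite-difference check of this result (NumPy, arbitrary example data):

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 3.0]])     # symmetric example matrix
x = np.array([0.5, -1.0])

alpha = x @ A @ x                           # quadratic form, (3.5.15)
grad = 2.0 * A @ x                          # gradient of the quadratic form

# Central finite differences as an independent check.
h = 1e-6
num = np.zeros(2)
for i in range(2):
    e = np.zeros(2)
    e[i] = h
    num[i] = ((x + e) @ A @ (x + e) - (x - e) @ A @ (x - e)) / (2 * h)

assert np.allclose(grad, num, atol=1e-5)
```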
3.6 Matrix Eigenvalue Problem

The special eigenvalue problem is given by

    y_0 = λ x_0 ,                                                   (3.6.3)
    y_0 = A x_0 = λ x_0 ,                                           (3.6.4)
    (A - λ 1) x_0 = 0 .                                             (3.6.5)

The so-called special eigenvalue problem is characterized by the eigenvalues λ appearing only on the main diagonal. For the homogeneous linear equation system in x_0 there exists the trivial solution x_0 = 0. A nontrivial solution exists only if the condition det(A - λ 1) = 0 is fulfilled. This equation is called the characteristic equation, and the left-hand side det(A - λ 1) is called the characteristic polynomial. The components of the vector x_0 are as yet unknown; the vector x_0 can be computed up to normalization, because the principal axes are searched. Solving the determinant implies for a matrix with n rows a polynomial of n-th degree. The roots, sometimes also called the zeros of this equation or polynomial, are the eigenvalues. The first and the last coefficient of the polynomial are given by the trace and the determinant of the matrix A, respectively.
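The special eigenvalue problem (3.6.5) can be solved numerically; the sketch below (arbitrary symmetric example matrix) checks that each computed eigenpair satisfies (A - λ 1) x_0 = 0 and that the eigenvalues are roots of the characteristic polynomial:

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])     # symmetric example matrix

lam, X = np.linalg.eig(A)
for k in range(2):
    x0 = X[:, k]
    # (A - λ 1) x_0 = 0, equation (3.6.5)
    assert np.allclose((A - lam[k] * np.eye(2)) @ x0, 0.0)
    # The characteristic polynomial det(A - λ 1) vanishes at the eigenvalues.
    assert np.isclose(np.linalg.det(A - lam[k] * np.eye(2)), 0.0)
```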
With the polynomial factorization the equation p(λ) = 0, i.e. the polynomial (3.6.7), can be described by

    p(λ) = (λ - λ_1)(λ - λ_2) · ... · (λ - λ_n) .                   (3.6.10)

Comparing this with Newton's relation for a symmetric polynomial, the equations (3.6.8) and (3.6.9) can be rewritten in terms of the eigenvalues λ_i and the eigenvectors x_0i. If for example the matrix (A - λ 1) has the reduction of rank d = 1 and the vector x_0i^(1) is an arbitrary solution of the eigenvalue problem, then the equation

    x_0i = c x_0i^(1) ,                                             (3.6.14)

with the parameter c, represents the general solution of the eigenvalue problem. If the reduction of rank of the matrix is larger than 1, then there exist d > 1 linearly independent eigenvectors. As a rule of thumb,

    eigenvectors of different eigenvalues are linearly independent.

For a symmetric matrix A the following identities hold,

    x_1^T (A - λ_2 1) x_2 = 0 ,                                     (3.6.17)

and finally equation (3.6.16) subtracted from equation (3.6.17) gives

    x_1^T A x_2 - λ_2 x_1^T x_2 - x_2^T A x_1 + λ_1 x_2^T x_1 = 0 .     (3.6.18)

With A being symmetric,

    α = x_1^T A x_2 = (x_1^T A x_2)^T = x_2^T A x_1 , and A = A^T ,     (3.6.19)

this results in (λ_1 - λ_2) x_1^T x_2 = 0.

2nd Rule. A real, nonsingular and symmetric square matrix with n rows has exactly n real eigenvalues λ_i, being the roots of its characteristic equation.

Proof. Let the eigenvalues be complex numbers, given by

    λ_1 = β + iγ , and λ_2 = β - iγ ,                               (3.6.22)

then the eigenvectors are given by

    x_1 = b + ic , and x_2 = b - ic .                               (3.6.23)

3.6.2 Rayleigh Quotient

The largest eigenvalue λ_1 of a symmetric matrix can be estimated with the Rayleigh quotient. The special eigenvalue problem y = A x = λ x, or

    (A - λ 1) x = 0 , with A = A^T , det A ≠ 0 , A_ij ∈ R ,         (3.6.27)
has the n eigenvalues, in order of magnitude,

    |λ_1| ≥ |λ_2| ≥ |λ_3| ≥ ... ≥ |λ_n| , with λ ∈ R .              (3.6.28)

For large matrices A the setting-up and the solution of the characteristic equation is very complicated. Furthermore for some problems it is sufficient to know only the largest and/or the smallest eigenvalue, e.g. for a stability problem only the smallest eigenvalue is of interest, because this is the critical load. Therefore the so-called direct method by von Mises to compute the approximated eigenvalue λ_1 is interesting. For determining the smallest, critical load case with this method it is necessary to compute the inverse before starting the actual iteration. This so-called von Mises iteration is given by

    z_ν = A z_{ν-1} = A^ν z_0 .                                     (3.6.29)

In this iterative process the vector z_ν converges to the direction of x_1, i.e. the vector converges to the eigenvector associated with the eigenvalue λ_1 with the largest absolute value. The starting vector z_0 is represented by the linearly independent eigenvectors x_i, and the iterated vectors behave like

    z_ν → λ_1^ν c_1 x_1 ,                                           (3.6.32)
    z_{ν+1} → λ_1 z_ν .                                             (3.6.33)

The convergence improves as the ratio |λ_1| / |λ_2| increases. A very good approximated value Λ_1 for the dominant (largest) eigenvalue λ_1 is established with the so-called Rayleigh quotient,

    Λ_1 = R[z_ν] = (z_ν^T z_{ν+1}) / (z_ν^T z_ν) = (z_ν^T A z_ν) / (z_ν^T z_ν) , with Λ_1 ≤ λ_1 .   (3.6.35)

The numerator and the denominator of the Rayleigh quotient include scalar products of the approximated vectors. For this reason the information of all components of the approximated vectors is used in this approximation.

3.6.3 The General Eigenvalue Problem

The general eigenvalue problem is defined by

    A x = λ B x ,                                                   (3.6.36)
    (A - λ B) x = 0 ,                                               (3.6.37)

with the matrices A and B being nonsingular. The eigenvalues λ are multiplied with an arbitrary matrix B and not with the identity matrix 1. This problem is reduced to the special eigenvalue problem by multiplication with the inverse of the matrix B from the left-hand side,

    B^{-1} A x = λ x ,                                              (3.6.38)
    (B^{-1} A - λ 1) x = 0 .                                        (3.6.39)

Even if the matrices A and B are symmetric, the matrix C = B^{-1} A is in general a nonsymmetric matrix, because the matrix multiplication is noncommutative.

3.6.4 Similarity Transformation

The transformation matrix T is nonsingular, i.e. det T ≠ 0, and T_ik ∈ R. This implies

    A T x̃ = λ T x̃ .

The determinant of the inverse of the matrix T is given by

    det(T^{-1}) = 1 / det T ,                                       (3.6.44)

and the determinant of the product is split into the product of determinants,

    det(T^{-1} A T) = det T^{-1} det A det T ,                      (3.6.45)

i.e. det Ã = det A.
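The von Mises iteration (3.6.29) together with the Rayleigh quotient (3.6.35) is a few lines of code. A minimal sketch (arbitrary symmetric example matrix; the iterate is normalized in each step to avoid overflow, which does not change the direction):

```python
import numpy as np

A = np.array([[4.0, 1.0], [1.0, 3.0]])     # symmetric example matrix

z = np.array([1.0, 0.0])                   # starting vector z_0
for _ in range(50):                        # von Mises iteration, (3.6.29)
    z = A @ z
    z /= np.linalg.norm(z)                 # normalize to avoid overflow

# Rayleigh quotient (3.6.35) approximates the dominant eigenvalue.
Lambda1 = (z @ A @ z) / (z @ z)
assert np.isclose(Lambda1, np.max(np.linalg.eigvalsh(A)))
```

The convergence rate is governed by the ratio |λ_1|/|λ_2|, exactly as stated in the text; for this example the ratio is roughly 1.9, so 50 iterations are more than enough.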
Rule. The eigenvalues of the matrix A do not change if the matrix is transformed into the similar matrix Ã,

    det(T^{-1} A T - λ 1) = det(Ã - λ 1) = det(A - λ 1) = 0 .       (3.6.46)

3.6.5 Transformation into a Diagonal Matrix

The nonsingular symmetric matrix A with n rows contains n linearly independent eigenvectors x_i if and only if for any multiple eigenvalue λ_σ (i.e. a multiple root of the characteristic polynomial) of multiplicity p_σ the reduction of rank d_σ of the characteristic matrix (A - λ_σ 1) equals the multiplicity of the multiple eigenvalue, d_σ = p_σ (with σ = 1, 2, ..., s). The quantity s describes the number of different eigenvalues. The n linearly independent normed eigenvectors x_i of the matrix A are combined as column vectors to form the nonsingular eigenvector matrix,

    X = [x_1, x_2, ..., x_n] , with det X ≠ 0 .                     (3.6.47)

The equation of eigenvalues is given by

    A x_i = λ_i x_i ,                                               (3.6.48)
    A X = [A x_1, A x_2, ..., A x_n] = [λ_1 x_1, λ_2 x_2, ..., λ_n x_n] ,   (3.6.49)
    [λ_1 x_1, ..., λ_n x_n] = [x_1, x_2, ..., x_n] diag(λ_1, λ_2, ..., λ_n) ,   (3.6.50)
    [λ_1 x_1, ..., λ_n x_n] = X Λ .                                 (3.6.51)

Combining the results implies

    A X = X Λ ,                                                     (3.6.52)

and finally

    X^{-1} A X = Λ , with det X ≠ 0 .                               (3.6.53)

Therefore the diagonal matrix of eigenvalues Λ can be computed by the similarity transformation of the matrix A with the eigenvector matrix X. In the opposite direction a transformation matrix T must fulfill some conditions in order to transform a matrix A by a similarity transformation into a diagonal matrix, i.e.

    T^{-1} A T = D = diag(D_ii) ,                                   (3.6.54)

or

    A T = T D , with T = [t_1, ..., t_n] ,                          (3.6.55)

and finally

    A t_i = D_ii t_i .                                              (3.6.56)

The column vectors t_i of the transformation matrix T are the n linearly independent eigenvectors of the matrix A with the associated eigenvalues λ_i = D_ii.

3.6.6 Cayley-Hamilton Theorem

The Cayley-Hamilton theorem says that an arbitrary square matrix A satisfies its own characteristic equation. If the characteristic polynomial for the matrix A is

    p(λ) = det(λ 1 - A)                                             (3.6.57)
         = λ^n + a_{n-1} λ^{n-1} + ... + a_1 λ + a_0 ,              (3.6.58)

then the matrix A solves the Cayley-Hamilton equation

    p(A) = A^n + a_{n-1} A^{n-1} + ... + a_1 A + a_0 1 = 0 ,        (3.6.59)

where the matrix A to the power n is given by

    A^n = A A ... A .                                               (3.6.60)

The matrix with the exponent n, written A^n, can therefore be described by a linear combination of the matrices with the exponents n-1 down to 0, i.e. A^{n-1} to A^0 = 1. If the matrix A is nonsingular, then negative exponents are allowed, too. Solving the equation

    p(A) = 0                                                        (3.6.61)

for the inverse gives

    A^{-1} = - (1/a_0) ( A^{n-1} + a_{n-1} A^{n-2} + ... + a_1 1 ) , a_0 ≠ 0 .   (3.6.62)

Furthermore the power series P(A) of a matrix A, with the eigenvalues λ_σ appearing µ_σ-times in the minimal polynomial, converges if and only if the usual power series converges for all eigenvalues λ_σ of the matrix A. For example

    e^A = 1 + A + (1/2!) A² + (1/3!) A³ + ... ,                     (3.6.63)
    cos(A) = 1 - (1/2!) A² + (1/4!) A⁴ - + ... ,                    (3.6.64)
    sin(A) = A - (1/3!) A³ + (1/5!) A⁵ - + ... .                    (3.6.65)

3.6.7 Proof of the Cayley-Hamilton Theorem

A vector z ∈ R^n is represented by a combination of the linearly independent eigenvectors x_i of the matrix A, which is assumed to be similar to a diagonal matrix with n rows,

    z = c_1 x_1 + c_2 x_2 + ... + c_n x_n ,                         (3.6.66)

with the c_i called the evaluation coefficients. Introducing some basic vectors and matrices in order to establish the evaluation theorem,

    X = [x_1, x_2, ..., x_n] ,                                      (3.6.67)
    c = [c_1, c_2, ..., c_n]^T ,                                    (3.6.68)
    X c = z , and                                                   (3.6.69)
    c = X^{-1} z .                                                  (3.6.70)
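The diagonalization (3.6.52)-(3.6.53) can be checked directly with a small symmetric example matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])     # symmetric example matrix

lam, X = np.linalg.eig(A)                  # eigenvector matrix X, cf. (3.6.47)
Lambda = np.diag(lam)                      # diagonal matrix of eigenvalues

assert np.allclose(A @ X, X @ Lambda)                    # A X = X Λ, (3.6.52)
assert np.allclose(np.linalg.inv(X) @ A @ X, Lambda)     # X^{-1} A X = Λ, (3.6.53)
```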
Let z_0 be an arbitrary real vector to start with, and establish the iterated vectors

    z_1 = A z_0 ,                                                   (3.6.71)
    z_2 = A z_1 = A² z_0 ,                                          (3.6.72)
    ...
    z_n = A z_{n-1} = A^n z_0 .                                     (3.6.73)

The n+1 vectors z_0 to z_n are linearly dependent, because every n+1 vectors in R^n must be linearly dependent. The characteristic polynomial of the matrix A is given by

    p(λ) = det(λ 1 - A)                                             (3.6.74)
         = a_0 + a_1 λ + ... + a_{n-1} λ^{n-1} + λ^n .              (3.6.75)

The relation between the starting vector z_0 and the first n iterated vectors z_i is given by the evaluation theorem

    z_0 = c_1 x_1 + c_2 x_2 + ... + c_n x_n ,                       (3.6.76)

which, together with the eigenvalue problem, leads to the result

    a_0 z_0 + a_1 z_1 + ... + z_n = (a_0 + a_1 λ_1 + ... + a_{n-1} λ_1^{n-1} + λ_1^n) c_1 x_1
                                  + (a_0 + a_1 λ_2 + ... + a_{n-1} λ_2^{n-1} + λ_2^n) c_2 x_2
                                  + ...
                                  + (a_0 + a_1 λ_n + ... + a_{n-1} λ_n^{n-1} + λ_n^n) c_n x_n .   (3.6.82)

With equations (3.6.71)-(3.6.73),

    (a_0 1 + a_1 A + ... + A^n) z_0 = p(λ_1) c_1 x_1 + p(λ_2) c_2 x_2 + ... + p(λ_n) c_n x_n ,
    p(A) z_0 = 0 · c_1 x_1 + 0 · c_2 x_2 + ... + 0 · c_n x_n ,

and finally

    p(A) z_0 = a_0 z_0 + a_1 z_1 + ... + z_n = 0 .                  (3.6.83)

Inserting the iterated vectors z_k = A^k z_0, see equations (3.6.71)-(3.6.73), in equation (3.6.83) shows that p(A) z_0 = 0 for an arbitrary starting vector z_0, and therefore p(A) = 0.
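The theorem itself is easy to verify numerically: build the coefficients of the characteristic polynomial and evaluate p(A) by a Horner scheme in the matrix A. The example reuses the 3×3 matrix from section 3.3.4:

```python
import numpy as np

A = np.array([[1.0, 4.0, 0.0],
              [2.0, 1.0, 1.0],
              [-1.0, 0.0, 2.0]])           # example matrix of section 3.3.4

# Coefficients of p(λ) = det(λ1 - A), highest power first.
coeffs = np.poly(A)

# Evaluate p(A) with a Horner scheme in the matrix A.
P = np.zeros_like(A)
for c in coeffs:
    P = P @ A + c * np.eye(3)

# Cayley-Hamilton, equation (3.6.59): p(A) is the zero matrix.
assert np.allclose(P, np.zeros((3, 3)), atol=1e-9)
```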
Chapter 4

Vector and Tensor Algebra

For example SIMMONDS [12], HALMOS [6], MATTHEWS [11], and ABRAHAM, MARSDEN, and RATIU [1]. And in German DE BOER [3], STEIN ET AL. [13], and IBEN [7].
4.1 Index Notation and Basis

4.1.1 The Summation Convention

For a product the summation convention, introduced by Einstein, holds if one index of summation is a superscript index and the other one is a subscript index. This repeated index implies that the term is to be summed from i = 1 to i = n in general,

    Σ_{i=1}^{n} a^i b_i = a^1 b_1 + a^2 b_2 + ... + a^n b_n = a^i b_i ,   (4.1.1)

and for the special case of n = 3,

    Σ_{j=1}^{3} v^j g_j = v^1 g_1 + v^2 g_2 + v^3 g_3 = v^j g_j ,   (4.1.2)

or even for two suffixes,

    Σ_{i=1}^{3} Σ_{k=1}^{3} g_ik u^i v^k = g_11 u^1 v^1 + g_12 u^1 v^2 + g_13 u^1 v^3
                                         + g_21 u^2 v^1 + g_22 u^2 v^2 + g_23 u^2 v^3
                                         + g_31 u^3 v^1 + g_32 u^3 v^2 + g_33 u^3 v^3 = g_ik u^i v^k .   (4.1.3)

If the indices of two terms are written in brackets, it is forbidden to sum these terms,

    v^(m) g_(m) ≠ Σ_{m=1}^{3} v^m g_m .                             (4.1.6)

4.1.2 The Kronecker Delta

The Kronecker delta is defined by

    δ^i_j = δ_j^i = δ^ij = δ_ij = { 1 if i = j ; 0 if i ≠ j } .     (4.1.7)

An index i, for example in a 3-dimensional space, is substituted with another index j by multiplication with the Kronecker delta,

    Σ_{j=1}^{3} δ_ij v_j = δ_ij v_j = δ_i1 v_1 + δ_i2 v_2 + δ_i3 v_3 = v_i ,   (4.1.8)

or with a summation over two indices,

    Σ_{i=1}^{3} Σ_{j=1}^{3} δ_ij v^i u^j = δ_ij v^i u^j .
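NumPy's `einsum` implements exactly this convention: repeated indices are summed, free indices survive. A small check of equations (4.1.1), (4.1.3) and (4.1.8):

```python
import numpy as np

n = 3
a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

# a^i b_i summed over the repeated index, equation (4.1.1).
assert np.isclose(np.einsum('i,i->', a, b),
                  sum(a[i] * b[i] for i in range(n)))

# The Kronecker delta substitutes an index, equation (4.1.8).
delta = np.eye(n)
v = np.array([7.0, 8.0, 9.0])
assert np.allclose(np.einsum('ij,j->i', delta, v), v)

# Summation over two indices, g_ik u^i v^k, equation (4.1.3).
g = np.arange(9.0).reshape(n, n)
u, w = a, b
assert np.isclose(np.einsum('ik,i,k->', g, u, w),
                  sum(g[i, k] * u[i] * w[k] for i in range(n) for k in range(n)))
```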
For the special case of Cartesian coordinates the Kronecker delta is identified with the unit matrix or identity matrix,

    [δ_ij] = [ 1 0 0 ; 0 1 0 ; 0 0 1 ] .                            (4.1.12)

4.1.3 The Covariant Basis and Metric Coefficients

In an n-dimensional affine vector space R^n_aff ↔ E^n = V a vector v is given by

    v = v^i g_i , with v, g_i ∈ V , and i = 1, 2, 3 .               (4.1.13)

The vectors g_i are chosen linearly independent, i.e. they form a basis. If the index i is a subscript index, the vectors are defined as the

    g_i , covariant base vectors,                                   (4.1.14)

and the scalar products

    g_i · g_k = g_ik                                                (4.1.16)
              = g_k · g_i = g_ki ,                                  (4.1.17)
    g_ik = g_ki ,                                                   (4.1.18)

define the symmetric covariant metric coefficients g_ik. The determinant of the coefficient matrix [g_ik] is nonzero if and only if the g_i form a basis. For the Cartesian basis the metric coefficients vanish except the ones for i = k, and the coefficient matrix becomes the identity matrix or the Kronecker delta,

    e_i · e_k = δ_ik = { 1 if i = k ; 0 if i ≠ k } .                (4.1.20)

4.1.4 The Contravariant Basis and Metric Coefficients

Assume a new basis, reciprocal to the covariant base vectors g_i, by introducing the

    g^k , contravariant base vectors,

in the same space as the covariant base vectors. These contravariant base vectors are defined by

    g_i · g^k = δ_i^k = { 1 if i = k ; 0 if i ≠ k } ,               (4.1.21)

and with the covariant coordinates v_i the vector v is given by

    v = v_i g^i , with v, g^i ∈ V , and i = 1, ..., n .             (4.1.22)

Figure 4.1: Example of co- and contravariant base vectors in E².

The scalar products of the contravariant base vectors g^i,

    g^i · g^k = g^ik                                                (4.1.27)
              = g^k · g^i = g^ki ,                                  (4.1.28)
    g^ik = g^ki ,                                                   (4.1.29)

define the symmetric contravariant metric coefficients g^ik.
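For a concrete basis the duality condition (4.1.21) determines the contravariant base vectors: if the covariant base vectors are the columns of a matrix G, the contravariant base vectors are the rows of G^{-1}. A sketch in E² with arbitrary example vectors:

```python
import numpy as np

# Covariant base vectors of E^2 as columns (arbitrary, linearly independent).
g1 = np.array([1.0, 0.0])
g2 = np.array([1.0, 1.0])
G = np.column_stack([g1, g2])

# Covariant metric coefficients g_ik = g_i · g_k, equation (4.1.16).
g_cov = G.T @ G

# Contravariant base vectors satisfy g_i · g^k = δ_i^k, equation (4.1.21):
# the rows of G^{-1} are the contravariant base vectors.
G_contra = np.linalg.inv(G)
for i in range(2):
    for k in range(2):
        assert np.isclose(G[:, i] @ G_contra[k, :], float(i == k))

# The contravariant metric coefficients are the inverse of the covariant ones.
g_contra = G_contra @ G_contra.T
assert np.allclose(g_contra @ g_cov, np.eye(2))
```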
Figure 4.2: Special case of a Cartesian basis (e_1 = e^1, e_2 = e^2, e_3 = e^3).

Lemma 4.1. The covariant base vectors transform with the contravariant metric coefficients into the contravariant base vectors,

    g^k = g^ki g_i .    Raising an index with the contravariant metric coefficients.

The same argumentation for the covariant metric coefficients starts with

    g_k = A_km g^m ,                                                (4.1.36)
    g_k · g_i = A_km δ_i^m ,                                        (4.1.37)
    g_ki = A_ki ,                                                   (4.1.38)

and finally implies

    g_k = g_ki g^i .                                                (4.1.39)

As a rule of thumb:

Lemma 4.2. The contravariant base vectors transform with the covariant metric coefficients into the covariant base vectors,

    g_k = g_ki g^i .    Lowering an index with the covariant metric coefficients.
The coordinates transform in the same way as the base vectors,

    v_k = g_ki v^i , and g_k = g_ki g^i .                           (4.1.53)

4.2 Products of Vectors

The Euclidean norm connects elements of the same dimension in a vector space. The absolute values of the vectors u and v are represented by

    |u| = ||u||_2 = sqrt(u · u) ,                                   (4.2.8)
    |v| = ||v||_2 = sqrt(v · v) .                                   (4.2.9)

The scalar product or inner product of two vectors in V is a bilinear mapping from two vectors to a scalar α ∈ R.

Theorem 4.1. In the 3-dimensional Euclidean vector space E³ one important application of the scalar product is the definition of the work as the force times the distance moved in the direction of the force.

Theorem 4.2. The scalar product in the 3-dimensional Euclidean vector space E³ is written u · v and is defined as the product of the absolute values of the two vectors and the cosine of the angle between them,

    α = u · v := |u| |v| cos ϕ .                                    (4.2.11)
86 Chapter 4. Vector and Tensor Algebra 4.2. Products of Vectors 87
The quantity |v| cos φ represents in the 3-dimensional Euclidean vector space E^3 the projection of the vector v in the direction of the vector u. The unit vector in the direction of u is given by

e_u = u / |u| .    (4.2.12)

Therefore the cosine of the angle between u and v is given by

cos φ = (u · v) / (|u| |v|) .    (4.2.13)

The absolute value of a vector is its Euclidean norm and is computed by

|u| = √(u · u) , and |v| = √(v · v) .    (4.2.14)

These formulae rewritten with the base vectors g_i and g^i simplify in index notation to

|u| = √(u_i g^i · u^k g_k) = √(u_i u^k δ^i_k) = √(u_i u^i) ,    (4.2.15)
|v| = √(v^i v_i) .    (4.2.16)

The cosine between two vectors in the 3-dimensional Euclidean vector space E^3 is defined by

cos φ = u^i v_i / (√(u_j u^j) √(v_k v^k)) = u_i v^i / (√(u_j u^j) √(v_k v^k)) .    (4.2.17)

For example, the scalar product of two vectors w.r.t. the Cartesian basis g_i = e_i = e^i …

The cross product of two base vectors is given by

g_i × g_j = α g^k ,    (4.2.18)

with the conditions

i ≠ j ≠ k ,
i, j, k = 1, 2, 3 , or another even permutation of i, j, k.

4.2.3 The Permutation Symbol in Cartesian Coordinates

The cross products of the Cartesian base vectors e_i in the 3-dimensional Euclidean vector space E^3 are given by

e_1 × e_2 = e^3 = e_3 ,    (4.2.19)
e_2 × e_3 = e^1 = e_1 ,    (4.2.20)
e_3 × e_1 = e^2 = e_2 ,    (4.2.21)
e_2 × e_1 = −e^3 = −e_3 ,    (4.2.22)
e_3 × e_2 = −e^1 = −e_1 ,    (4.2.23)
e_1 × e_3 = −e^2 = −e_2 .    (4.2.24)

The Cartesian components of a permutation tensor, or just the permutation symbols, are defined by

e_{ijk} = +1 , if (i, j, k) is an even permutation of (1, 2, 3),
e_{ijk} = −1 , if (i, j, k) is an odd permutation of (1, 2, 3),    (4.2.25)
e_{ijk} = 0 , if two or more indices are equal.
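The definition (4.2.25) and the cross products (4.2.19)–(4.2.24) can be verified together in a few lines of Python/NumPy. This is an added sketch, not from the notes; indices run 1..3 as in the text.

```python
import numpy as np

def perm_symbol(i, j, k):
    """Permutation symbol e_ijk of eq. (4.2.25), indices 1..3."""
    if (i, j, k) in {(1, 2, 3), (2, 3, 1), (3, 1, 2)}:
        return 1   # even permutation of (1, 2, 3)
    if (i, j, k) in {(3, 2, 1), (1, 3, 2), (2, 1, 3)}:
        return -1  # odd permutation of (1, 2, 3)
    return 0       # two or more indices are equal

e = np.eye(3)  # Cartesian base vectors e_1, e_2, e_3 as rows

# Check e_i x e_j = e_ijk e_k for all index pairs, eqs. (4.2.19)-(4.2.24).
for i in range(1, 4):
    for j in range(1, 4):
        lhs = np.cross(e[i - 1], e[j - 1])
        rhs = sum(perm_symbol(i, j, k) * e[k - 1] for k in range(1, 4))
        assert np.allclose(lhs, rhs)
```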
Thus, returning to equations (4.2.19)-(4.2.24), the cross products of the Cartesian base vectors can be described by the permutation symbols like this,

e_i × e_j = e_{ijk} e^k .    (4.2.26)

For example

e_1 × e_2 = e_{121} e^1 + e_{122} e^2 + e_{123} e^3 = 0 · e^1 + 0 · e^2 + 1 · e^3 = e^3 ,
e_1 × e_3 = e_{131} e^1 + e_{132} e^2 + e_{133} e^3 = 0 · e^1 + (−1) · e^2 + 0 · e^3 = −e^2 .

4.2.4 Definition of the Scalar Triple Product of Base Vectors

Starting again with the cross product of base vectors, see equation (4.2.18),

g_i × g_j = α g^k ,    (4.2.27)

i ≠ j ≠ k ,
i, j, k = 1, 2, 3 , or another even permutation of i, j, k.

The g^k are the contravariant base vectors and the scalar quantity α is computed by multiplication of equation (4.2.18) with the covariant base vector g_k,

(g_i × g_j) · g_k = α g^k · g_k ,    (4.2.28)
[g_1, g_2, g_3] = α δ^k_k = 3α .    (4.2.29)

This result is the so-called scalar triple product of the base vectors,

α = [g_1, g_2, g_3] .    (4.2.30)

This scalar triple product α of the base vectors g_i for i = 1, 2, 3 represents the volume of the parallelepiped formed by the three vectors g_i for i = 1, 2, 3. Comparing equations (4.2.28) and (4.2.29) implies for the contravariant base vectors

g^k = (g_i × g_j) / [g_1, g_2, g_3] ,    (4.2.31)

and for the covariant base vectors

g_k = (g^i × g^j) / [g^1, g^2, g^3] .    (4.2.32)

Furthermore the product of the two scalar triple products of base vectors is given by

[g_1, g_2, g_3] · [g^1, g^2, g^3] = α · (1/α) = 1 .    (4.2.33)

4.2.5 Introduction of the Determinant with the Permutation Symbol

The scalar quantity α in the section above can also be described by the square root of the determinant of the covariant metric coefficients,

α = (det g_{ij})^{1/2} = √g .    (4.2.34)

The determinant of a 3 × 3 matrix can be represented by the permutation symbols e_{ijk},

α = det [a_{mn}] = | a_11 a_12 a_13 ; a_21 a_22 a_23 ; a_31 a_32 a_33 | = a_{1i} a_{2j} a_{3k} e^{ijk} .    (4.2.35)

Computing the determinant by expanding about the first row implies

α = a_11 | a_22 a_23 ; a_32 a_33 | − a_12 | a_21 a_23 ; a_31 a_33 | + a_13 | a_21 a_22 ; a_31 a_32 | ,

and finally

α = a_11 a_22 a_33 − a_11 a_32 a_23
  − a_12 a_21 a_33 + a_12 a_31 a_23
  + a_13 a_21 a_32 − a_13 a_31 a_22 .    (4.2.36)

The alternative way with the permutation symbol is given by

α = a_11 a_22 a_33 e^{123} + a_11 a_23 a_32 e^{132}
  + a_12 a_23 a_31 e^{231} + a_12 a_21 a_33 e^{213}
  + a_13 a_21 a_32 e^{312} + a_13 a_22 a_31 e^{321} ,

and after inserting the values of the various permutation symbols,

α = a_11 a_22 a_33 · 1 + a_11 a_23 a_32 · (−1)
  + a_12 a_23 a_31 · 1 + a_12 a_21 a_33 · (−1)
  + a_13 a_21 a_32 · 1 + a_13 a_22 a_31 · (−1) ,

and finally the result is equal to the first way of computing the determinant, see equation (4.2.36),

α = a_11 a_22 a_33 − a_11 a_23 a_32
  + a_12 a_23 a_31 − a_12 a_21 a_33
  + a_13 a_21 a_32 − a_13 a_22 a_31 .    (4.2.37)

Equation (4.2.35) can be written with contravariant elements, too,

α* = a^{1i} a^{2j} a^{3k} e_{ijk} .    (4.2.38)
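Formula (4.2.35) is straightforward to implement. The Python/NumPy sketch below (added here for illustration; the matrix A is an arbitrary example) evaluates a_{1i} a_{2j} a_{3k} e^{ijk} with `einsum` and compares it with the built-in determinant.

```python
import numpy as np

def det_via_permutation(a):
    """det[a_mn] = a_1i a_2j a_3k e^{ijk}, eq. (4.2.35)."""
    eps = np.zeros((3, 3, 3))
    eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0   # even permutations
    eps[2, 1, 0] = eps[0, 2, 1] = eps[1, 0, 2] = -1.0  # odd permutations
    return np.einsum('i,j,k,ijk->', a[0], a[1], a[2], eps)

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 1.0]])

assert np.isclose(det_via_permutation(A), np.linalg.det(A))
```

Expanding by hand as in (4.2.36) gives 2·(3·1 − 1·0) − 1·(0·1 − 1·1) + 0 = 7 for this example.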
The matrix of the covariant metric coefficients is the inverse of the matrix of the contravariant metric coefficients and vice versa,

g_{ij} g^{jk} = δ_i^k ,    (4.2.39)
det(g_{ij} g^{jk}) = det[g_{ij}] det[g^{jk}] = det[δ_i^k] = 1 .    (4.2.40)

The cross product of two vectors a and b is given in index notation by

a × b = a^i g_i × b^j g_j = a^i b^j e_{ijk} [g_1, g_2, g_3] g^k ,

or in determinant form

a × b = [g_1, g_2, g_3] | a^1 a^2 a^3 ; b^1 b^2 b^3 ; g^1 g^2 g^3 | = [g_1, g_2, g_3] | g^1 g^2 g^3 ; a^1 a^2 a^3 ; b^1 b^2 b^3 | .    (4.2.49)

Two scalar triple products are defined by

[a, b, c] = (a × b) · c ,    (4.2.50)

and

[d, e, f] = (d × e) · f ,    (4.2.51)

and the first one of these scalar triple products is given by … The element (1, 1) of the product matrix A B, with respect to the product rule of determinants det A det B = det (A B), is given by

a_1 d^1 + a_2 d^2 + a_3 d^3 = a_i g^i · d^j g_j = a_i d^j δ^i_j = a_i d^i = a · d .    (4.2.56)
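The general-basis cross product formula a × b = a^i b^j e_{ijk} [g_1, g_2, g_3] g^k and the identity α = [g_1, g_2, g_3] = √(det g_{ij}) of (4.2.34) can be tested numerically. The Python/NumPy sketch below is an added illustration; the right-handed basis G and the component vectors are arbitrary examples.

```python
import numpy as np

G = np.array([[2.0, 0.0, 0.0],
              [1.0, 1.0, 0.0],
              [0.0, 0.0, 3.0]])   # covariant base vectors g_i as rows
Gd = np.linalg.inv(G @ G.T) @ G   # contravariant (dual) base vectors g^k

triple = np.dot(np.cross(G[0], G[1]), G[2])   # [g1, g2, g3]
g = np.linalg.det(G @ G.T)                    # det g_ij

# alpha = [g1, g2, g3] = sqrt(det g_ij), eq. (4.2.34)
assert np.isclose(triple, np.sqrt(g))

# Permutation symbols e_ijk, eq. (4.2.25), zero-based storage.
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[2, 1, 0] = eps[0, 2, 1] = eps[1, 0, 2] = -1.0

# a x b = a^i b^j e_ijk [g1, g2, g3] g^k for arbitrary contravariant components
ai = np.array([1.0, 2.0, -1.0])
bj = np.array([0.5, 1.0, 2.0])
a = ai @ G
b = bj @ G
rhs = triple * np.einsum('i,j,ijk,kl->l', ai, bj, eps, Gd)
assert np.allclose(np.cross(a, b), rhs)
```

For this basis [g_1, g_2, g_3] = 6 and det g_{ij} = 36, so both routes agree.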
After expanding the determinant and setting k = n,

ε^{ijk} ε_{lmk} = δ_l^i δ_m^j − δ_m^i δ_l^j ,    (4.2.77)

and if i = l and j = m,

ε^{ijk} ε_{ijn} = 2 δ_n^k ,    (4.2.78)

and if all three indices are equal,

ε^{ijk} ε_{ijk} = e^{ijk} e_{ijk} = 2 δ_k^k = 6 .    (4.2.79)

4.2.9 The Dyadic Product or the Direct Product of Vectors

The dyadic product of two vectors a and b ∈ V defines a so-called simple second order tensor with rank 1 in the tensor space V ⊗ V* over the vector space V,

T* = a ⊗ b , and T* ∈ V ⊗ V* .    (4.2.80)

This tensor describes a linear mapping of the vector v ∈ V with the scalar product by

T* · v = (a ⊗ b) · v = a (b · v) .    (4.2.81)

The dyadic product a ⊗ b can be represented by a matrix, for example with a, b ∈ ℝ^3 and T* ∈ ℝ^3 ⊗ ℝ^3,

T* = a b^T = [a_1 ; a_2 ; a_3]_{3×1} [b_1 b_2 b_3]_{1×3}    (4.2.82)
   = [a_1 b_1 a_1 b_2 a_1 b_3 ; a_2 b_1 a_2 b_2 a_2 b_3 ; a_3 b_1 a_3 b_2 a_3 b_3]_{3×3} .    (4.2.83)

The rank of this mapping is 1, i.e. det T* = 0 for the (3 × 3) matrix and det T*_i = 0 for the three (2 × 2) submatrices, i = 1, 2, 3. The mapping T* v denotes in matrix notation

[a_1 b_1 a_1 b_2 a_1 b_3 ; a_2 b_1 a_2 b_2 a_2 b_3 ; a_3 b_1 a_3 b_2 a_3 b_3] [v_1 ; v_2 ; v_3] = [a_1 Σ_i b_i v_i ; a_2 Σ_i b_i v_i ; a_3 Σ_i b_i v_i] ,    (4.2.84)

or

(a b^T) v = T* v = a (b^T v) .    (4.2.85)

The proof that the dyadic product is a tensor starts with the assumption, see equations (T4) and (T5),

(a ⊗ b) (αu + βv) = α (a ⊗ b) · u + β (a ⊗ b) · v .    (4.2.86)

Equation (4.2.86) rewritten with the mapping T* is given by

T* (αu + βv) = α T* (u) + β T* (v) .    (4.2.87)

With the definition of the mapping T* it follows that

T* (αu + βv) = (a ⊗ b) (αu + βv) = a [b · (αu + βv)]
             = a [α b · u + β b · v]
             = α [a (b · u)] + β [a (b · v)]
             = α T* (u) + β T* (v) .

The vectors a and b are represented by the base vectors g_i (covariant) and g^j (contravariant),

a = a^i g_i , b = b_j g^j , and g_i , g^j ∈ V .    (4.2.88)

The dyadic product, i.e. the mapping T*, is then defined by

T* = a ⊗ b = a^i b_j g_i ⊗ g^j = T*^i_j g_i ⊗ g^j ,    (4.2.89)

with g_i ⊗ g^j the dyadic product of the base vectors, and with the conditions

det [T*^i_j] = 0 , and rank [T*^i_j] = 1 .    (4.2.90)

The mapping T* maps the vector v = v^k g_k onto the vector w,

w = T* · v = (T*^i_j g_i ⊗ g^j) · v^k g_k
  = T*^i_j v^k (g^j · g_k) g_i
  = T*^i_j v^k δ^j_k g_i ,

w = T*^i_j v^j g_i = w^i g_i .    (4.2.91)
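The matrix picture of equations (4.2.81)–(4.2.85) and the rank-1 condition (4.2.90) can be checked directly. The Python/NumPy block below is an added sketch with arbitrarily chosen Cartesian components:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 0.0, 1.0])
v = np.array([2.0, 1.0, 1.0])

T = np.outer(a, b)      # T* = a b^T, eqs. (4.2.82)-(4.2.83)

# A simple (dyadic) tensor has rank 1 and vanishing determinant, eq. (4.2.90).
assert np.linalg.matrix_rank(T) == 1
assert np.isclose(np.linalg.det(T), 0.0)

# (a (x) b) . v = a (b . v), eq. (4.2.81): here b . v = 9.
assert np.allclose(T @ v, a * (b @ v))
```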
4.3 Tensors

4.3.1 Introduction of a Second Order Tensor

With the definition of linear mappings f in chapter (2.8), the definition of vector spaces of linear mappings L over the vector space V, and the definition of dyads it is possible to define the second order tensor. The original definition of a tensor was the description of a stress state at a point and time in a continuum, e.g. a fluid or a solid, given by Cauchy. The stress tensor or the Cauchy stress tensor T at a point P assigns a stress vector σ(P) to an arbitrarily oriented section, given by a normal vector at the point P. The resulting stress vector t^(n)(P) at an arbitrarily …

[Figure: the stress tetrahedron at the point P in the axes x_1, x_2, x_3, with the differential forces dF^(i) = t^(i) dA^(i), the area elements dA^(i), dA^(n), and the normal vectors n^(i), n.]
4.3.2.0.21 Linearity for the Vectors.

1. Axiom of Second Order Tensors. The tensor (the linear mapping) T ∈ V ⊗ V maps the vector u ∈ V onto the same space V,

T (u) = T · u = Tu = v ; ∀u ∈ V ; v ∈ V .    (T1)

This mapping is the same as the mapping of a vector with a square matrix in a space with Cartesian coordinates.

2. Axiom of Second Order Tensors. The action of the zero tensor 0 on any vector u maps the vector onto the zero vector,

0 · u = 0u = 0 ; u, 0 ∈ V .    (T2)

3. Axiom of Second Order Tensors. The unit tensor 1 sends any vector into itself,

1 · u = 1u = u ; u ∈ V , 1 ∈ V ⊗ V .    (T3)

4. Axiom of Second Order Tensors. The multiplication by a tensor is distributive with respect to vector addition,

T (u + v) = Tu + Tv ; ∀u, v ∈ V .    (T4)

5. Axiom of Second Order Tensors. If the vector u is multiplied by a scalar, then the linear mapping is denoted by

T (αu) = α Tu ; ∀u ∈ V , α ∈ ℝ .    (T5)

4.3.2.0.22 Linearity for the Tensors.

6. Axiom of Second Order Tensors. The multiplication with the sum of tensors of the same space is distributive,

(T_1 + T_2) · u = T_1 · u + T_2 · u ; ∀u ∈ V , T_1, T_2 ∈ V ⊗ V .    (T6)

7. Axiom of Second Order Tensors. The multiplication of a tensor by a scalar is linear, like in equation (T5) the multiplication of a vector by a scalar,

(αT) · u = T · (αu) ; ∀u ∈ V , α ∈ ℝ .    (T7)

8. Axiom of Second Order Tensors. The action of tensors on a vector is associative, …

4.3.3 The Complete Second Order Tensor

The second order tensor T ∈ V ⊗ V is defined as a linear combination of n dyads, and its rank is n,

T = Σ_{i=1}^{n} T*_i = Σ_{i=1}^{n} a_i ⊗ b_i ,    (4.3.2)
T = a^i ⊗ b_i = a_i ⊗ b^i ,    (4.3.3)

and det T ≠ 0 , if the vectors a_i and b_i are linearly independent.

If the vectors a_i and b_i are represented with the base vectors g_i and g^i ∈ V like this,

a_i = a_{ij} g^j ; b_i = b_i^l g_l ,    (4.3.4)

then the second order tensor is given by

T = a_{ij} b_i^l g^j ⊗ g_l ,    (4.3.5)

and finally the complete second order tensor is given by

T = T_j^l g^j ⊗ g_l — the mixed formulation of a second order tensor.    (4.3.6)

The dyadic product of the base vectors includes one co- and one contravariant base vector. The mixed components T_j^l of the tensor in the mixed formulation are written with one co- and one contravariant index, too,

det T_j^l ≠ 0 .    (4.3.7)

If the contravariant base vector is transformed with the metric coefficients,

g^l = g^{lk} g_k ,    (4.3.8)

the tensor T changes like
w = T · v = T^{ij} (g_i ⊗ g_j) · v^k g_k    (4.3.16)
  = T^{ij} v^k g_{jk} g_i = T^{ij} v_j g_i ,

w = w^i g_i , and w^i = T^{ij} v_j .    (4.3.17)

In the same way the other representation of the vector w is given by

w = T · v = T_i^j (g^i ⊗ g_j) · v_k g^k    (4.3.18)
  = T_i^j v_k δ_j^k g^i = T_i^j v_j g^i ,

w = w_i g^i , and w_i = T_i^j v_j .    (4.3.19)

The relation between the two covariant bases g_i and ḡ_i can be written with a second order tensor like

ḡ_i = A · g_i .    (4.4.2)

If this linear mapping exists, then the coefficients of the transformation tensor A are given by

ḡ_i = 1 ḡ_i = (g_k ⊗ g^k) ḡ_i    (4.4.3)
    = (g^k · ḡ_i) g_k = A^k_i g_k ,

ḡ_i = A^k_i g_k , and A^k_i = g^k · ḡ_i .    (4.4.4)

The complete tensor A in the mixed formulation is then defined by

A = (g^k · ḡ_i) g_k ⊗ g^i = A^k_i g_k ⊗ g^i .    (4.4.5)

Inserting equation (4.4.5) in (4.4.2) gives the transformation (4.4.4) again,

ḡ_m = A^k_i (g_k ⊗ g^i) g_m = A^k_i δ^i_m g_k = A^k_m g_k ,    (4.4.6)
ḡ_i = A g_i .    (4.4.7)

This existence results out of the linear independence. The "retransformation" tensor Ā is again defined by the multiplication with the unit tensor 1,

g_i = 1 g_i = (ḡ_k ⊗ ḡ^k) g_i    (4.4.8)
    = (ḡ^k · g_i) ḡ_k = Ā^k_i ḡ_k ,

g_i = Ā^k_i ḡ_k , and Ā^k_i = ḡ^k · g_i .    (4.4.9)
Inserting equation (4.4.10) in (4.4.7) implies again the transformation relation (4.4.9),

g_m = Ā^k_i (ḡ_k ⊗ ḡ^i) g_m = Ā^k_i δ^i_m ḡ_k = Ā^k_m ḡ_k .    (4.4.11)

The tensor Ā is the inverse of A and vice versa. This is a result of equation (4.4.7),

A^{-1} · | ḡ_i = A · g_i ,    (4.4.12)
⇒ A^{-1} · ḡ_i = g_i .    (4.4.13)

Comparing this with equation (4.4.2) implies

Ā = A^{-1} ⇒ Ā · A = 1 ,    (4.4.14)

and in the same way

A = Ā^{-1} ⇒ A · Ā = 1 .    (4.4.15)

In index notation with equations (4.4.4) and (4.4.9) the relation between the "normal" and the "overlined" coefficients of the transformation tensor is given by

g_i = Ā^k_i ḡ_k = Ā^k_i A^m_k g_m | · g^j ,    (4.4.16)
δ_i^j = Ā^k_i A^m_k δ_m^j ,
δ_i^j = Ā^k_i A^j_k .    (4.4.17)

The transformation of the contravariant basis works in the same way. If in equations (4.4.3) or (4.4.8) the metric tensor of covariant coefficients is used instead of the identity tensor, then another representation of the transformation tensor is described by

ḡ_m = 1 ḡ_m = g_{ik} (g^i ⊗ g^k) ḡ_m    (4.4.18)
    = g_{ik} A^k_m g^i ,

ḡ_m = A_{im} g^i , and A_{im} = g_{ik} A^k_m .    (4.4.19)

If the transformed covariant base vectors ḡ_m should be represented by the contravariant base vectors g^i, then the complete tensor of transformation is given by

A = (g_i · ḡ_k) g^i ⊗ g^k = A_{ik} g^i ⊗ g^k .    (4.4.20)

The inverse transformation tensor Ā is given by an equation developed in the same way as equations (4.4.16) and (4.4.8). This inverse tensor is denoted and defined by

Ā = A^{-1} = (g^i · ḡ^k) g_i ⊗ g_k = Ā^{ik} g_i ⊗ g_k .    (4.4.21)

4.4.2 Collection of Transformations of Basis

There is a large number of transformation relations between the co- and contravariant bases of both systems of coordinates. Transformations from the "normal basis" to the "overlined basis", like g_i → ḡ_i and g^k → ḡ^k, are given by the following equations. First the relation between the covariant base vectors in both systems of coordinates is defined by

ḡ_i = 1 ḡ_i = (g_k ⊗ g^k) ḡ_i = (g^k · ḡ_i) g_k = A^k_i g_k .    (4.4.22)

With this relationship the transformed (overlined) covariant base vectors are represented by the covariant base vectors,

ḡ_i = A g_i , and A = (g^k · ḡ_m) g_k ⊗ g^m = A^k_m g_k ⊗ g^m ,    (4.4.23)
ḡ_i = (g^k · ḡ_i) g_k = A^k_i g_k ,    (4.4.24)

and the transformed (overlined) covariant base vectors are represented by the contravariant base vectors,

ḡ_i = A g_i , and A = (g_k · ḡ_m) g^k ⊗ g^m = A_{km} g^k ⊗ g^m ,    (4.4.25)
ḡ_i = (g_k · ḡ_i) g^k = A_{ki} g^k .    (4.4.26)

The relation between the contravariant base vectors in both systems of coordinates is defined by

ḡ^i = 1 ḡ^i = (g^k ⊗ g_k) ḡ^i = (g_k · ḡ^i) g^k = B_k^i g^k .    (4.4.27)

With this relationship the transformed (overlined) contravariant base vectors are represented by the contravariant base vectors,

ḡ^i = B g^i , and B = (g_k · ḡ^m) g^k ⊗ g_m = B_k^m g^k ⊗ g_m ,    (4.4.28)
ḡ^i = (g_k · ḡ^i) g^k = B_k^i g^k ,    (4.4.29)

and the transformed (overlined) contravariant base vectors are represented by the covariant base vectors,

ḡ^i = B g^i , and B = (g^k · ḡ^m) g_k ⊗ g_m = B^{km} g_k ⊗ g_m ,    (4.4.30)
ḡ^i = (g^k · ḡ^i) g_k = B^{ki} g_k .    (4.4.31)

The inverse relations ḡ_i → g_i and ḡ^k → g^k, representing the "retransformations" from the transformed (overlined) to the "normal" system of coordinates, are given by the following equations. The inverse transformation between the covariant base vectors of both systems of coordinates is denoted and defined by

g_i = 1 g_i = (ḡ_k ⊗ ḡ^k) · g_i = (ḡ^k · g_i) ḡ_k = Ā^k_i ḡ_k .    (4.4.32)
With this relationship the covariant base vectors are represented by the transformed (overlined) covariant base vectors,

g_i = Ā ḡ_i , and Ā = (ḡ^k · g_m) ḡ_k ⊗ ḡ^m = Ā^k_m ḡ_k ⊗ ḡ^m ,    (4.4.33)
g_i = (ḡ^k · g_i) ḡ_k = Ā^k_i ḡ_k ,    (4.4.34)

and the covariant base vectors are represented by the transformed (overlined) contravariant base vectors,

g_i = Ā ḡ_i , and Ā = (ḡ_m · g_k) ḡ^m ⊗ ḡ^k = Ā_{mk} ḡ^m ⊗ ḡ^k ,    (4.4.35)
g_i = (ḡ_k · g_i) ḡ^k = Ā_{ki} ḡ^k .    (4.4.36)

The inverse relation between the contravariant base vectors in both systems of coordinates is defined by

g^i = 1 g^i = (ḡ^k ⊗ ḡ_k) g^i = (ḡ_k · g^i) ḡ^k = B̄_k^i ḡ^k .    (4.4.37)

With this relationship the contravariant base vectors are represented by the transformed (overlined) contravariant base vectors,

g^i = B̄ ḡ^i , and B̄ = (ḡ_k · g^m) ḡ^k ⊗ ḡ_m = B̄_k^m ḡ^k ⊗ ḡ_m ,    (4.4.38)
g^i = (ḡ_k · g^i) ḡ^k = B̄_k^i ḡ^k ,    (4.4.39)

and the contravariant base vectors are represented by the transformed (overlined) covariant base vectors,

g^i = B̄ ḡ^i , and B̄ = (ḡ^k · g^m) ḡ_k ⊗ ḡ_m = B̄^{km} ḡ_k ⊗ ḡ_m ,    (4.4.40)
g^i = (ḡ^k · g^i) ḡ_k = B̄^{ki} ḡ_k .    (4.4.41)

There exist the following relations between the transformation tensors A and Ā,

A Ā = 1 , or Ā A = 1 ,    (4.4.42)
A^m_i Ā^k_m = δ^k_i , i.e. Ā^m_i A^k_m = δ^k_i ,    (4.4.43)

and for the inverse transformation tensors B and B̄,

B B̄ = 1 , or B̄ B = 1 ,    (4.4.44)
B^i_m B̄^m_k = δ^i_k , i.e. B̄^i_m B^m_k = δ^i_k .    (4.4.45)

Furthermore there exists a relation between the transformation tensor A and the retransformation tensor B̄,

A^m_i B̄^k_m = δ^k_i ; B̄^k_m = Ā^k_m ,    (4.4.46)

and a relation between the transformation tensor B and the retransformation tensor Ā,

B̄^k_m = A^k_m .    (4.4.47)

4.4.3 The Tensor Product of Second Order Tensors

The vector v is defined by the action of the linear mapping given by T on the vector u,

v = T · u = Tu , with u, v ∈ V , and T ∈ V ⊗ V* .    (4.4.48)

In index notation with the covariant base vectors g_i ∈ V equation (4.4.48) reads

v = T^{mk} (g_m ⊗ g_k) (u^r g_r)
  = T^{mk} u^r (g_k · g_r) g_m ,

and with lowering an index, see (4.1.39),

  = T^{mk} u^r g_{kr} g_m ,
v = T^{mk} u_k g_m = v^m g_m .    (4.4.49)

Furthermore with the linear mapping

w = Sv , with w ∈ V , and S ∈ V ⊗ V* ,    (4.4.50)

and the linear mapping (4.4.48) the associative law for the linear mappings holds,

w = Sv = S (T · u) = (ST) u .    (4.4.51)

The second linear mapping w = Sv in the mixed formulation with the contravariant base vectors g^j ∈ V is given by

S = S^i_j g_i ⊗ g^j .    (4.4.52)

Then the vector w in index notation with the results of equations (4.4.49), (4.4.51) and (4.4.52) is rewritten as

w = S (Tu) = (S^i_j g_i ⊗ g^j) (T^{mk} u_k g_m)
  = S^i_j T^{mk} u_k δ^j_m g_i
  = S^i_j T^{jk} u_k g_i = w^i g_i ,

and the coefficients of the vector are given by

w^i = S^i_j T^{jk} u_k .    (4.4.53)

For the second order tensor product ST there exist in general four representations with all possible combinations of base vectors,

S · T = S^i_m T^{mk} g_i ⊗ g_k    covariant basis,    (4.4.54)
S · T = S_{im} T^m_k g^i ⊗ g^k    contravariant basis,    (4.4.55)
S · T = S^{im} T_{mk} g_i ⊗ g^k    mixed basis,    (4.4.56)

and

S · T = S_{im} T^{mk} g^i ⊗ g_k    mixed basis.    (4.4.57)
Lemma 4.4. The result of the tensor product of two dyads is the scalar product of the inner vectors of the dyads and the dyadic product of the outer vectors of the dyads. The tensor product of two dyads of vectors is denoted by

(a ⊗ b) (c ⊗ d) = (b · c) a ⊗ d .    (4.4.58)

With this rule the index notation of equations (4.4.53) up to (4.4.57) is easily computed; for example equation (4.4.53) or (4.4.54) implies

ST = (S^i_m g_i ⊗ g^m) (T^{nk} g_n ⊗ g_k)
   = S^i_m T^{nk} δ^m_n g_i ⊗ g_k ,

and finally

ST = S^i_m T^{mk} g_i ⊗ g_k .    (4.4.59)

The "multiplication" or composition of two linear mappings S and T is called a tensor product,

P = S · T = ST    (tensor · tensor = tensor).    (4.4.60)

The linear mappings w = Sv and v = Tu with the vectors u, v, and w ∈ V are composed like

w = Sv = S (Tu) = (ST) u = STu = Pu .    (4.4.61)

This "multiplication" is, as in matrix calculus, but unlike in "normal" algebra (a · b = b · a), noncommutative, i.e.

ST ≠ TS .    (4.4.62)

For the three second order tensors R, S, T ∈ V ⊗ V* and the scalar quantity α ∈ ℝ the following identities for tensor products hold.

Multiplication by a scalar quantity:

α (ST) = (αS) T = S (αT)    (4.4.63)

Multiplication by the identity tensor:

1T = T1 = T    (4.4.64)

Distributive law for the tensor product:

(R + S) T = RT + ST    (4.4.67)

In general cases NO commutative law:

ST ≠ TS    (4.4.68)

Transpose of a tensor product:

(ST)^T = T^T S^T    (4.4.69)

Inverse of a tensor product:

(ST)^{-1} = T^{-1} S^{-1} , if S and T are nonsingular.    (4.4.70)

Determinant of a tensor product:

det (ST) = det S det T    (4.4.71)

Trace of a tensor product:

tr (ST) = tr (TS)    (4.4.72)

Proof of equation (4.4.63).

α (ST) = (αS) T = S (αT) ,

with the assumption

(αS) v = α (Sv) , with v ∈ V ,    (4.4.73)

with this
1T = T1 = T , with 1 ∈ V ⊗ V,

with the assumption of equation (4.4.61),

S (Tv) = (ST) v ,    (4.4.75)

and the identity

1v = v ,    (4.4.76)

this implies

(1T) v = 1 (Tv) = Tv ,
(T1) v = T (1v) = Tv ,

and finally

1T = T1 = T .    (4.4.77)

[(RS) T] v = (RS) (Tv) = (RS) w , with v, w ∈ V ,    (4.4.78)

and with equation (4.4.61) again,

(RS) T = R (ST) .    (4.4.79)

(R + S) T = RT + ST ,

with the well known condition for a linear mapping,

(R + S) v = Rv + Sv ,    (4.4.80)

with this and equation (4.4.61),

[(R + S) T] v = (R + S) (Tv) = R (Tv) + S (Tv) = (RT) v + (ST) v ,

and finally

(R + S) T = RT + ST .    (4.4.81)

(ST)^T = T^T S^T ,

and this equation only holds, if

((ST)^T)^T = (T^T S^T)^T = ST .    (4.4.85)

And if the inverses T^{-1} and S^{-1} exist, then
with equation (4.4.88) inserted in (4.4.89) and comparing with equation (4.4.87),

T (ST)^{-1} = S^{-1} ,    (4.4.90)
T^{-1} [T (ST)^{-1}] = T^{-1} S^{-1} ,    (4.4.91)

and with equations (4.4.61) and (4.4.90),

T^{-1} [T (ST)^{-1}] = (T^{-1} T) (ST)^{-1} = 1 (ST)^{-1} ,    (4.4.92)

and finally, comparing this with equation (4.4.91),

(ST)^{-1} = T^{-1} S^{-1} .    (4.4.93)

4.4.4 The Scalar Product or Inner Product of Tensors

The scalar product of tensors is defined by

T : (v ⊗ w) = vTw , with v, w ∈ V , and T ∈ V ⊗ V* .    (4.4.94)

For the three second order tensors R, S, T ∈ V ⊗ V* and the scalar quantity α ∈ ℝ the following identities hold. …

Absolute value or norm of a tensor:

|T| = √(tr (T T^T))    (4.4.100)

The Schwarz inequality:

|TS| ≤ |T| |S|    (4.4.101)

For the norms of tensors, as for the norms of vectors, the following identities hold,

|αT| = |α| |T| ,    (4.4.102)
|T + S| ≤ |T| + |S| .    (4.4.103)

And as a rule of thumb:

Lemma 4.5. The result of the scalar product of two dyads is the scalar product of the first vectors of each dyad and the scalar product of the second vectors of each dyad. The scalar product of two dyads of vectors is denoted by

(a ⊗ b) : (c ⊗ d) = (a · c) (b · d) .    (4.4.104)

With this rule the index notation of equation (4.4.104) implies for example

S : T = (S^{im} g_i ⊗ g_m) : (T_{nk} g^n ⊗ g^k)
      = S^{im} T_{nk} δ^n_i δ^k_m
It is not absolutely correct to speak about the determinant of a tensor, because it is only the determinant of the coefficients of the tensor in Cartesian coordinates and not of the whole tensor itself. For the different notations of a tensor with covariant, contravariant and mixed coefficients the determinant is given by

det T = det [T^{ij}] = det [T_{ij}] = det [T^i_j] = det [T_i^j] .    (4.5.1)

Expanding the determinant of the coefficient matrix of a tensor T works just the same as for any other matrix. For example the determinant can be described with the permutation symbol ε, like in equation (4.2.35),

det T = det [T_{mn}] = | T_11 T_12 T_13 ; T_21 T_22 T_23 ; T_31 T_32 T_33 | = T_{1i} T_{2j} T_{3k} ε^{ijk} .    (4.5.2)

Some important identities are given without proof,

det (αT) = α^3 det T ,    (4.5.3)
det (TS) = det T det S ,    (4.5.4)
det T^T = det T ,    (4.5.5)
(det Q)^2 = 1 , if Q is an orthogonal tensor,    (4.5.6)
det T^{-1} = (det T)^{-1} , if T^{-1} exists.    (4.5.7)

The trace of a product of two tensors S and T is defined by

tr (S T^T) = S : T ,    (4.5.12)

and is easy to prove just by writing it in index notation. Starting with this, some more important identities can be found,

tr T = tr T^T ,    (4.5.13)
tr (ST) = tr (TS) ,    (4.5.14)
tr (RST) = tr (TRS) = tr (STR) ,    (4.5.15)
tr [T (R + S)] = tr (TR) + tr (TS) ,    (4.5.16)
tr [(αS) T] = α tr (ST) ,    (4.5.17)
T : T = tr (T T^T) { > 0 , if T ≠ 0 ; = 0 , iff T = 0 } , i.e. the scalar product of tensors is positive definite,    (4.5.18)
|T| = √(tr (T T^T)) , the absolute value (norm) of a tensor T,    (4.5.19)

and finally the inequality

|S : T| ≤ |S| |T| .    (4.5.20)
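The determinant and trace identities (4.5.3)–(4.5.17) hold for the Cartesian coefficient matrices, so they can all be spot-checked numerically. The Python/NumPy block below is an added sketch using random example tensors:

```python
import numpy as np

rng = np.random.default_rng(1)
S = rng.standard_normal((3, 3))
T = rng.standard_normal((3, 3))
R = rng.standard_normal((3, 3))
alpha = 2.5
tr = np.trace

assert np.isclose(np.linalg.det(alpha * T), alpha**3 * np.linalg.det(T))       # (4.5.3)
assert np.isclose(np.linalg.det(T @ S), np.linalg.det(T) * np.linalg.det(S))   # (4.5.4)
assert np.isclose(tr(S @ T), tr(T @ S))                                        # (4.5.14)
assert np.isclose(tr(R @ S @ T), tr(T @ R @ S))                                # (4.5.15), cyclic
assert np.isclose(np.sum(S * T), tr(S @ T.T))                                  # S : T = tr(S T^T), (4.5.12)
```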
4.5.4 The Transpose of a Tensor

The transpose T^T of a second order tensor T is defined by … are equal, q.e.d. For the transpose of a tensor the following identities hold,

(a ⊗ b)^T = (b ⊗ a) ,    (4.5.29)
(T^T)^T = T ,    (4.5.30)
1^T = 1 ,    (4.5.31)
(S + T)^T = S^T + T^T ,    (4.5.32)
(αT)^T = α T^T ,    (4.5.33)
(S · T)^T = T^T · S^T ,    (4.5.34)

and the relations between the tensor components,

(T^{ij})^T = T^{ji} , or (T_{ij})^T = T_{ji} ,    (4.5.39)

and

(T^i_j)^T = T_j^i , or (T_i^j)^T = T^j_i .    (4.5.40)

4.5.5 The Symmetric and Antisymmetric (Skew) Tensor

There are a lot of different notations for the symmetric part of a tensor T, for example

T^{ij} = T^{ji} , if T is symmetric,    (4.5.45)
T^{ij} = −T^{ji} , if T is antisymmetric.    (4.5.46)

Any second rank tensor can be written as the sum of a symmetric tensor and an antisymmetric tensor,

T = T^S + T^A = sym T + skew T = (1/2)(T + T^T) + (1/2)(T − T^T) .    (4.5.47)

The symmetric part of a tensor is defined by

4.5.6 The Inverse of a Tensor

The inverse of a tensor T exists if, for any two vectors v and w, the expression

w = Tv    (4.5.50)

can be transformed into

v = T^{-1} w .    (4.5.51)
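The split (4.5.47) into symmetric and skew parts is a one-liner in Cartesian components. The Python/NumPy block below is an added illustration with an arbitrary example matrix:

```python
import numpy as np

T = np.array([[1.0, 2.0, 0.0],
              [4.0, 5.0, 6.0],
              [0.0, 8.0, 9.0]])

sym_T = 0.5 * (T + T.T)    # symmetric part, eq. (4.5.47)
skew_T = 0.5 * (T - T.T)   # antisymmetric (skew) part

assert np.allclose(sym_T, sym_T.T)       # T^ij =  T^ji, eq. (4.5.45)
assert np.allclose(skew_T, -skew_T.T)    # T^ij = -T^ji, eq. (4.5.46)
assert np.allclose(sym_T + skew_T, T)    # the split recovers T
```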
Comparing these two equations gives

T T^{-1} = T^{-1} T = 1 , and (T^{-1})^{-1} = T .    (4.5.52)

T^{-1} exists, if and only if det T ≠ 0.    (4.5.53)

The inverse of a product of a scalar and a tensor is defined by

(αT)^{-1} = (1/α) T^{-1} ,    (4.5.54)

and the inverse of a product of two tensors is defined by …

The scalar product of two vectors equals the scalar product of their orthogonal mappings, and for the square of a vector and its orthogonal mapping equation (4.5.59) gives

(Qv)^2 = v^2 .    (4.5.60)

Sometimes even equation (4.5.59) and not (4.5.56) is used as the definition of an orthogonal tensor. The orthogonal tensor Q describes a rotation. For the special case of the Cartesian basis the components of the orthogonal tensor Q are given by the cosine of the rotation angle,

Q = q_{ik} e_i ⊗ e_k ; q_{ik} = cos (∠(e_i ; ē_k)) , and det Q = ±1 .    (4.5.61)

If det Q = +1, then the tensor is called a proper orthogonal tensor or a rotator.

4.5.8 The Polar Decomposition of a Tensor

The polar decomposition of a nonsingular second order tensor T is given by … In the polar decomposition the tensor R = Q is chosen as an orthogonal tensor, i.e. R^T = R^{-1} and det R = ±1. In this case the tensors U and V are positive definite and symmetric tensors. The tensor U is named the right-hand Cauchy strain tensor and V is named the left-hand Cauchy strain tensor. Both describe the strains, e.g. if a ball (a circle) is deformed into an ellipsoid (an ellipse); in contrast, R represents a rotation. Figure (4.6) implies the definition of the vectors

dz = R · dX ,    (4.5.63)

and

dx = V · dz .    (4.5.64)

The composition of these two linear mappings is given by

dx = V · R · dX = F · dX .    (4.5.65)
The other possible way to describe the composition is with the vector dz*,

dx = R · dz* ,    (4.5.66)

and

dz* = U · dX ,    (4.5.67)

and finally

dx = R · U · dX = F · dX .    (4.5.68)

The composed tensor F is called the deformation gradient and its polar decomposition is given by

T ≡ F = R · U = V · R .    (4.5.69)

Figure 4.7: An example of the physical components of a second order tensor (the base vectors g_1, g_2, g_3, the traction t^3, and the area element da_3).
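The notes do not give an algorithm for computing the polar decomposition F = R U = V R; one standard route (an added sketch, not the authors' method) is via the singular value decomposition F = W Σ Z^T, which yields R = W Z^T, U = Z Σ Z^T, V = W Σ W^T. The deformation gradient below is an arbitrary nonsingular example with positive determinant.

```python
import numpy as np

F = np.array([[1.2, 0.3, 0.0],
              [0.1, 0.9, 0.0],
              [0.0, 0.0, 1.1]])   # example deformation gradient, det F > 0

# Polar decomposition F = R U = V R via the singular value decomposition.
W, sigma, Zt = np.linalg.svd(F)
R = W @ Zt                        # orthogonal rotation tensor
U = Zt.T @ np.diag(sigma) @ Zt    # right stretch tensor: symmetric, positive definite
V = W @ np.diag(sigma) @ W.T      # left stretch tensor: symmetric, positive definite

assert np.allclose(R @ U, F)                # F = R U, eq. (4.5.68)/(4.5.69)
assert np.allclose(V @ R, F)                # F = V R
assert np.allclose(R.T @ R, np.eye(3))      # R^T = R^(-1)
assert np.all(np.linalg.eigvalsh(U) > 0)    # U is positive definite
```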
4.5.9 The Physical Components of a Tensor

In general a tensor w.r.t. the covariant basis g_i is given by

T = T^{ik} g_i ⊗ g_k .    (4.5.70)

The physical components T*^{ik} are defined by

T = T*^{ik} (g_i / |g_i|) ⊗ (g_k / |g_k|)
  = (T*^{ik} / (√(g_(i)(i)) √(g_(k)(k)))) g_i ⊗ g_k ,    (4.5.71)

T*^{ik} = T^{ik} √(g_(i)(i)) √(g_(k)(k)) .    (4.5.72)

The stress tensor T = T^{ik} g_i ⊗ g_k is given w.r.t. the basis g_i. Then the associated stress vector t^i w.r.t. a point in the sectional area da_i is defined by

t^i = df^i / da_(i) ; df^i = t^i da_(i) ,    (4.5.73)
t^i = τ^{ik} g_k ; df^i = τ^{ik} g_k da_(i) ,    (4.5.74)

with the differential force df^i. Furthermore the sectional area and its absolute value are given by

da_i = da_(i) g_(i) ,    (4.5.75)

and

|da_i| = da_(i) |g_(i)| ,
|da_i| = da_(i) √(g^(i)(i)) .    (4.5.76)

The definition of the physical stresses τ*^{ik} is given by

df^i = τ*^{ik} da_(i) (g_k / |g_k|) ,
df^i = τ*^{ik} (g_k / √(g_(k)(k))) da_(i) √(g^(i)(i)) .    (4.5.77)

Comparing equations (4.5.74) and (4.5.77) implies

(τ^{ik} − τ*^{ik} √(g^(i)(i)) / √(g_(k)(k))) da_(i) g_k = 0 ,

and finally the definition of the physical components of the stress tensor τ*^{ik} is given by

τ*^{ik} = τ^{ik} √(g_(k)(k)) / √(g^(i)(i)) .    (4.5.78)

4.5.10 The Isotropic Tensor

An isotropic tensor is a tensor which has the same components in every rotated Cartesian coordinate system. Every tensor of order zero, i.e. a scalar quantity, is an isotropic tensor, but no first order tensor, i.e. a vector, can be isotropic. The unique isotropic second order tensor is the Kronecker delta, see section (4.1).
4.6 The Principal Axes of a Tensor

… is described for the problem of the principal axes of a stress tensor, also called the directions of principal stress. The Cauchy stress tensor in Cartesian coordinates is given by … This stress tensor T is symmetric because of the equilibrium condition of moments. With this condition the shear stresses in orthogonal sections are equal.

The stress tensor T assigns the resulting stress vector t^(n) to the direction of the normal vector n perpendicular to the section surface. This linear mapping fulfills the equilibrium conditions with the normal vector

n = n^l e_l ,    (4.6.7)
n · e^k = n^l δ_l^k = n^k = cos (∠(n, e^k)) ,

and the absolute value

|n| = 1 .

[Figure: the stress vectors t^(i) dA^(i) on the faces of a differential tetrahedron with the resulting stress vector t^(n) dA^(n) and the normal n, in the Cartesian axes x_1, x_2, x_3 with base vectors e_1, e_2, e_3.]

The stress vector in direction of n is computed by

t^(n) = (T^{ik} e_i ⊗ e_k) · (n^l e_l)    (4.6.8)
      = T^{ik} n^l (e_k · e_l) e_i
      = T^{ik} n^l δ_{kl} e_i ,

t^(n) = T^{ik} n_k e_i .    (4.6.9)

The action of the tensor T on the normal vector n reduces the order of the second order tensor T (the stress tensor) to a first order tensor t^(n) (i.e. the stress vector in direction of n).

Lemma 4.6 (Principal axes problem). Does a direction n_0 exist in space such that the resulting stress vector t^(n_0) is oriented in this direction, i.e. such that the vector n_0 fulfills the following equations?
TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003 TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003
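In a Cartesian basis the mapping t^(n) = T^{ik} n_k e_i of equation (4.6.9) is a plain matrix-vector product. A small sketch with illustrative, made-up stress components:

```python
import numpy as np

# Symmetric Cauchy stress tensor in a Cartesian basis (illustration values).
T = np.array([[10.0,  2.0,  0.0],
              [ 2.0,  5.0,  1.0],
              [ 0.0,  1.0,  3.0]])

n = np.array([1.0, 0.0, 0.0])   # unit normal of the section surface

t = T @ n                        # t^i = T^{ik} n_k, eq. (4.6.9)
```

The second order tensor T acting on the first order tensor n yields the first order tensor t, here the first column of T.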
Comparing equations (4.6.6) and (4.6.11) leads to T · n_0 = λ 1 · n_0 and therefore to

(T − λ 1) · n_0 = 0. (4.6.12)

For this special case of an eigenvalue problem …

• the directions n_{0j} are called the principal stress directions, and they are given by the eigenvectors,

• and the λ_j = τ_j are called the eigenvalues, resp. in this case the principal stresses.

4.6.2 Components in a Cartesian Basis

Equation (4.6.12) is rewritten in index notation,

(T^{ik} e_i ⊗ e_k − λ δ^{ik} e_i ⊗ e_k) · n_0^l e_l = 0, (4.6.13)
(T^{ik} − λ δ^{ik}) n_0^l (e_k · e_l) e_i = 0,
(T^{ik} − λ δ^{ik}) n_0^l δ_{kl} e_i = 0,
(T^{ik} − λ δ^{ik}) n_{0k} e_i = 0,

and finally

(T^{ik} − λ δ^{ik}) n_{0k} = 0. (4.6.14)

This equation could be represented in matrix notation, because it is given in a Cartesian basis,

([T^{ik}] − λ [δ^{ik}]) [n_{0k}] = [0], (4.6.15)
(T − λ 1) n_0 = 0,

[ T^{11} − λ    T^{12}        T^{13}     ] [ n_{01} ]   [ 0 ]
[ T^{21}        T^{22} − λ    T^{23}     ] [ n_{02} ] = [ 0 ]. (4.6.16)
[ T^{31}        T^{32}        T^{33} − λ ] [ n_{03} ]   [ 0 ]

This is a linear homogeneous system of equations for n_{01}, n_{02} and n_{03}; non-trivial solutions exist, if and only if

det(T − λ 1) = 0, (4.6.17)

or in index notation

det(T^{ik} − λ δ^{ik}) = 0. (4.6.18)

In a general basis, equation (4.6.12) is denoted like this,

(T̃^{ik} g_i ⊗ g_k − λ g^{ik} g_i ⊗ g_k) · n_0^l g_l = 0, (4.6.19)
(T̃^{ik} − λ g^{ik}) n_0^l (g_k · g_l) g_i = 0,
(T̃^{ik} − λ g^{ik}) n_0^l g_{kl} g_i = 0,
(T̃^{ik} − λ g^{ik}) n_{0k} g_i = 0,

and finally in index notation

(T̃^{ik} − λ g^{ik}) n_{0k} = 0,
det(T̃^{ik} − λ g^{ik}) = 0. (4.6.20)

And the same in mixed formulation is given by

(T^i_k − λ δ^i_k) n_0^k = 0,
det(T^i_k − λ δ^i_k) = 0. (4.6.21)

4.6.4 Characteristic Polynomial and Invariants

The characteristic polynomial of an eigenvalue problem with the invariants I_1, I_2, and I_3 in a 3-dimensional space E³ is defined by

f(λ) = I_3 − λ I_2 + λ² I_1 − λ³ = 0. (4.6.22)

For a Cartesian basis equation (4.6.22) becomes a cubic equation, because it is an eigenvalue problem in E³, with the invariants given by

I_1 = tr T = g_{ik} T̃^{ik} = δ_{ik} T^{ik} = T^k_k, (4.6.23)
I_2 = ½ [(tr T)² − tr (T)²] = ½ [T^{ii} T^{kk} − T^{ik} T^{ki}], (4.6.24)
I_3 = det T = det(T^{ik}). (4.6.25)

The fundamental theorem of algebra implies that there exist three roots λ_1, λ_2, and λ_3, such that the following equations hold.
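The invariants (4.6.23)-(4.6.25) and the characteristic polynomial (4.6.22) can be verified numerically: every eigenvalue of T must be a root of f(λ). A sketch with a made-up symmetric tensor in a Cartesian basis:

```python
import numpy as np

T = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 0.0],
              [0.0, 0.0, 2.0]])          # illustration values

# invariants, eqs. (4.6.23)-(4.6.25)
I1 = np.trace(T)
I2 = 0.5 * (np.trace(T)**2 - np.trace(T @ T))
I3 = np.linalg.det(T)

lams = np.linalg.eigvalsh(T)             # the three real roots lambda_1..3

# each eigenvalue is a root of f(lambda) = I3 - lambda I2 + lambda^2 I1 - lambda^3
residuals = I3 - lams * I2 + lams**2 * I1 - lams**3
```

Here I_1 = 9, I_2 = 25, I_3 = 22, and the residuals of f(λ) at the computed eigenvalues vanish up to round-off.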
i.e.

(λ_l − λ_j) n_{0j}^T n_{0l} ≡ 0. (4.6.34)

For distinct eigenvalues, λ_l − λ_j ≠ 0, this equation holds if and only if n_{0j}^T n_{0l} = 0. The conclusion of this relation between the eigenvalues λ_j, λ_l and the normal unit vectors n_{0j}, n_{0l} is that the normal vectors are orthogonal to each other.

4.6.6 Real Eigenvalues of a Symmetric Tensor

Two complex conjugate eigenvalues are denoted by

λ_j = β + iγ, (4.6.35)
λ_l = β − iγ. (4.6.36)

The coordinates of the associated eigenvectors n_{0j} and n_{0l} could be written as column matrices n_{0j} and n_{0l}. Furthermore the relations n_{0j} = b + ic and n_{0l} = b − ic hold. Comparing this with equation (4.6.34) implies

(λ_l − λ_j) n_{0j}^T n_{0l} = 0,
2iγ (b + ic)^T (b − ic) = 0,
2iγ (b^T b + c^T c) = 0, (4.6.37)

and since

b^T b + c^T c ≠ 0, (4.6.38)

it follows that γ = 0, and the eigenvalues are real numbers. The result is that a symmetric stress tensor has three real principal stresses, with the associated directions being orthogonal to each other.

4.6.8 The Eigenvalue Problem in a General Basis

Let T be an arbitrary tensor, given in the basis g_i with i = 1, …, n, and defined by

T = T̃^{ik} g_i ⊗ g_k. (4.6.39)

The identity tensor in Cartesian coordinates, 1 = δ^{ik} e_i ⊗ e_k, is substituted by the identity tensor 1 defined by

1 = g^{ik} g_i ⊗ g_k. (4.6.40)

Then the eigenvalue problem

(T − λ 1) · n_0 = 0 (4.6.41)

is substituted by the eigenvalue problem in general coordinates given by

(T̃^{ik} g_i ⊗ g_k − λ g^{ik} g_i ⊗ g_k) · n_0^l g_l = 0, (4.6.42)

with the vector n_0 = n_0^l g_l in the direction of a principal axis. Beginning with this eigenvalue problem,

(T̃^{ik} − λ g^{ik}) n_0^l (g_k · g_l) g_i = 0,
(T̃^{ik} − λ g^{ik}) n_0^l g_{kl} g_i = 0,
(T̃^{ik} − λ g^{ik}) n_{0k} g_i = 0. (4.6.43)
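The two results above, real eigenvalues and mutually orthogonal principal directions for a symmetric tensor, are easy to confirm numerically. A sketch using a random symmetrized matrix (the random seed and size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
T = 0.5 * (A + A.T)              # symmetrize, cf. the equilibrium of moments

lams, N = np.linalg.eigh(T)      # real eigenvalues; columns of N are n_0j

# the principal directions are orthonormal: N^T N = identity
gram = N.T @ N
```

`np.linalg.eigh` returns real eigenvalues by construction for a symmetric input, and the Gram matrix of the eigenvectors is the identity.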
4.7 Higher Order Tensors

A vector u could be represented w.r.t. both the covariant and the contravariant basis,

u = u^i g_i = u_i g^i, (4.7.6)
4.7.3 The Complete Permutation Tensor

The most important application of a third order tensor is the third order permutation tensor, which is the correct description of the permutation symbol in section (4.2). The complete permutation tensor is antisymmetric, and in the space E³ each component is just represented by a scalar quantity (positive or negative), see also the permutation symbols e and ε in section (4.2). The complete permutation tensor in Cartesian coordinates, also called the third order fundamental tensor, is defined by

³E = e_{ijk} e^i ⊗ e^j ⊗ e^k, or ³E = e^{ijk} e_i ⊗ e_j ⊗ e_k. (4.7.10)

For the orthonormal basis e_i the components of the covariant, e_{ijk}, and contravariant, e^{ijk}, permutation tensor (symbol) are equal, and

e_i × e_j = e_{ijk} e^k. (4.7.11)

Equation (4.2.26) in section (4.2) is the short form of a product in Cartesian coordinates between the third order tensor ³E and the second order identity tensor. The scalar product of the permutation tensor and e_i ⊗ e_j from the right-hand side yields

e_i × e_j = (e_{rsk} e^r ⊗ e^s ⊗ e^k) : (e_i ⊗ e_j)
          = e_{rsk} δ^s_i δ^k_j e^r = e_{rij} e^r,
e_i × e_j = e_{ijr} e^r, (4.7.12)

or the scalar product with e_i ⊗ e_j from the left-hand side yields

e_i × e_j = (e_i ⊗ e_j) : (e_{rsk} e^r ⊗ e^s ⊗ e^k)
          = e_{rsk} δ^r_i δ^s_j e^k,
e_i × e_j = e_{ijk} e^k. (4.7.13)

4.7.4 Introduction of a Fourth Order Tensor

The action of a fourth order tensor C, given by

C = C^{ijkl} (g_i ⊗ g_j ⊗ g_k ⊗ g_l), (4.7.14)

on a second order tensor T, given by

T = T^{mn} (g_m ⊗ g_n), (4.7.15)

is given in index notation, see also equation (4.4.104), by

S = C : T = (C^{ijkl} g_i ⊗ g_j ⊗ g_k ⊗ g_l) : (T^{mn} g_m ⊗ g_n) (4.7.16)
  = C^{ijkl} T^{mn} (g_k · g_m)(g_l · g_n) (g_i ⊗ g_j)
  = C^{ijkl} T^{mn} g_{km} g_{ln} (g_i ⊗ g_j) = C^{ijkl} T_{kl} (g_i ⊗ g_j),
S = S^{ij} g_i ⊗ g_j. (4.7.17)

Really important is the so-called elasticity tensor C used in elasticity theory. This is a fourth order tensor, which maps the strain tensor ε onto the stress tensor σ,

σ = C ε. (4.7.18)

Comparing this with the well-known one-dimensional Hooke's law, it is easy to see that this mapping is the generalized 3-dimensional linear case of Hooke's law. The elasticity tensor C has in general in the space E³ a total number of 3⁴ = 81 components. Because of the symmetry of the strain tensor ε and the stress tensor σ this number reduces to 36. With the potential character of the elastically stored deformation energy the number of components reduces to 21. For an elastic and isotropic material there is another reduction to 2 independent constants, e.g. the Young's modulus E and the Poisson's ratio ν.

4.7.5 Tensors of Various Orders

Higher order tensors are represented with dyadic products of vectors, e.g. a simple third order tensor and a complete third order tensor,

³B = Σ_{i=1}^{n} a_i ⊗ b_i ⊗ c_i = Σ_{i=1}^{n} T_i ⊗ c_i = B^{ijk} g_i ⊗ g_j ⊗ g_k, (4.7.19)

and a simple fourth order tensor and a complete fourth order tensor,

C = Σ_{i=1}^{n} a_i ⊗ b_i ⊗ c_i ⊗ d_i = Σ_{i=1}^{n} S_i ⊗ T_i = C^{ijkl} g_i ⊗ g_j ⊗ g_k ⊗ g_l. (4.7.20)

For example the tensors from order zero to order four are summarized in index notation with a basis g_i,

a scalar quantity, or a tensor of order zero:  ⁽⁰⁾α = α, (4.7.21)
a vector, or a first order tensor:             ⁽¹⁾v = v = v^i g_i, (4.7.22)
a second order tensor:                         ⁽²⁾T = T = T^{jk} g_j ⊗ g_k, (4.7.23)
a third order tensor:                          ³B = B^{ijk} g_i ⊗ g_j ⊗ g_k, (4.7.24)
a fourth order tensor:                         ⁽⁴⁾C = C = C^{ijkl} g_i ⊗ g_j ⊗ g_k ⊗ g_l. (4.7.25)
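As an illustration of the mapping σ = C ε of equation (4.7.18), the following sketch builds the isotropic elasticity tensor from the two constants E and ν via the Lamé constants. The closed form C^{ijkl} = λ δ^{ij}δ^{kl} + μ(δ^{ik}δ^{jl} + δ^{il}δ^{jk}) is a standard result assumed here rather than derived in the text, and the material values are made up:

```python
import numpy as np

E, nu = 210e9, 0.3                        # Young's modulus, Poisson's ratio (steel-like)
lam = E * nu / ((1 + nu) * (1 - 2 * nu))  # Lame constants
mu = E / (2 * (1 + nu))

d = np.eye(3)
# isotropic fourth order elasticity tensor (Cartesian components)
C = (lam * np.einsum('ij,kl->ijkl', d, d)
     + mu * (np.einsum('ik,jl->ijkl', d, d) + np.einsum('il,jk->ijkl', d, d)))

eps = np.array([[1e-3, 0.0, 0.0],
                [0.0,  0.0, 0.0],
                [0.0,  0.0, 0.0]])        # uniaxial strain state

sigma = np.einsum('ijkl,kl->ij', C, eps)  # sigma = C : eps, cf. eq. (4.7.18)
```

For this uniaxial strain state the result is σ_11 = (λ + 2μ) ε_11 and σ_22 = σ_33 = λ ε_11, and σ is symmetric, which reflects the reduction of the 81 components discussed above.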
SIMMONDS [12], HALMOS [6], ABRAHAM, MARSDEN, and RATIU [1], and MATTHEWS [11], and in German DE BOER [3], STEIN ET AL. [13], and IBEN [7].

Chapter 5

Vector and Tensor Analysis
The second derivative of the vector-valued function v̂(α) of one scalar variable at a value α is given by

d²v(α)/dα² = d/dα (dv/dα) = v''. (5.1.10)
The Taylor series of the tensor-valued function T̂(α) of one scalar variable, see equation (5.1.3), at a value α is given by

T̂(α + τ) = T̂(α) + Y(α) · τ + O(τ²). (5.1.11)

This implies the derivative of a tensor w.r.t. a scalar quantity,

Y(α) = dT/dα = T' = lim_{τ→0} (T̂(α + τ) − T̂(α)) / τ. (5.1.12)

In the following some important identities are listed,

T T⁻¹ = 1,
(T T⁻¹)' = T' T⁻¹ + T (T⁻¹)' = 0,
⇒ T (T⁻¹)' = −T' T⁻¹,
⇒ (T⁻¹)' = −T⁻¹ T' T⁻¹.

5.1.2 Functions of more than one Scalar Variable

Like for the functions of one scalar variable it is also possible to define various functions of more than one scalar variable, e.g.

β = β̂(α_1, α_2, …, α_i, …, α_n), a scalar-valued function of multiple variables, (5.1.20)
v = v̂(α_1, α_2, …, α_i, …, α_n), a vector-valued function of multiple variables, (5.1.21)

and finally

T = T̂(α_1, α_2, …, α_i, …, α_n), a tensor-valued function of multiple variables. (5.1.22)

Instead of establishing the total differentials like in the section before, it is now necessary to establish the partial derivatives of the functions w.r.t. the various variables. Starting with the scalar-valued function (5.1.20), the partial derivative w.r.t. the i-th scalar variable α_i is defined by

∂β/∂α_i = β,_i = lim_{τ→0} (β̂(α_1, …, α_i + τ, …, α_n) − β̂(α_1, …, α_i, …, α_n)) / τ. (5.1.23)

With these partial derivatives β,_i the exact differential of the function β is given by

dβ = β,_i dα_i. (5.1.24)

The partial derivatives of the vector-valued function (5.1.21) w.r.t. the scalar variable α_i are defined by

∂v/∂α_i = v,_i = lim_{τ→0} (v̂(α_1, …, α_i + τ, …, α_n) − v̂(α_1, …, α_i, …, α_n)) / τ. (5.1.25)

5.1.3 The Moving Trihedron of a Space Curve in Euclidean Space

A vector function x = x(Θ¹) with one variable Θ¹ in the Euclidean vector space E³ could be represented by a space curve. The vector x(Θ¹) is the position vector from the origin O to the point P on the space curve. The tangent vector t(Θ¹) at a point P is then defined by

t(Θ¹) = x'(Θ¹) = dx(Θ¹)/dΘ¹. (5.1.29)

The tangent unit vector, or just the tangent unit, of the space curve at a point P with the position vector x is defined by

t(s) = (∂x/∂Θ¹)(∂Θ¹/∂s) = dx/ds, and |t(s)| = 1. (5.1.30)

The normal vector at a point P on a space curve is defined with the derivative of the tangent vector w.r.t. the curve parameter s by

n* = dt/ds = d²x/ds². (5.1.31)

The term 1/ρ is a measure of the curvature, or just the curvature, of a space curve at a point P. The normal vector n* at a point P is perpendicular to the tangent vector t at this point,

n* ⊥ t, i.e. n* · t = 0,
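The identity (T⁻¹)' = −T⁻¹ T' T⁻¹ derived above can be checked against a finite-difference derivative. A sketch with a made-up, smooth, invertible tensor-valued function T̂(α):

```python
import numpy as np

def T_of(alpha):
    # an invertible tensor-valued function of one scalar variable (illustration)
    return np.array([[2.0 + alpha, 1.0],
                     [0.0,         1.0 + alpha**2]])

def T_prime(alpha):
    # its exact derivative T'
    return np.array([[1.0, 0.0],
                     [0.0, 2.0 * alpha]])

a, h = 0.7, 1e-6
# central finite difference of T^{-1}
num = (np.linalg.inv(T_of(a + h)) - np.linalg.inv(T_of(a - h))) / (2 * h)
# analytic identity (T^{-1})' = -T^{-1} T' T^{-1}
Tinv = np.linalg.inv(T_of(a))
ana = -Tinv @ T_prime(a) @ Tinv
```

The two matrices agree up to the O(h²) discretization error of the central difference.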
Figure 5.1: The tangent vector in a point P on a space curve.

Figure 5.2: The moving trihedron.

and the curvature is given by

1/ρ² = d²x/ds² · d²x/ds². (5.1.32)

The proof of this assumption starts with the scalar product of two tangent vectors,

t · t = 1,
d/ds (t · t) = 0,
2 (dt/ds) · t = 0,

and finally results in the statement that the scalar product of the derivative w.r.t. the curve parameter and the tangent vector equals zero, i.e. these two vectors are perpendicular to each other,

dt/ds ⊥ t.

This implies the definition

1/ρ = |n*| of the curvature of a curve at a point P.

With the curvature 1/ρ the normal unit vector n, or just the normal unit, is defined by

n = ρ · n*. (5.1.33)

The so-called binormal unit vector b, or just the binormal unit, is the vector perpendicular to the tangent vector t and the normal vector n at a point P, defined by

b = t × n. (5.1.34)

The change of the binormal unit vector is a measure for the torsion of the curve in space at a point P, and the derivative of the binormal vector w.r.t. the curve parameter s implies

db/ds = dt/ds × n + t × dn/ds
      = n* × n + t × dn/ds
      = 0 + (1/τ) n,
db/ds = (1/τ) n. (5.1.35)

This yields the definition

1/τ of the torsion of a curve at a point P,

and with equation (5.1.35) the torsion is given by

1/τ = −ρ² (dx/ds × d²x/ds²) · d³x/ds³. (5.1.36)

The three unit vectors t, n and b form the moving trihedron of a space curve in every point P. The derivatives w.r.t. the curve parameter s are the so-called Serret-Frenet equations, and are given below.
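The moving trihedron can be computed numerically for a concrete curve. The sketch below uses a helix parametrized by arc length (the radius and pitch are made-up values) and approximates the derivatives in equations (5.1.30), (5.1.31) and (5.1.34) by central differences; the curvature of such a helix is known to be r/(r² + h²):

```python
import numpy as np

r, h = 2.0, 1.0
c = np.hypot(r, h)                       # arc-length scaling of the helix

def x(s):
    # helix parametrized by arc length s
    return np.array([r * np.cos(s / c), r * np.sin(s / c), h * s / c])

s0, ds = 1.0, 1e-5
t = (x(s0 + ds) - x(s0 - ds)) / (2 * ds)                 # tangent unit, eq. (5.1.30)
n_star = (x(s0 + ds) - 2 * x(s0) + x(s0 - ds)) / ds**2   # dt/ds, eq. (5.1.31)

kappa = np.linalg.norm(n_star)           # curvature 1/rho = |n*|
n = n_star / kappa                       # normal unit vector, eq. (5.1.33)
b = np.cross(t, n)                       # binormal unit vector, eq. (5.1.34)
```

For r = 2 and h = 1 the curvature is r/c² = 2/5 = 0.4, |t| = 1 because the parametrization is by arc length, and t, n, b are mutually orthogonal unit vectors.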
5.1.4 Covariant Base Vectors of a Curved Surface in Euclidean Space

The vector-valued function x = x(Θ¹, Θ²) with two scalar variables Θ¹ and Θ² represents a curved surface in the Euclidean vector space E³.

[Figure: the covariant base vectors a_1, a_2, a_3 of the curved surface at a point P, with a line element ds, the position vector x, and the Cartesian basis e_i.]

With this relation the other metric coefficients are given by

ds² = dx · dx = x,_α dΘ^α · x,_β dΘ^β
    = a_α · a_β dΘ^α dΘ^β = a_{αβ} dΘ^α dΘ^β,

and finally

ds = √(a_{αβ} dΘ^α dΘ^β). (5.1.45)

The differential element of area dA is given by

dA = √a dΘ¹ dΘ². (5.1.46)

The contravariant base vectors of the curved surface are computed with the metric coefficients and the covariant base vectors,

a^α = a^{αβ} a_β, (5.1.47)
Figure 5.4: Curvilinear coordinates in a Cartesian coordinate system.

Figure 5.5: The natural basis of a curvilinear coordinate system.

• the function is at least one time continuously differentiable,

• and the Jacobian, or more precisely the determinant of the Jacobian matrix, is not equal to zero,

J = det[∂x^i/∂Θ^k] ≠ 0. (5.1.52)

The vector x w.r.t. the curvilinear coordinates is represented by

x = x̂^i(Θ¹, Θ², Θ³) e_i. (5.1.53)

… position vectors of the points along this curvilinear coordinate. The vector x w.r.t. the covariant basis is given by

x = x̄^i g_i. (5.1.55)

For each covariant natural basis g_k an associated contravariant basis with the contravariant base vectors of the natural basis, g^i, is defined by

g^k · g_i = δ^k_i. (5.1.56)

The vector x w.r.t. the contravariant basis is represented by

x = x̄̄_i g^i. (5.1.57)

The coordinates x̄^i w.r.t. the covariant basis and the coordinates x̄̄_i w.r.t. the contravariant basis of the position vector x are …
The Γ^s_{ik} are the components of the Christoffel symbol Γ_{(i)}. The Christoffel symbol could be described by a second order tensor w.r.t. the basis g_i,

Γ_{(i)} = Γ^s_{ij} g_s ⊗ g^j. (5.1.61)

With this definition of the Christoffel symbol as a second order tensor a linear mapping of the base vector g_k is given by

g_{i,k} = Γ_{(i)} · g_k = (Γ^s_{ij} g_s ⊗ g^j) · g_k (5.1.62)
        = Γ^s_{ij} (g^j · g_k) g_s = Γ^s_{ij} δ^j_k g_s,

and finally

g_{i,k} = Γ^s_{ik} g_s. (5.1.63)

Equation (5.1.63) is again the definition of the Christoffel symbol, like in equation (5.1.60). With this relation the components of the Christoffel symbol could be computed, like this,

g_{i,k} · g^s = Γ^r_{ik} g_r · g^s = Γ^r_{ik} δ^s_r = Γ^s_{ik}. (5.1.64)

Like for any other second order tensor, the raising and lowering of the indices of the Christoffel symbol is possible with the contravariant metric coefficients g^{ls} and the covariant metric coefficients g_{ls}, e.g.

Γ_{ikl} = g_{ls} Γ^s_{ik}. (5.1.65)

Also important are the relations between the derivatives of the metric coefficients w.r.t. the coordinates Θ^i and the components of the Christoffel symbol,

Γ_{ikl} = ½ (g_{kl,i} + g_{il,k} − g_{ik,l}). (5.1.66)

5.2 Derivatives and Operators of Fields

5.2.1 Definitions and Examples

A function of a Euclidean vector, for example of a vector of position x ∈ E³, is called a field. The fields are separated into three classes by their value,

α = α̂(x), the scalar-valued vector function or scalar field, (5.2.1)
v = v̂(x), the vector-valued vector function or vector field, (5.2.2)

and

T = T̂(x), the tensor-valued vector function or tensor field. (5.2.3)

For example some frequently used fields in the Euclidean vector space are

• scalar fields - temperature field, pressure field, density field,

• vector fields - velocity field, acceleration field,

• tensor fields - stress state in a volume element.

5.2.2 The Gradient or Fréchet Derivative of Fields

A vector-valued vector function or vector field v = v(x) is differentiable at a point P, represented by a vector of position x, if the following linear mapping exists,

v̂(x + y) = v̂(x) + L(x) · y + O(y²), and |y| → 0, (5.2.4)
L(x) · (y/|y|) = lim_{|y|→0} (v̂(x + y) − v̂(x)) / |y|. (5.2.5)

The linear mapping L(x) is called the gradient or the Fréchet derivative. The gradient grad v̂(x) of a vector-valued vector function (vector field) is a tensor-valued function (a second order tensor depending on the vector of position x). For a scalar-valued vector function or a scalar field α̂(x) there exists an analogue to equation (5.2.4), resp. (5.2.5),

α̂(x + y) = α̂(x) + l(x) · y + O(y²), and |y| → 0, (5.2.7)
l(x) · (y/|y|) = lim_{|y|→0} (α̂(x + y) − α̂(x)) / |y|. (5.2.8)

And with this relation the gradient of the scalar field is a vector-valued vector function (vector field) given by

l(x) = grad α̂(x). (5.2.9)
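Equations (5.1.65) and (5.1.66) give a direct recipe for computing Christoffel symbols from the metric. A sketch for plane polar coordinates (Θ¹ = r, Θ² = φ, with g_11 = 1 and g_22 = r²), where the well-known values Γ¹_22 = −r and Γ²_12 = 1/r should result; the metric derivatives are approximated by central differences:

```python
import numpy as np

def g_cov(theta):
    # covariant metric of plane polar coordinates (r, phi)
    r, phi = theta
    return np.array([[1.0, 0.0],
                     [0.0, r * r]])      # g_11 = 1, g_22 = r^2

theta0 = np.array([2.0, 0.5])            # evaluation point, r = 2
h = 1e-6

# partial derivatives g_{ik,l} by central differences
dg = np.zeros((2, 2, 2))
for l in range(2):
    e = np.zeros(2); e[l] = h
    dg[:, :, l] = (g_cov(theta0 + e) - g_cov(theta0 - e)) / (2 * h)

# Christoffel symbols of the first kind, eq. (5.1.66)
Gamma1 = np.zeros((2, 2, 2))             # Gamma1[i, k, l] = Gamma_{ikl}
for i in range(2):
    for k in range(2):
        for l in range(2):
            Gamma1[i, k, l] = 0.5 * (dg[k, l, i] + dg[i, l, k] - dg[i, k, l])

# raise the last index, eq. (5.1.65): Gamma^s_{ik} = g^{sl} Gamma_{ikl}
g_con = np.linalg.inv(g_cov(theta0))
Gamma2 = np.einsum('sl,ikl->sik', g_con, Gamma1)
```

At r = 2 this yields Γ¹_22 = −2 and Γ²_12 = 0.5, matching the analytic values.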
Finally for a tensor-valued vector function or a tensor field T̂(x) the relations analogue to equation (5.2.4), resp. (5.2.5), are given by

T̂(x + y) = T̂(x) + ³L(x) · y + O(y²), and |y| → 0, (5.2.10)
³L(x) · (y/|y|) = lim_{|y|→0} (T̂(x + y) − T̂(x)) / |y|. (5.2.11)

The gradient of the second order tensor field is a third order tensor-valued vector function (third order tensor field) given by

³L(x) = grad T̂(x). (5.2.12)

The gradient of a second order tensor, grad(v ⊗ w) or grad T, is a third order tensor, because (grad v) ⊗ w, being a dyadic product of a second order tensor and a vector ("first order tensor"), is a third order tensor. For arbitrary scalar fields α, β ∈ R, vector fields v, w ∈ V, and tensor fields T ∈ V ⊗ V the following identities hold.

5.2.3 Index Notation of Base Vectors

Most of the up to now discussed relations hold for all n-dimensional Euclidean vector spaces Eⁿ, but most uses are in continuum mechanics and in the 3-dimensional Euclidean vector space E³. The scalar-valued, vector-valued or tensor-valued functions depend on a vector x ∈ V, e.g. a vector of position at a point P. In the sections below the following bases are used, the curvilinear coordinates Θ^i with the covariant base vectors,

g_i = ∂x/∂Θ^i = x,_i, (5.2.20)

and the Cartesian coordinates x_i = x^i with the orthonormal basis

e_i = e^i. (5.2.21)

5.2.4 The Derivatives of Base Vectors

In section (5.1) the partial derivatives of the base vectors g_i w.r.t. the coordinates Θ^k were introduced by

∂g_i/∂Θ^k = g_{i,k} = Γ^s_{ik} g_s. (5.2.22)

With the Christoffel symbols defined by equations (5.1.60) and (5.1.61),

Γ_{(i)} = Γ^s_{ij} g_s ⊗ g^j, (5.2.23)

the derivatives of the base vectors are rewritten,

g_{i,k} = Γ_{(i)} · g_k. (5.2.24)

The definition of the gradient, compared with equation (5.2.23), shows that the Christoffel symbols are computed by the derivatives of the base vectors. Finally the gradient of a base vector is represented in index notation by

grad g_i = g_{i,j} ⊗ g^j = Γ^s_{ij} g_s ⊗ g^j. (5.2.27)

5.2.5 The Covariant Derivative

Let v = v̂(x) = v^i g_i be a vector field, see equation (5.2.14); then the gradient of the vector field is given by

grad v = grad v̂(x) = grad(v^i g_i) = g_i ⊗ grad v^i + v^i grad g_i. (5.2.28)

The gradient of a scalar-valued vector function α(x) is defined by

grad α = (∂α/∂Θ^i) g^i = α,_i g^i, (5.2.29)
then the gradient of the contravariant coefficients v^i(x) in the first term of equation (5.2.28) is given by

grad v^i = v^i_{,k} g^k. (5.2.30)

With equation (5.2.30) in (5.2.28), and together with equation (5.2.25), the complete gradient of a vector field could be given by

grad v = g_i ⊗ v^i_{,k} g^k + v^i Γ_{(i)}, (5.2.31)

and finally with equation (5.2.23),

grad v = v^i_{,k} g_i ⊗ g^k + v^i Γ^s_{ik} g_s ⊗ g^k. (5.2.32)

The dummy indices i and s are renamed like this,

i ⇒ s, and s ⇒ i.

Rewriting equation (5.2.32) and factoring out the dyadic product implies

grad v = (v^i_{,k} + v^s Γ^i_{sk}) g_i ⊗ g^k. (5.2.33)

The nabla operator is given by

∇ = (…),_i e_i = ∂(…)/∂x_1 e_1 + ∂(…)/∂x_2 e_2 + ∂(…)/∂x_3 e_3,

and finally defined by

∇ = ∂(…)/∂x_i e_i. (5.2.38)

This definition implies another notation for the gradient in a 3-dimensional Cartesian basis of the Euclidean vector space,

grad α = ∇α. (5.2.39)

Let v = v̂(x) be a vector field in a space with the Cartesian basis e_i,

v = v̂(x) = v^i e_i = v_i e^i = v_i e_i, with v_i = v_i(x_1, x_2, x_3), (5.2.40)

then the gradient of the vector field in a Cartesian basis is given, with the relation of equation (5.2.14), by

grad v = grad v̂(x) = grad(v_i e_i) = e_i ⊗ grad v_i + v_i grad e_i. (5.2.41)

Computing the second term of equation (5.2.41) implies that all derivatives of the base vectors w.r.t. the vector of position x, see the definition (5.2.38), are equal to zero,

grad e_i = (e_i),_j e_j = ∂e_i/∂x_1 e_1 + ∂e_i/∂x_2 e_2 + ∂e_i/∂x_3 e_3 = 0. (5.2.42)

The divergence of a tensor field is defined by

div T = grad(T) : 1, and ∀ T ∈ V ⊗ V, (5.2.46)

and must be a vector-valued quantity, because the scalar product of the second order unit tensor 1 and the third order tensor grad(T) is a vector-valued quantity. Another possible definition is given by

a · div T = div(Tᵀ a) = grad(Tᵀ a) : 1. (5.2.47)
For an arbitrary scalar field α ∈ R, arbitrary vector fields v, w ∈ V, and arbitrary tensor fields S, T ∈ V ⊗ V the following identities hold,

div(α v) = v · grad α + α div v, (5.2.48)
div(α T) = T grad α + α div T, (5.2.49)
div(grad v)ᵀ = grad(div v), (5.2.50)
div(v × w) = (grad v × w) : 1 − (grad w × v) : 1
           = w · rot v − v · rot w, (5.2.51)
div(v ⊗ w) = (grad v) w + (div w) v, (5.2.52)
div(T v) = div(Tᵀ) · v + Tᵀ : grad v, (5.2.53)
div(v × T) = v × div T + grad v × T, (5.2.54)
div(T S) = (grad T) S + T div S. (5.2.55)

5.2.8 Index Notation of the Divergence of Vector Fields

Let v = v̂(x) = v^i g_i ∈ V be a vector field, with v^i = v̂^i(Θ¹, Θ², Θ³, …, Θⁿ); then a basis is given by

g_i = x,_i = ∂x/∂Θ^i. (5.2.56)

The definition of the divergence (5.2.45) of a vector field, using the index notation of the gradient (5.2.35), implies

div v = grad v : 1 = [v^i|_k (g_i ⊗ g^k)] : [δ^r_s (g_r ⊗ g^s)]
      = v^i|_k δ^r_s g_{ir} g^{ks} = v^i|_k g_{is} g^{ks} = v^i|_k δ^k_i,
div v = v^i|_i. (5.2.57)

The divergence of a vector field is a scalar quantity and an invariant.

5.2.9 Index Notation of the Divergence of Tensor Fields

Let T be a tensor, given by

T = T̂(x) = T^{ik} (g_i ⊗ g_k) ∈ V ⊗ V, (5.2.58)

and

T^{ik} = T̂^{ik}(Θ¹, Θ², Θ³, …, Θⁿ), (5.2.59)

and with equation (5.2.47) the divergence of this tensor is given by

div T = grad T : 1, and 1 = δ^r_s g_r ⊗ g^s. (5.2.60)

The divergence of a second order tensor is a vector, see also equation (5.2.52),

div T = div(T^{ik} g_i ⊗ g_k)
      = [grad(T^{ik} g_i)] g_k + [div g_k] T^{ik} g_i, (5.2.61)

and with equation (5.2.14),

div T = [g_i ⊗ grad T^{ik} + T^{ik} grad g_i] g_k + (grad g_k : 1) T^{ik} g_i
      = T^{ik}_{,k} g_i + T^{ik} Γ^s_{ik} g_s + Γ^j_{kj} T^{ik} g_i,

and finally after renaming the dummy indices,

div T = (T^{kl}_{,l} + T^{lm} Γ^k_{lm} + T^{km} Γ^l_{ml}) g_k. (5.2.62)

The term T^{kl}|_l, defined by

T^{kl}|_l = T^{kl}_{,l} + T^{lm} Γ^k_{lm} + T^{km} Γ^l_{ml}, (5.2.63)

is the so-called covariant derivative of the tensor coefficients w.r.t. the coordinates Θ^l; then the divergence is given by

div T = T^{kl}|_l g_k. (5.2.64)

Other representations are possible, e.g. a mixed formulation is given by

div T = T^k_l|_k g^l, (5.2.65)

with the covariant derivative T^k_l|_k,

T^k_l|_k = T^k_{l,k} − T^k_n Γ^n_{lk} + T^n_l Γ^k_{kn}. (5.2.66)

5.2.10 The Divergence in a 3-dim. Cartesian Basis in Euclidean Space

A vector field v in a Cartesian basis e_i ∈ E³ is represented by equation (5.2.40) and its gradient by equation (5.2.41). The divergence defined by (5.2.45), rewritten using the definition of the gradient of a vector field in a Cartesian basis (5.2.44), is given by

div v = grad v : 1 = (v_{i,k} e_i ⊗ e_k) : (δ_{rs} e_r ⊗ e_s)
      = v_{i,k} δ_{rs} δ_{ir} δ_{ks} = v_{i,k} δ_{ik},
div v = v_{i,i}.

The divergence of a vector field in the 3-dimensional Euclidean space with a Cartesian basis E³ is a scalar invariant, and is given by

div v = v_{i,i}, (5.2.67)

or in its complete description by

div v = ∂v_1/∂x_1 + ∂v_2/∂x_2 + ∂v_3/∂x_3. (5.2.68)
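Equation (5.2.67), div v = v_{i,i}, can be checked with central differences for a concrete field. A sketch with a made-up polynomial vector field:

```python
import numpy as np

def v(x):
    # a smooth vector field in Cartesian coordinates (illustration)
    return np.array([x[0]**2, x[0] * x[1], x[2]])

x0, h = np.array([1.0, 2.0, 3.0]), 1e-6

div_v = 0.0
for i in range(3):
    e = np.zeros(3); e[i] = h
    # v_{i,i}: partial derivative of the i-th component w.r.t. x_i, eq. (5.2.67)
    div_v += (v(x0 + e)[i] - v(x0 - e)[i]) / (2 * h)

# analytic value: d(x1^2)/dx1 + d(x1 x2)/dx2 + d(x3)/dx3 = 2 x1 + x1 + 1
```

At x0 = (1, 2, 3) the analytic divergence is 2·1 + 1 + 1 = 4.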
5.2.11 Rotation or Curl of Vector and Tensor Fields

The rotation of a vector field v(x) is defined with the fundamental tensor ³E by

rot v = ³E : (grad v)ᵀ. (5.2.69)

In English textbooks in most cases the curl operator curl, instead of the rotation operator rot, is used,

rot v = curl v. (5.2.70)

The rotation, resp. curl, of a vector field, rot v(x) or curl v(x), is a unique vector field. Sometimes another definition of the rotation of a vector field is given by

rot v = div(1 × v) = 1 × grad v. (5.2.71)

For an arbitrary scalar field α ∈ R, arbitrary vector fields v, w ∈ V, and an arbitrary tensor field T ∈ V ⊗ V the following identities hold,

rot grad α = 0, (5.2.72)
div rot v = 0, (5.2.73)
rot grad v = 0, (5.2.74)
rot(grad v)ᵀ = grad rot v, (5.2.75)
rot(α v) = α rot v + grad α × v, (5.2.76)
rot(v × w) = v div w − (grad w) v − w div v + (grad v) w
           = div(v ⊗ w − w ⊗ v), (5.2.77)
div rot T = rot div Tᵀ, (5.2.78)
div(rot T)ᵀ = 0, (5.2.79)
(rot rot T)ᵀ = rot rot Tᵀ, (5.2.80)
rot(α 1) = −[rot(α 1)]ᵀ, (5.2.81)
rot(T v) = rot(Tᵀ) v + (grad v)ᵀ × T. (5.2.82)

Also important to notice is that, if the tensor field T is symmetric, then the following identity holds,

rot T : 1 = 0. (5.2.83)

… and Δv is a vector-valued quantity. The definition of the Laplacian of a tensor field, ΔT, is given by

ΔT = (grad grad T) : 1, (5.2.86)

and ΔT is a tensor-valued quantity. For an arbitrary vector field v ∈ V and an arbitrary tensor field T ∈ V ⊗ V the following identities hold,

div[grad v ± (grad v)ᵀ] = Δv ± grad div v, (5.2.87)
rot rot v = grad div v − Δv, (5.2.88)
Δ tr T = tr ΔT, (5.2.89)
rot rot T = −ΔT + grad div T + (grad div T)ᵀ
            − grad grad tr T + 1 [Δ(tr T) − div div T]. (5.2.90)

Finally, if the tensor field T is symmetric and defined by T = S − 1 tr S, with the symmetric part given by S, then the following identity holds,

rot rot T = −ΔS + grad div S + (grad div S)ᵀ − 1 div div S. (5.2.91)
5.3 Integral Theorems

The surface integral of a tensor product of a vector field u(x) and an area vector da should be transformed into a volume integral. The volume element dV with the surface dA is given at a point P by the position vector x in the Euclidean space E³. The surface dA of the volume element dV is described by the six surface elements represented by the area vectors da_1, …, da_6.

Figure 5.6: The volume element dV with the surface dA.

The area vectors are given by

da_1 = dΘ² dΘ³ g_3 × g_2 = −dΘ² dΘ³ √g g¹ = −da_4, (5.3.1)
da_2 = dΘ¹ dΘ³ g_1 × g_3 = −dΘ¹ dΘ³ √g g² = −da_5, (5.3.2)
da_3 = dΘ¹ dΘ² g_2 × g_1 = −dΘ¹ dΘ² √g g³ = −da_6. (5.3.3)

The volume of the volume element dV is given by the scalar triple product of the tangential vectors g_i associated to the curvilinear coordinates Θ^i at the point P,

dV = (g_1 × g_2) · g_3 dΘ¹ dΘ² dΘ³,

Let ũ_i be defined as the mean value of the vector field u in the area element i; for example for i = 1 a Taylor series is derived like this,

ũ_1 = u + ∂u/∂Θ² (dΘ²/2) + ∂u/∂Θ³ (dΘ³/2). (5.3.5)

The Taylor series for the area element i = 4 is given by

ũ_4 = ũ_1 + (∂ũ_1/∂Θ¹) dΘ¹,
ũ_4 = ũ_1 + ∂[u + ∂u/∂Θ² (dΘ²/2) + ∂u/∂Θ³ (dΘ³/2)]/∂Θ¹ dΘ¹,

and finally

ũ_4 = ũ_1 + ∂u/∂Θ¹ dΘ¹ + ∂²u/∂Θ¹∂Θ² (dΘ²/2) dΘ¹ + ∂²u/∂Θ¹∂Θ³ (dΘ³/2) dΘ¹. (5.3.6)

Considering only the linear terms for i = 4, 5, 6 implies

ũ_4 = ũ_1 + ∂u/∂Θ¹ dΘ¹, (5.3.7)
ũ_5 = ũ_2 + ∂u/∂Θ² dΘ², (5.3.8)

and

ũ_6 = ũ_3 + ∂u/∂Θ³ dΘ³. (5.3.9)

The surface integral is approximated by the sum over the six area elements,

∫_{dA} u ⊗ da = Σ_{i=1}^{6} ũ_i ⊗ da_i. (5.3.10)

This equation is rewritten with all six terms,

Σ_{i=1}^{6} ũ_i ⊗ da_i = ũ_1 ⊗ da_1 + ũ_2 ⊗ da_2 + ũ_3 ⊗ da_3 + ũ_4 ⊗ da_4 + ũ_5 ⊗ da_5 + ũ_6 ⊗ da_6,

and inserting equations (5.3.1)-(5.3.3),

Σ_{i=1}^{6} ũ_i ⊗ da_i = (ũ_1 − ũ_4) ⊗ da_1 + (ũ_2 − ũ_5) ⊗ da_2 + (ũ_3 − ũ_6) ⊗ da_3,
and finally with equations (5.3.7)-(5.3.9),

sum_{i=1}^{6} ũ_i ⊗ da_i = - ∂u/∂Θ^1 · dΘ^1 ⊗ da_1 - ∂u/∂Θ^2 · dΘ^2 ⊗ da_2 - ∂u/∂Θ^3 · dΘ^3 ⊗ da_3.     (5.3.11)

Equations (5.3.1)-(5.3.3) inserted in (5.3.11) imply

sum_{i=1}^{6} ũ_i ⊗ da_i = ( ∂u/∂Θ^1 ⊗ g^1 + ∂u/∂Θ^2 ⊗ g^2 + ∂u/∂Θ^3 ⊗ g^3 ) √g dΘ^1 dΘ^2 dΘ^3,

and with the summation convention i = 1, ..., 3,

sum_{i=1}^{6} ũ_i ⊗ da_i = ( ∂u/∂Θ^i ⊗ g^i ) dV,

and finally with the definition of the gradient

sum_{i=1}^{6} ũ_i ⊗ da_i = grad u dV.     (5.3.12)

Comparing this result with equation (5.3.10) yields

∫_dA u ⊗ da = sum_{i=1}^{6} ũ_i ⊗ da_i = ( ∂u/∂Θ^i ⊗ g^i ) dV = grad u dV.     (5.3.13)

Equation (5.3.13) holds for every subvolume dV with the surface dA. If the terms of the subvolumes are summed over, then the dyadic products on the inner surfaces vanish, because every dyadic product appears twice: every area vector da appears once with the normal direction n and once with the opposite direction -n, and the vector field u is by definition continuous, i.e. on both sides of an inner surface the value of the vector field is equal. In order to solve the whole problem it is only necessary to sum (to integrate) over the whole outer surface A with the normal unit vector n. If the summation over all subvolumes dV with the surfaces da is rewritten as an integral, like in equation (5.3.13), then for the whole volume and surface the following relation holds,

sum_{i=1}^{n_V} ∫_dA u ⊗ da = ∫_A u ⊗ da = ∫_V grad u dV,     (5.3.14)

and with da = da n,

sum_{i=1}^{n_V} ∫_dA u ⊗ n da = ∫_A u ⊗ n da = ∫_V grad u dV.     (5.3.15)

With these integral theorems it is easy to develop integral theorems for scalar fields, vector fields and tensor fields.

5.3.2 Gauss's Theorem for a Vector Field

Gauss's theorem is defined by

∫_A u · n da = ∫_V div u dV.     (5.3.16)

Proof. Equation (5.3.15) is multiplied scalarly with the unit tensor 1 from the left-hand side,

1 : ∫_A u ⊗ n da = 1 : ∫_V grad u dV,     (5.3.17)
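The identity (5.3.16) can be checked numerically for a concrete field. The sketch below is only an illustration: the field u, the unit cube as the volume V, and the grid resolution are arbitrary assumptions, not taken from the text. It approximates the surface integral with a midpoint rule and compares it with the exact volume integral of div u:

```python
import numpy as np

# Numerical check of Gauss's theorem (5.3.16) on the unit cube [0,1]^3 for the
# (arbitrarily chosen) vector field u(x,y,z) = (x^2, y, 0):
#   div u = 2x + 1, hence the volume integral is exactly
#   ∫_V div u dV = ∫_0^1 (2x + 1) dx = 2.

def u(p):
    x, y, z = p
    return np.array([x * x, y, 0.0])

def surface_flux(n_grid=20):
    """Midpoint-rule approximation of the surface integral ∫_A u · n da."""
    h = 1.0 / n_grid
    mids = (np.arange(n_grid) + 0.5) * h   # midpoints of the face cells
    flux = 0.0
    for axis in range(3):                  # pairs of opposite cube faces
        for side, sign in ((0.0, -1.0), (1.0, 1.0)):
            n = np.zeros(3)
            n[axis] = sign                 # outward unit normal of this face
            for a in mids:
                for b in mids:
                    p = np.empty(3)
                    p[axis] = side
                    p[(axis + 1) % 3] = a
                    p[(axis + 2) % 3] = b
                    flux += (u(p) @ n) * h * h
    return flux

volume_integral = 2.0                      # exact value of ∫_V div u dV
print(surface_flux(), volume_integral)
```

For this polynomial field the integrand is constant on every face, so the midpoint rule is exact and the two numbers agree to machine precision; for general fields the agreement improves with the grid resolution.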
5.3.3 Divergence Theorem for a Tensor Field

The divergence theorem is defined by

∫_A T · n da = ∫_V div T dV.     (5.3.21)

Proof. If the vector a is constant, then this implies

a · ∫_A T · n da = ∫_A a · T · n da = ∫_A n · Tᵀ · a da = ∫_A (Tᵀ · a) · n da.     (5.3.22)

With Tᵀ · a = u being a vector it is possible to use Gauss's theorem (5.3.16); this implies

∫_A a · T · n da = ∫_A (Tᵀ · a) · n da = ∫_V div (Tᵀ · a) dV,

with equation (5.2.47) and the vector a being constant,

div (Tᵀ a) = (div T) · a + T : grad a = (div T) · a + 0 = a · div T,

hence

a · [ ∫_A T · n da - ∫_V div T dV ] = 0,

and finally, since a is an arbitrary constant vector,

∫_A T · n da = ∫_V div T dV.

5.3.4 Integral Theorem for a Scalar Field

It is possible to use Gauss's theorem (5.3.16), because αa, with α a scalar field and a a constant vector, is a vector field; this implies

∫_A a · α n da = ∫_V div (αa) dV.     (5.3.24)

Using the identity (5.2.48) and the vector a being constant yields

div (αa) = a · grad α + α div a = a · grad α + 0 = a · grad α.     (5.3.25)

Inserting relation (5.3.25) in equation (5.3.24) implies

∫_A a · α n da = ∫_V a · grad α dV,

resp.

a · [ ∫_A α n da - ∫_V grad α dV ] = 0,

and finally the identity

∫_A α n da = ∫_V grad α dV.

5.3.5 Integral of a Cross Product or Stokes's Theorem

Stokes's theorem for the cross product of a vector field u and its normal vector n is defined by

∫_A n × u da = ∫_V rot u dV.     (5.3.26)
and finally

∫_A n × u da = ∫_V rot u dV.

5.3.6 Another Interpretation of Gauss's Theorem

Gauss's theorem, see equation (5.3.16), can also be established by inverting the definition of the divergence. Let u(x) be a continuous and differentiable vector field. The volume integral is approximated by a limit, see also (5.3.10), where the whole surface is approximated by surface elements da_i,

∫_V u(x) dV = lim_{ΔV_i → 0} sum_i ũ_i ΔV_i.     (5.3.29)

Let ũ_i be the mean value in a subvolume ΔV_i. The volume integral of the divergence of the vector field u, inserting the relation of equation (5.3.29), is given by

∫_V div u dV = lim_{ΔV_i → 0} sum_i (div u)~_i ΔV_i.     (5.3.30)

The mean value (div u)~_i in equation (5.3.30) is replaced by the identity of equation (5.3.31),

(div u)~_i = ( ∫_{Δa_i} u_i · da ) / ΔV_i,     (5.3.31)

∫_V div u dV = lim_{ΔV_i → 0} sum_i [ ( ∫_{Δa_i} u_i · da ) / ΔV_i ] ΔV_i = lim_{ΔV_i → 0} sum_i ∫_{Δa_i} u_i · da,

and finally, with the summation over all subvolumes like at the beginning of this section,

∫_V div u dV = ∫_A u · da = ∫_A u · n da.     (5.3.32)

Chapter 6

Exercises
6.1 Application of Matrix Calculus on Bars and Plane Trusses

6.1.1 A Simple Statically Determinate Plane Truss

A very simple truss formed by two bars is given like in figure (6.1), and loaded by an arbitrary force F in negative y-direction. The discrete values of the various quantities, like the Young's modulus E_i or the sectional area A_i, are of no further interest at the moment. Only the forces in direction of the bars are to be computed.

[Figure 6.1: A simple statically determinate plane truss: bar II (l, A_2, E_2) and bar I (l, A_1, E_1), both inclined under α = 45°, loaded by the force F at node 2.]

The equilibrium conditions of forces at the node 2, see also the free-body diagram (6.2), imply

S_1 = S_2,  and  S_1 + S_2 = F / cos α,     (6.1.3)

and finally

S_1 = S_2 = F / (2 cos α).     (6.1.4)

[Figure 6.2: The free-body diagram at node 2 with the bar forces S_1, S_2 and the load F.]

[Figure 6.3: The free-body diagrams at the nodes 1 and 3 with the reactive forces F_1x, F_1y and F_3x, F_3y.]

The equilibrium conditions of forces at the node 1 are given in horizontal direction by

ΣF_H = 0 = F_1x - S_1 cos α,     (6.1.5)
see also the left-hand side of the free-body diagram (6.3). The first one of these two relations yields with equation (6.1.4)

F_3x = -S_2 cos α = - (F cos α) / (2 cos α),  and finally  F_3x = - (1/2) F,     (6.1.11)

and the second one implies

F_3y = S_2 sin α = (F sin α) / (2 cos α),  and finally, with tan α = 1 for α = 45°,  F_3y = (1/2) F.     (6.1.12)

The stresses in normal direction of the bars and the nodal displacements could be computed with the relations described in one of the following sections, see also equations (6.1.17)-(6.1.19). The important thing to notice here is that there are overall 6 equations to solve in order to compute the 6 unknown quantities F_1x, F_1y, F_3x, F_3y, S_1, and S_2. This is characteristic for a statically determinate truss, resp. system; the other case is discussed in the following section.

6.1.2 A Simple Statically Indeterminate Plane Truss

[Figure 6.4: A simple statically indeterminate plane truss.]

[Figure 6.5: The free-body diagrams for the nodes 2 and 4, with the load F_x = 10 kN, the bar forces S_I, S_II, S_III, and the angles α_2, α_3.]

The truss given in the sketch in figure (6.4) is statically indeterminate, i.e. it is impossible to compute all the reactive forces just with the equilibrium conditions for the forces at the different nodes! In order to get enough equations for computing all unknown quantities, it is necessary to use additional equations, like the ones given in equations (6.1.17)-(6.1.19). For example the equilibrium condition of horizontal forces at node 4 is given by

ΣF_H = 0 = F_x - S_II cos α_2 - S_I cos α_1.     (6.1.13)

Finally, summarizing all possible and useful equations and all unknown quantities implies that there are overall 9 unknown quantities but only 8 equations! This result implies that it is necessary to take another way to solve this problem than using the equilibrium conditions of forces.

6.1.3 Basic Relations for Bars in a Local Coordinate System

An arbitrary bar with its local coordinate system x̃, ỹ is given in figure (6.6), with the nodal forces f_ix̃ and f_iỹ at a point i with i = 1, 2 in the local coordinate system. The following relations hold
in the local coordinate system x̃, ỹ of an arbitrary single bar, see figure (6.6),

stresses          σ_x̃ = S / A,     (6.1.17)
kinematics        ε_x̃ = du_x̃ / dx̃ = Δu_x̃ / l,     (6.1.18)
material law      σ_x̃ = E ε_x̃.     (6.1.19)

[Figure 6.6: An arbitrary bar and its local coordinate system x̃, ỹ, with the bar force S and the nodal forces f_1x̃, f_1ỹ, f_2x̃, f_2ỹ.]

In order to consider the additional relations given above, it is useful to combine the nodal displacements and the nodal forces for the two nodes of the bar in the local coordinate system. The nodal displacements in the local coordinate system x̃ are combined in the vector u, and the nodal forces in the vector f̃.

The relations between stresses and displacements, see equation (6.1.17), are given by

S = A σ_x̃ = (EA / l) Δu_x̃,     (6.1.23)

inserting equation (6.1.23) in (6.1.21) implies

f̃ = (EA / l) q qᵀ u,     (6.1.24)

and finally resumed as the symmetric local stiffness matrix,

f̃ = K̃ u,  with  K̃ = (EA / l) [  1  0 -1  0
                                  0  0  0  0
                                 -1  0  1  0
                                  0  0  0  0 ].     (6.1.25)

6.1.4 Basic Relations for Bars in a Global Coordinate System

An arbitrary bar with its local coordinate system x̃, ỹ and a global coordinate system x, y is given in figure (6.7). At a point i with i = 1, 2 the nodal forces f_ix̃, f_iỹ and the nodal displacements u_ix̃, u_iỹ are defined in the local coordinate system. It is also possible to define the nodal forces f_ix, f_iy and the nodal displacements v_ix, v_iy in the global coordinate system.

[Figure 6.7: An arbitrary bar with the local coordinate system x̃, ỹ, the global coordinate system x, y, the angle α between them, and the nodal quantities at the nodes 1 and 2.]

In order to combine more than one bar, all quantities are transformed into
one global coordinate system. This transformation of the local vector quantities, like the nodal load vector f̃ or the nodal displacement vector u, is given by a multiplication with the so-called transformation matrix Q*, which is an orthogonal matrix,

ã = Q* a,  with  Q* = [  cos α  sin α
                        -sin α  cos α ].     (6.1.26)

Because it is necessary to transform the local quantities for more than one node, the transformation matrix Q is composed of one submatrix Q* for every node,

Q = [ Q*  0
      0   Q* ] = [  cos α  sin α    0      0
                   -sin α  cos α    0      0
                     0      0     cos α  sin α
                     0      0    -sin α  cos α ].     (6.1.27)

The local quantities f̃ and u in equation (6.1.25) are replaced by the following expressions,

u = Q v,  with  v = [ v_1x  v_1y  v_2x  v_2y ]ᵀ,     (6.1.28)

f̃ = Q f,  with  f = [ f_1x  f_1y  f_2x  f_2y ]ᵀ.     (6.1.29)

Inserting these relations in equation (6.1.25) and multiplying with the inverse of the transformation matrix from the left-hand side yields in the global coordinate system

f = Q⁻¹ K̃ Q v = K v,     (6.1.30)

with the symmetric local stiffness matrix given in the global coordinate system by

K = (EA / l) [  cos²α         sin α cos α  -cos²α        -sin α cos α
                sin α cos α   sin²α        -sin α cos α  -sin²α
               -cos²α        -sin α cos α   cos²α         sin α cos α
               -sin α cos α  -sin²α         sin α cos α   sin²α       ].     (6.1.31)

6.1.5 Assembling the Global Stiffness Matrix

There are two different ways of assembling the global stiffness matrix. The first way considers the boundary conditions at the beginning; the second one considers the boundary conditions not until the complete global stiffness matrix for all nodes is assembled. In the first way the global stiffness matrix K is assembled by summarizing relations like in equations (6.1.28)-(6.1.31) for all used elements I-III, and all given boundary conditions, like the ones given by the supports at the nodes 1-3,

v_1x = v_2x = v_3x = 0,     (6.1.32)
v_1y = v_2y = v_3y = 0.     (6.1.33)

With these conditions the rigid body movement is eliminated from the system of equations, resp. the assembled global stiffness matrix; then only the unknown displacements at node 4 remain,

v_4x, and v_4y.     (6.1.34)

This implies that it is sufficient to determine the equilibrium conditions only at node 4. The result is that it is sufficient to consider only one submatrix K*_i for every element I-III. For example the complete equilibrium conditions for bar III are given by

[ f_3x  f_3y  f_4x  f_4y ]ᵀ_III = [ . . .   . . .
                                    . . .   K*_3  ] [ 0  0  v_4x  v_4y ]ᵀ_III.     (6.1.35)

Finally the equilibrium conditions at node 4 with summarizing the bars i = I-III are given by

sum_{i=1}^{3} [ f_4x^i  f_4y^i ]ᵀ = [ F_x  F_y ]ᵀ = ( sum_{i=1}^{3} K*_i ) [ v_4x  v_4y ]ᵀ,     (6.1.36)

and in matrix notation given by

P = K v,     (6.1.37)

with the so-called compatibility conditions at node 4 given by

v_4x^I = v_4x^II = v_4x^III,     (6.1.38)
v_4y^I = v_4y^II = v_4y^III.     (6.1.39)

By using this way of assembling the reduced global stiffness matrix, the boundary conditions are already implemented in every element stiffness matrix. The second way to assemble the reduced global stiffness matrix starts with the unreduced global stiffness matrix given by

sum_{i=1}^{3} [ f_1x^i  f_1y^i  f_2x^i  f_2y^i  f_3x^i  f_3y^i  f_4x^i  f_4y^i ]ᵀ
  = [ K_11   0     0    K_14
       0    K_22   0    K_24
       0     0    K_33  K_34
      K_41  K_42  K_43  K_44 ] [ v_1x  v_1y  v_2x  v_2y  v_3x  v_3y  v_4x  v_4y ]ᵀ
  = [ 0  0  0  0  0  0  F_x  F_y ]ᵀ,     (6.1.40)
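The transformation chain of equations (6.1.25)-(6.1.31), i.e. K = Q⁻¹ K̃ Q = Qᵀ K̃ Q for the orthogonal Q, can be verified numerically. A minimal NumPy sketch (the numeric values of EA, l and α are arbitrary illustrative choices):

```python
import numpy as np

def k_local(EA, l):
    """Local bar stiffness matrix of equation (6.1.25)."""
    return (EA / l) * np.array([[ 1.0, 0.0, -1.0, 0.0],
                                [ 0.0, 0.0,  0.0, 0.0],
                                [-1.0, 0.0,  1.0, 0.0],
                                [ 0.0, 0.0,  0.0, 0.0]])

def transformation(alpha):
    """Block-diagonal transformation matrix Q of equation (6.1.27)."""
    c, s = np.cos(alpha), np.sin(alpha)
    q_star = np.array([[c, s], [-s, c]])   # submatrix Q* of (6.1.26)
    Q = np.zeros((4, 4))
    Q[:2, :2] = q_star
    Q[2:, 2:] = q_star
    return Q

def k_global(EA, l, alpha):
    """K = Q^{-1} K_local Q = Q^T K_local Q, since Q is orthogonal (6.1.30)."""
    Q = transformation(alpha)
    return Q.T @ k_local(EA, l) @ Q

def k_global_closed(EA, l, alpha):
    """The closed form (6.1.31) for comparison."""
    c, s = np.cos(alpha), np.sin(alpha)
    return (EA / l) * np.array([[ c*c,  s*c, -c*c, -s*c],
                                [ s*c,  s*s, -s*c, -s*s],
                                [-c*c, -s*c,  c*c,  s*c],
                                [-s*c, -s*s,  s*c,  s*s]])

print(np.allclose(k_global(1.0, 1.0, np.radians(30.0)),
                  k_global_closed(1.0, 1.0, np.radians(30.0))))
```

The printed `True` confirms that the congruence transformation of the local stiffness matrix reproduces the closed form (6.1.31) entry by entry.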
and in matrix notation given by

K v = P.     (6.1.41)

Each element, resp. each bar, could be described by a combination i, j of the numbers of the nodes used in this special element. For example for the bar (element) III the submatrices K_33, K_34, K_43, and a part of K_44, like in equation (6.1.36), are implemented in the unreduced global stiffness matrix. After inserting the submatrices K_ij for the various elements and considering the boundary conditions given by equations (6.1.32)-(6.1.33), the reduced global stiffness matrix is given by

P = K v,     (6.1.42)

resp.

[ F_x  F_y ]ᵀ = ( EA (3 + √3) / (8 h) ) [ 1  1
                                          1  3 ] [ v_4x  v_4y ]ᵀ,     (6.1.43)

see also equations (6.1.36) and (6.1.37) of the first way. But this computed reduced global stiffness matrix is not yet the desired result, because the nodal displacements and forces are the desired quantities.

6.1.6 Computing the Displacements

The nodal displacements are computed by inverting the relation (6.1.43),

v = K⁻¹ P.     (6.1.44)

The inversion of a 2×2-matrix is trivial and given by

K⁻¹ = (1 / det K) K̂ = ( (8h)² / (2 (3 + √3)² (EA)²) ) · ( EA (3 + √3) / (8h) ) [  3  -1
                                                                                 -1   1 ],     (6.1.45)

and finally the inverse of the stiffness matrix is given by

K⁻¹ = ( 4h / (EA (3 + √3)) ) [  3  -1
                               -1   1 ].     (6.1.46)

The load vector P at the node 4, see figure (6.4), is given by

P = [ 10  2 ]ᵀ,     (6.1.47)

and by inserting equations (6.1.46) and (6.1.47) in relation (6.1.44), the nodal displacements at node 4 are given by

v = [ v_4x  v_4y ]ᵀ = ( 4h / (EA (3 + √3)) ) [ 28  -8 ]ᵀ.     (6.1.48)

6.1.7 Computing the Forces in the Bars

The forces in the various bars are computed by solving the relations given by the equations (6.1.28)-(6.1.31) for each element i, resp. for each bar,

f^i = K^i v^i.     (6.1.49)

For example for the bar III, with α = 90°, the symmetric local stiffness matrix and the nodal displacements are given by

K³ = (EA / h) [ 0  0  0  0
                0  1  0 -1
                0  0  0  0
                0 -1  0  1 ],   v³ = [ 0  0  v_4x  v_4y ]ᵀ,     (6.1.50)

with sin 90° = 1 and cos 90° = 0 in equation (6.1.31). The forces in the bars are given in the global coordinate system, see equation (6.1.30), by

f³ = K³ v³ = ( 4 / (3 + √3) ) [ 0  8  0  -8 ]ᵀ = [ 0  6.762  0  -6.762 ]ᵀ = [ f_3x  f_3y  f_4x  f_4y ]ᵀ_III,     (6.1.51)

and in the local coordinate system associated to the bar III, see equation (6.1.29), by

f̃³ = Q³ f³ = [ f_3x̃  f_3ỹ  f_4x̃  f_4ỹ ]ᵀ = [ f_3y  -f_3x  f_4y  -f_4x ]ᵀ = [ 6.762  0  -6.762  0 ]ᵀ.     (6.1.52)

Comparing this result with the relation (6.1.21) finally implies the force S_III in direction of the bar,

S_III = -f_3x̃ = f_4x̃ = -6.762 kN,     (6.1.53)

and for the bars I and II,

S_I = 8.56 kN,  and  S_II = 5.17 kN.     (6.1.54)

Comparing these results as a check with the equilibrium conditions given by the equations (6.1.13)-(6.1.14), in horizontal direction,

ΣF_H = 0 = F_x - S_II cos α_2 - S_I cos α_1 = 2.0 - 8.56 · (1/2) - 5.17 · (√3/2) + 6.762 · 1 = -4.7 · 10⁻³ ≈ 0,     (6.1.55)

and in vertical direction,

ΣF_V = 0 = F_y - S_III - S_II sin α_2 - S_I sin α_1 = 10.0 - 8.56 · (√3/2) - 5.17 · (1/2) + 6.762 · 0 = 1.8 · 10⁻³ ≈ 0.     (6.1.56)
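The computation (6.1.43)-(6.1.51) can be reproduced numerically. The sketch below assumes EA = h = 1; this is only a normalization — the displacements scale with h/EA, while the resulting bar forces do not depend on this choice:

```python
import numpy as np

EA, h = 1.0, 1.0   # assumed unit values (the forces are independent of them)

# Reduced global stiffness matrix, equation (6.1.43)
K = EA * (3.0 + np.sqrt(3.0)) / (8.0 * h) * np.array([[1.0, 1.0],
                                                      [1.0, 3.0]])
P = np.array([10.0, 2.0])            # load vector (6.1.47), in kN

v = np.linalg.solve(K, P)            # nodal displacements, equation (6.1.44)

# Closed form (6.1.48): v = 4h/(EA(3 + sqrt 3)) * (28, -8)
v_exact = 4.0 * h / (EA * (3.0 + np.sqrt(3.0))) * np.array([28.0, -8.0])
print(np.allclose(v, v_exact))

# Bar III (alpha = 90°): vertical nodal force at node 3, equation (6.1.51)
f3y = EA / h * (-v[1])               # = 32/(3 + sqrt 3) ≈ 6.762 kN
print(round(f3y, 3))
```

The script prints `True` and `6.762`, matching the value of S_III (up to sign and the local/global relabeling of (6.1.52)-(6.1.53)).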
6.1.8 The Principle of Virtual Work

This final section gives a small outlook on the use of the matrix calculus described above. The virtual work¹ for a bar under a constant line load, also called the weak form of equilibrium, is given by

δW = - ∫ N δε_x̃ dx̃ + ∫ p δu_x̃ dx̃ = 0.     (6.1.57)

The force in normal direction of a bar, see also equations (6.1.17)-(6.1.19), is given by

N = EA ε_x̃.     (6.1.58)

With this relation the equation (6.1.57) is rewritten,

δW = - ∫ ε_x̃ᵀ EA δε_x̃ dx̃ + ∫ p δu_x̃ dx̃ = 0.     (6.1.59)

The vectors given by the equations (6.1.22), i.e. the various quantities w.r.t. the local variable x̃ in equation (6.1.57), could be described by

displacement           u_x̃ = qᵀ û,     (6.1.60)
strain                 u,x̃ = ε_x̃ = q,x̃ᵀ û,     (6.1.61)
virtual displacement   δu_x̃ = qᵀ δû,     (6.1.62)
virtual strain         δu,x̃ = δε_x̃ = q,x̃ᵀ δû,     (6.1.63)

with the constant nodal values given by the vector û. The vectors q are the so-called shape functions, but in this very simple case they include only constant values, too. In general these shape functions depend on the position vector, in this case on the local variable x̃. These are some of the basic assumptions for finite elements. Inserting the relations given by (6.1.60)-(6.1.63) in equation (6.1.59), the virtual work in one element, resp. one bar, is given by

δW = - ∫ (q,x̃ᵀ û)ᵀ EA q,x̃ᵀ δû dx̃ + ∫ p qᵀ δû dx̃ = 0,     (6.1.64)
    = - ûᵀ [ ∫ q,x̃ EA q,x̃ᵀ dx̃ ] δû + [ ∫ p qᵀ dx̃ ] δû = 0.     (6.1.65)

These integrals describe just one element, resp. one bar, but if a summation over more elements is introduced like this,

sum_{i=1}^{3} δW^i = sum_{i=1}^{3} [ - (û^i)ᵀ ( ∫ q,x̃ EA q,x̃ᵀ dx̃ ) δû^i + p^i ( ∫ qᵀ dx̃ ) δû^i ] = 0,     (6.1.66)

it is easy to see that the left-hand side is very similar to the stiffness matrices, and that the right-hand side is very similar to the load vectors in equations (6.1.36) and (6.1.37), or (6.1.42) and (6.1.43). This simple example shows the close relations between the principle of virtual work, the finite element methods, and the matrix calculus described above.
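The element integral in (6.1.65) can be evaluated explicitly. Assuming the standard linear shape functions q = (1 - x̃/l, x̃/l)ᵀ — an assumption for illustration; the text only states that the derivatives q,x̃ are constant — the integral reproduces the familiar bar stiffness:

```python
import numpy as np

# Element stiffness from the virtual-work integral  k = ∫_0^l q,x̃ EA q,x̃ᵀ dx̃,
# assuming the standard linear shape functions q = (1 - x̃/l, x̃/l)ᵀ, so that
# the derivative q,x̃ = (-1/l, 1/l)ᵀ is constant along the bar.
EA, l = 2.0, 4.0                     # arbitrary illustrative values
dq = np.array([-1.0 / l, 1.0 / l])

# The integrand is constant, so the integral is the bar length l times the
# constant integrand EA * q,x̃ q,x̃ᵀ:
k = l * EA * np.outer(dq, dq)

print(k)                             # equals EA/l * [[1, -1], [-1, 1]]
```

This 2×2 matrix is exactly the nonzero block of the local stiffness matrix K̃ in (6.1.25), which illustrates the claimed correspondence between the virtual-work terms and the stiffness matrices.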
6.2 Calculating a Structure with the Eigenvalue Problem

6.2.1 The Problem

Establish the homogeneous system of equations for the structure of rigid bars, see sketch (6.8), in order to determine the critical load F_c = F_critical! Assume that the structure is geometrically linear, i.e. the angles of excursion φ are so small that cos φ = 1 and sin φ = 0 are good approximations. The given values are

k_1 = k,  k_2 = (1/2) k,  k = 10 kN/cm,  and  l = 200 cm.

[Figure 6.8: The given structure of rigid bars (EJ = ∞), loaded by F_critical, with the springs k_1, k_2, the excursions x_1, x_2, and the spans (3/2) l, 2 l, and (3/2) l.]

• Rewrite the system of equations so that the general eigenvalue problem for the critical load F_c is given by A x = F_c B x.
• Transform the general eigenvalue problem into a special eigenvalue problem.
• Calculate the eigenvalues, i.e. the critical loads F_c, and the associated eigenvectors.
• Check if the eigenvectors are orthogonal to each other.
• Transform the equation system in such a way that it is possible to compute the Rayleigh quotient. What quantity could be estimated with the Rayleigh quotient?

6.2.2 The Equilibrium Conditions after the Excursion

In a first step the relations between the variables x_1, x_2 and the reactive forces F_Ay and F_By in the supports are solved. For this purpose the equilibrium conditions of moments are established for two subsystems, see sketch (6.9). After that, in a second step, the equilibrium conditions for the whole system are established.

[Figure 6.9: The free-body diagrams of the subsystems left of node C and right of node D after the excursion, with the reactive forces F_Ax, F_Ay, F_By and the bar forces S_I, S_III.]

The moment equation w.r.t. the node D of the subsystem on the right-hand side of node D after the excursion implies

ΣM_D = 0 = F_By · (3/2) l + F_c · x_2  ⇒  F_By = - (2 F_c / (3 l)) x_2,     (6.2.1)

and the moment equation w.r.t. the node C of the subsystem on the left-hand side of node C after the excursion implies, with the following relation (6.2.3),

ΣM_C = 0 = F_Ay · (3/2) l + F_Ax · x_1  ⇒  F_Ay = - (2 F_Ax / (3 l)) x_1 = - (2 F_c / (3 l)) x_1.     (6.2.2)

At any time, for any possible excursion, and for any possible load F_c the complete system, cf. (6.10), must satisfy the equilibrium conditions. The equilibrium condition of forces in horizontal direction, cf. (6.10), after the excursion is given by

ΣF_H = 0 = F_Ax - F_c  ⇒  F_Ax = F_c.     (6.2.3)

The moment equation w.r.t. the node A for the complete system implies

ΣM_A = 0 = F_By · 5 l + k_2 x_2 · (7/2) l + k_1 x_1 · (3/2) l,     (6.2.4)

with (6.2.1) and k_2 = (1/2) k,

0 = - (2 F_c / (3 l)) x_2 · 5 l + (7/4) k x_2 l + (3/2) k x_1 l,
[Figure 6.10: The free-body diagram of the complete structure after the excursion.]

and finally

(3/2) k x_1 + (7/4) k x_2 = (10 F_c / (3 l)) x_2.     (6.2.5)

The equilibrium of forces in vertical direction implies

ΣF_V = 0 = F_Ay + F_By + k_2 x_2 + k_1 x_1,     (6.2.6)

with (6.2.1), (6.2.2), k_1 = k, and k_2 = (1/2) k,

0 = - (2 F_c / (3 l)) x_1 - (2 F_c / (3 l)) x_2 + (1/2) k x_2 + k x_1,

and finally

k x_1 + (1/2) k x_2 = (2 F_c / (3 l)) (x_1 + x_2).     (6.2.7)

The relations (6.2.5) and (6.2.7) are combined in a system of equations, given by

[ (3/2) k  (7/4) k    [ x_1          [    0       10/(3 l)    [ x_1
     k     (1/2) k ]    x_2 ] = F_c    2/(3 l)    2/(3 l)  ]    x_2 ],     (6.2.8)

or in matrix notation

A x = F_c B x.     (6.2.9)

This equation system is a general eigenvalue problem, with the F_ci being the eigenvalues and the x_i0 being the eigenvectors.

6.2.3 Transformation into a Special Eigenvalue Problem

The general eigenvalue problem (6.2.9) is multiplied with B⁻¹ from the left-hand side; after this multiplication both terms are rewritten on one side and the vector x is factored out,

0 = B⁻¹ A x - F_c 1 x,

and finally the special eigenvalue problem is given by

0 = (C - F_c 1) x,  with  C = B⁻¹ A.     (6.2.11)

The inverse of matrix B is assumed as

B⁻¹ = [ a  b
        c  d ],     (6.2.12)

and the following relations must hold,

B⁻¹ B = 1,  resp.  [ a  b    [    0       10/(3 l)     [ 1  0
                     c  d ]    2/(3 l)    2/(3 l)  ] =   0  1 ].     (6.2.13)

This simple inversion implies

b · 2/(3 l) = 1  ⇒  b = (3/2) l,     (6.2.14)
d · 2/(3 l) = 0  ⇒  d = 0,     (6.2.15)
(10/(3 l)) a + (2/(3 l)) b = 0  ⇒  a = - (1/5) b  ⇒  a = - (3/10) l,     (6.2.16)
(10/(3 l)) c + (2/(3 l)) d = 1  ⇒  c = (3/10) l,     (6.2.17)

and finally the inverse B⁻¹ is given by

B⁻¹ = [ - (3/10) l   (3/2) l
          (3/10) l      0    ].     (6.2.18)

The matrix C for the special eigenvalue problem is computed by the multiplication of the two 2×2-matrices B⁻¹ and A,

C = B⁻¹ A = [ - (3/10) l   (3/2) l    [ (3/2) k  (7/4) k     [ (21/20) kl   (9/40) kl
                (3/10) l      0    ]       k     (1/2) k ] =   (9/20) kl   (21/40) kl ],

and finally the matrix C for the special eigenvalue problem is given by

C = (3 kl / 40) [ 14  3
                   6  7 ].     (6.2.19)
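The reduction C = B⁻¹A and the resulting eigenvalues can be cross-checked with NumPy, using the given values k = 10 kN/cm and l = 200 cm:

```python
import numpy as np

k, l = 10.0, 200.0     # k in kN/cm, l in cm, as given in section 6.2.1

A = k * np.array([[1.5, 1.75],
                  [1.0, 0.5 ]])                      # left matrix of (6.2.8)
B = (1.0 / (3.0 * l)) * np.array([[0.0, 10.0],
                                  [2.0,  2.0]])      # right matrix of (6.2.8)

C = np.linalg.inv(B) @ A                             # C = B^{-1} A, (6.2.11)
print(np.allclose(C, 3.0 * k * l / 40.0 * np.array([[14.0, 3.0],
                                                    [6.0,  7.0]])))

Fc = np.sort(np.linalg.eigvals(C).real)
print(Fc)                                            # [ 750. 2400.] kN, cf. (6.2.24)
```

The eigenvalues 750 kN and 2400 kN agree with the critical loads F_c2 = (3/8)kl and F_c1 = (6/5)kl computed by hand in the next subsection.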
6.2.4 Solving the Special Eigenvalue Problem

In order to solve the special eigenvalue problem, the characteristic equation is set up by computing the determinant of equation (6.2.11),

det (C - F_c 1) = 0,     (6.2.20)

resp. in complete notation,

det [ (21/20) kl - F_c    (9/40) kl
       (9/20) kl          (21/40) kl - F_c ] = 0.     (6.2.21)

Computing the determinant yields

( (21/20) kl - F_c ) ( (21/40) kl - F_c ) - (81/800) k² l² = 0,

(441/800) k² l² - (21/20) kl F_c - (21/40) kl F_c + F_c² - (81/800) k² l² = 0,

and finally implies the quadratic equation

(360/800) k² l² - (63/40) kl F_c + F_c² = 0.     (6.2.22)

Solving this simple quadratic equation is no problem,

F_c1/2 = (63/80) kl ± sqrt( ( (63/40)² / 4 ) k² l² - (360/800) k² l² ),

F_c1/2 = (63/80) kl ± sqrt(1089/6400) kl,

F_c1/2 = (63/80) kl ± (33/80) kl,     (6.2.23)

and finally implies the two real eigenvalues,

F_c1 = (6/5) kl = 2400 kN,  and  F_c2 = (3/8) kl = 750 kN.     (6.2.24)

The eigenvectors x_10, x_20 are computed by inserting the eigenvalues F_c1 and F_c2 in equation (6.2.11), given by

(C - F_ci 1) x_i0 = 0,     (6.2.25)

and in complete notation by

[ C_11 - F_ci    C_12          [ x_i01     [ 0
  C_21           C_22 - F_ci ]   x_i02 ] =   0 ].     (6.2.26)

It is possible to choose for each eigenvalue F_ci the first component of the associated eigenvector like this,

x_i01 = 1,     (6.2.27)

and after this to compute the second components of the eigenvectors,

x_i02 = - ( (C_11 - F_ci) / C_12 ) x_i01,  resp.  x_i02 = - (C_11 - F_ci) / C_12,     (6.2.28)

or

x_i02 = - ( C_21 / (C_22 - F_ci) ) x_i01,  resp.  x_i02 = - C_21 / (C_22 - F_ci).     (6.2.29)

Inserting the first eigenvalue F_c1 in equation (6.2.28) implies the second component x_102 of the first eigenvector,

x_102 = - ( (21/20) kl - (6/5) kl ) / ( (9/40) kl )  ⇒  x_102 = 2/3,     (6.2.30)

and for the second eigenvalue F_c2 the second component x_202 of the second eigenvector is given by

x_202 = - ( (21/20) kl - (3/8) kl ) / ( (9/40) kl )  ⇒  x_202 = -3.     (6.2.31)

With these results the eigenvectors are finally given by

x_10 = [ 1  2/3 ]ᵀ,  and  x_20 = [ 1  -3 ]ᵀ.     (6.2.32)

6.2.5 Orthogonal Vectors

It is sufficient to compute the scalar product of two arbitrary vectors in order to check if these two vectors are orthogonal to each other, i.e.

x_1 ⊥ x_2,  resp.  x_1 · x_2 = 0.     (6.2.33)

In this special case the scalar product of the two eigenvectors is given by

x_10 · x_20 = [ 1  2/3 ]ᵀ · [ 1  -3 ]ᵀ = 1 - 2 = -1 ≠ 0,     (6.2.34)

i.e. the eigenvectors are not orthogonal to each other. The eigenvectors for different eigenvalues are only orthogonal if the matrix C of the special eigenvalue problem is symmetric². If the matrix of a special eigenvalue problem is symmetric, all eigenvalues are real and all eigenvectors are orthogonal. In this case all eigenvalues F_c1, F_c2 are real, but the matrix C is not symmetric, and for that reason the eigenvectors x_10, x_20 are not orthogonal.

² See script, section about matrix eigenvalue problems.
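The eigenvector components (6.2.30)-(6.2.32) and the non-orthogonality (6.2.34) can be reproduced in a few lines:

```python
import numpy as np

k, l = 10.0, 200.0
C = 3.0 * k * l / 40.0 * np.array([[14.0, 3.0],
                                   [6.0,  7.0]])     # matrix (6.2.19)
Fc1 = 6.0 / 5.0 * k * l                              # = 2400 kN
Fc2 = 3.0 / 8.0 * k * l                              # =  750 kN

# First components fixed to 1 (6.2.27); second components from (6.2.28)
x10 = np.array([1.0, -(C[0, 0] - Fc1) / C[0, 1]])    # -> (1,  2/3)
x20 = np.array([1.0, -(C[0, 0] - Fc2) / C[0, 1]])    # -> (1, -3)

print(x10, x20)
print(x10 @ x20)    # -1 ≠ 0: not orthogonal, since C is not symmetric
```

The scalar product -1 confirms (6.2.34): because C is non-symmetric, eigenvectors of distinct eigenvalues need not be orthogonal.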
6.2.6 Transformation

The special eigenvalue problem (6.2.11) includes a non-symmetric matrix C. In order to determine the Rayleigh quotient it is necessary to have a symmetric matrix in the special eigenvalue problem³,

(C - F_c 1) x = 0,     (6.2.35)

and in detail

{ [ (21/20) kl   (9/40) kl          [ 1  0      [ x_1     [ 0
    (9/20) kl   (21/40) kl ] - F_c    0  1 ] }    x_2 ] =   0 ].     (6.2.36)

The first step is to transform the matrix C into a symmetric matrix. Because the matrices are so simple, it is easy to see that if the second column of the matrix is multiplied with 2, the matrix becomes symmetric,

{ [ (21/20) kl   (9/20) kl          [ 1  0      [ x_1     [ 0
    (9/20) kl   (21/20) kl ] - F_c    0  2 ] }    x_2 ] =   0 ],     (6.2.37)

and in matrix notation, with the newly defined matrices D and E², and a new vector q,

(D - F_c E²) q = 0.     (6.2.38)

Because the matrix E² = Eᵀ E is a diagonal and symmetric matrix, the matrices E and Eᵀ are diagonal and symmetric matrices, too,

E² = [ 1  0       ⇒   E = Eᵀ = [ 1   0
       0  2 ]                    0  √2 ].     (6.2.39)

Because the matrix E is a diagonal and symmetric matrix, the inverse E⁻¹ is a diagonal and symmetric matrix, too,

E⁻¹ = [ a  b       ⇒   E⁻¹ E = [ a  b    [ 1   0      [ 1  0
        c  d ]                   c  d ]    0  √2 ] =    0  1 ] = 1,     (6.2.40)

and this finally implies

E⁻¹ = [ 1    0
        0  √2/2 ],  and  (E⁻¹)ᵀ = E⁻¹.     (6.2.41)

The equation (6.2.38) is again a general eigenvalue problem, but now with a symmetric matrix D. But in order to compute the Rayleigh quotient, it is necessary to set up a special eigenvalue problem again. In the next step the identity 1 = E⁻¹ E is inserted in equation (6.2.38),

(D - F_c E²) E⁻¹ E q = 0,     (6.2.42)

then the whole equation is multiplied with E⁻¹ from the left-hand side,

( E⁻¹ D E⁻¹ - F_c E⁻¹ Eᵀ E E⁻¹ ) E q = 0,     (6.2.43)

and with the relation E = Eᵀ for symmetric matrices,

( E⁻¹ D E⁻¹ - F_c 1 1 ) E q = 0.     (6.2.44)

With relation (6.2.41) the first term in equation (6.2.44) describes a congruence transformation, so that the matrix F stays symmetric⁴,

F = E⁻¹ D E⁻¹ = (E⁻¹)ᵀ D E⁻¹,     (6.2.45)

and the matrix F is given by

F = [ 1    0       [ (21/20) kl   (9/20) kl     [ 1    0         [ (21/20) kl    (9√2/40) kl
      0  √2/2 ]      (9/20) kl   (21/20) kl ]     0  √2/2 ]  =     (9√2/40) kl   (21/40) kl  ].     (6.2.46)

Furthermore a new vector p is defined by

p = E q = [ 1   0     [ x_1       ⇒   p = [  x_1
            0  √2 ]     x_2 ]               √2 x_2 ].     (6.2.47)

Finally combining these results implies a special eigenvalue problem with a symmetric matrix F and a vector p,

(F - F_c 1) p = 0.     (6.2.48)

Computing the characteristic equation like in equations (6.2.20) and (6.2.21) yields

det (F - F_c 1) = 0,     (6.2.49)

resp. in complete notation,

det [ (21/20) kl - F_c    (9√2/40) kl
       (9√2/40) kl        (21/40) kl - F_c ] = 0,     (6.2.50)

and this finally implies the same characteristic equation as in (6.2.22),

( (21/20) kl - F_c ) ( (21/40) kl - F_c ) - (81/800) k² l² = 0.     (6.2.51)

Having the same characteristic equation implies that this problem has the same eigenvalues, i.e. it is the same eigenvalue problem, just in another notation. With this symmetric eigenvalue problem it is possible to compute the Rayleigh quotient,

Λ_1 = R[p_ν] = (p_νᵀ p_{ν+1}) / (p_νᵀ p_ν) = (p_νᵀ F p_ν) / (p_νᵀ p_ν),  with  Λ_1 ≤ F_c1,     (6.2.52)

with an approximated vector p_ν. The Rayleigh quotient Λ_1 is a good approximation of a lower bound for the dominant eigenvalue.

³ See script, section about matrix eigenvalue problems.
⁴ See script, section about the characteristics of congruence transformations.
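A sketch of how the Rayleigh quotient (6.2.52) approximates the dominant eigenvalue, using the symmetric matrix F from (6.2.46) and the power iteration p_{ν+1} = F p_ν. The start vector and the number of iterations are arbitrary choices:

```python
import numpy as np

k, l = 10.0, 200.0
f12 = 9.0 * np.sqrt(2.0) / 40.0 * k * l
F = np.array([[21.0 / 20.0 * k * l, f12],
              [f12, 21.0 / 40.0 * k * l]])   # symmetric matrix (6.2.46)

p = np.array([1.0, 1.0])    # arbitrary start vector
for _ in range(15):         # power iteration p_{ν+1} = F p_ν
    p = F @ p
    p /= np.linalg.norm(p)  # normalize to avoid overflow

rayleigh = (p @ F @ p) / (p @ p)   # Rayleigh quotient (6.2.52)
print(rayleigh)             # approaches the dominant eigenvalue Fc1 = 2400 kN
```

For a symmetric F the Rayleigh quotient is always a lower bound of the largest eigenvalue, Λ_1 ≤ F_c1, and the iteration drives it towards F_c1 = 2400 kN.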
182 Chapter 6. Exercises 6.3. Fundamentals of Tensors in Index Notation 183
Like in the lectures a tensor could be described by a coef£cient matrix f ij , and a basis given by because a matrix with exchanged columns and rows is the transpose of the matrix. As a short
\varphi_{ij}. First do not look at the basis, just look at the coefficient matrices. In this exercise some of the most important rules for dealing with the coefficient matrices are recapitulated. The Einstein summation convention implies for the coefficient matrix A_i^{.j} and an arbitrary basis \varphi^i_{.j}

A_i^{.j} \varphi^i_{.j} = \sum_{i=1}^{3} \sum_{j=1}^{3} A_i^{.j} \varphi^i_{.j},

with the coefficient matrix A given in matrix notation by

A = [A_i^{.j}] = \begin{bmatrix} A_1^{.1} & A_1^{.2} & A_1^{.3} \\ A_2^{.1} & A_2^{.2} & A_2^{.3} \\ A_3^{.1} & A_3^{.2} & A_3^{.3} \end{bmatrix}.

The dot in the superscript index of the expression A_i^{.j} shows which one of the indices is the first index, and which one is the second index. This dot represents an empty space, so in this case it is easy to see that the subscript index i is the first index, and the superscript index j is the second index, i.e. the index i is the row index, and the index j is the column index of the coefficient matrix! This is important to know for the multiplication of coefficient matrices. For example, what is the difference between the following products of coefficient matrices,

A_{ij} B^{jk} = ? , \quad \text{and} \quad A_{ij} B^{kj} = ? .

The left-hand side of figure (6.11) sketches the first product, and the right-hand side the second product.

[Figure 6.11: Matrix multiplication.]

This implies the following important relations,

A_{ij} B^{jk} = C_i^{.k} \quad\Leftrightarrow\quad [A_{ij}][B^{jk}] = [C_i^{.k}] \quad\Leftrightarrow\quad A B = C,
A_{ij} B^{kj} = A_{ij} (B^{jk})^T = D_i^{.k} \quad\Leftrightarrow\quad A B^T = D.

The product of a square matrix and a column matrix, resp. a (column) vector, is given by

A u = v \quad\Leftrightarrow\quad \begin{bmatrix} A_{11} & A_{12} & A_{13} \\ A_{21} & A_{22} & A_{23} \\ A_{31} & A_{32} & A_{33} \end{bmatrix} \begin{bmatrix} u_1 \\ u_2 \\ u_3 \end{bmatrix} = \begin{bmatrix} A_{11} u_1 + A_{12} u_2 + A_{13} u_3 \\ A_{21} u_1 + A_{22} u_2 + A_{23} u_3 \\ A_{31} u_1 + A_{32} u_2 + A_{33} u_3 \end{bmatrix} \quad\Leftrightarrow\quad A_{ij} u_j = v_i .

For example, some products of coefficient matrices in index and matrix notation,

A_{ij} B^{kj} C_{kl} = A_{ij} (B^{jk})^T C_{kl} = D_{il} \quad\Leftrightarrow\quad A B^T C = D,
A_i^{.j} B_{kj} C_l^{.k} D^{ml} = E_i^{.m} \quad\Leftrightarrow\quad A B^T C^T D^T = E,
A_{ij} B^{kj} u_k = A_{ij} (B^{jk})^T u_k = v_i \quad\Leftrightarrow\quad A B^T u = v,
u_i B^{ij} u_j = \alpha \quad\Leftrightarrow\quad u^T B u = \alpha .

Furthermore it is important to notice that the dummy indices could be renamed arbitrarily,

A_{ij} B^{kj} = A_{im} B^{km}, \quad \text{or} \quad A^{kl} v_l = A^{kj} v_j , \quad \text{etc.}

6.3.2 The Kronecker Delta and the Trace of a Matrix

The Kronecker delta is defined by

\delta^i_j = \delta_j^i = \delta^{ij} = \delta_{ij} = \begin{cases} 1, & \text{iff } i = j \\ 0, & \text{iff } i \neq j \end{cases}.

The Kronecker deltas \delta_{ij} and \delta^{ij} are only defined in a Cartesian basis, where they represent the metric coefficients. The other ones are defined in every basis, and in order to use the summation convention, it is useful to prefer this notation with super- and subscript indices. Because the Kronecker delta is the symmetric identity matrix, it is not necessary to differentiate the column and row indices in index notation. As a rule of thumb, multiplication with a Kronecker delta substitutes an index in the same position,

v_k \delta^k_j = v_j \quad\Leftrightarrow\quad v I = v,
A_i^{.k} \delta_k^j = A_i^{.j} \quad\Leftrightarrow\quad A I = A,
A_{im} \delta^i_s \delta^m_k = A_{sk} \quad\Leftrightarrow\quad I A I = A.

But what is described by

A_k^{.l} \delta^i_l \delta^k_i = A_i^{.i} ?
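The two products above can be checked numerically with `numpy.einsum`, which mirrors the index notation directly. The coefficient matrices below are made-up numbers, not taken from the text; only the contraction rules matter.

```python
import numpy as np

# Hypothetical numeric coefficient matrices to check the rules
# A_ij B^jk = (A B)_i^k   versus   A_ij B^kj = (A B^T)_i^k.
A = np.array([[1., 2., 0.],
              [0., 3., 1.],
              [2., 0., 1.]])
B = np.array([[2., 1., 0.],
              [1., 0., 3.],
              [0., 2., 1.]])
u = np.array([1., 2., 3.])

C = np.einsum('ij,jk->ik', A, B)   # A_ij B^jk -> ordinary product A B
D = np.einsum('ij,kj->ik', A, B)   # A_ij B^kj -> product A B^T
v = np.einsum('ij,j->i', A, u)     # A_ij u_j = v_i
alpha = np.einsum('i,ij,j->', u, B, u)   # u_i B^ij u_j = u^T B u

assert np.allclose(C, A @ B)
assert np.allclose(D, A @ B.T)
assert np.allclose(v, A @ u)
assert np.isclose(alpha, u @ B @ u)

# Renaming a dummy index changes nothing:
assert np.allclose(np.einsum('ij,kj->ik', A, B),
                   np.einsum('im,km->ik', A, B))
```

The `einsum` subscripts read exactly like the index expressions: a repeated letter is a summed (dummy) index, so renaming it is irrelevant, while swapping `jk` for `kj` turns the ordinary product into the product with the transpose.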
TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003
184 Chapter 6. Exercises 6.3. Fundamentals of Tensors in Index Notation 185
This is the sum over all main diagonal elements, or just called the trace of the matrix,

\operatorname{tr} A = A_{11} + A_{22} + A_{33} = \operatorname{tr} [A_i^{.j}],

and because the trace is a scalar quantity, it is independent of the basis, i.e. it is an invariant,

\operatorname{tr} A = \operatorname{tr} [A_i^{.j}] = \operatorname{tr} [\tilde{A}_i^{.j}], \quad \text{i.e.} \quad A_i^{.i} = \tilde{A}_i^{.i}.

For example, in the 2-dimensional vector space \mathbb{E}^2 the Kronecker delta is defined by g_i \cdot g^k = \delta_i^k, so for an arbitrary co- and contravariant basis this is given by

g_1 \cdot g^2 = 0 \quad\Leftrightarrow\quad g_1 \perp g^2,
g_2 \cdot g^1 = 0 \quad\Leftrightarrow\quad g_2 \perp g^1,
g_1 \cdot g^1 = 1,
g_2 \cdot g^2 = 1.

6.3.3 Raising and Lowering of an Index

If the vectors g_i and g^k are in the same space \mathbb{V}, it must be possible to describe g^k by a product of g_i and some coefficient like A^{km}. Something similar for the coefficients of a vector x is given by

x = x^i g_i = x_i g^i \quad\Leftrightarrow\quad x^i g_{ij} g^j = x_i g^i \quad\Leftrightarrow\quad x^i g_{ij} = x_j .

The

g_{ki} = g_{ik} \quad \text{are called the covariant metric coefficients,}

and the

g^{ki} = g^{ik} \quad \text{are called the contravariant metric coefficients.}

Finally this implies the same for the base vectors, and for the coefficients or coordinates of vectors and tensors, too: raising an index with the contravariant metric coefficients is given by, e.g., g^i = g^{ik} g_k. The determinants of the co- and contravariant metric coefficients are defined by

\det [g_{ik}] = g, \quad \text{and} \quad \det [g^{ik}] = \frac{1}{g}.

6.3.4 Permutation Symbols

The cross products of the Cartesian base vectors e_i in the 3-dimensional Euclidean vector space \mathbb{E}^3 are given by

e_1 \times e_2 = e_3 = e^3, \quad e_2 \times e_3 = e_1 = e^1, \quad \text{and} \quad e_3 \times e_1 = e_2 = e^2,
e_2 \times e_1 = -e_3 = -e^3, \quad e_3 \times e_2 = -e_1 = -e^1, \quad \text{and} \quad e_1 \times e_3 = -e_2 = -e^2.
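The metric relations above can be sketched numerically: pick a non-orthogonal basis (the numbers below are made up), build g_{ik}, invert it, and raise the index to obtain the dual basis.

```python
import numpy as np

# A sketch with an arbitrary (non-orthogonal) basis g_i of R^3.
g = np.array([[1., 0., 0.],
              [1., 2., 0.],
              [0., 1., 3.]])           # rows are the covariant base vectors g_i

g_cov = g @ g.T                        # g_ik = g_i . g_k
g_con = np.linalg.inv(g_cov)           # g^ik = [g_ik]^{-1}
g_dual = g_con @ g                     # g^i = g^ik g_k (rows)

# g_i . g^k = delta_i^k, and det[g_ik] * det[g^ik] = 1
assert np.allclose(g @ g_dual.T, np.eye(3))
assert np.isclose(np.linalg.det(g_cov) * np.linalg.det(g_con), 1.0)

# Lowering a coefficient: x^i g_ij = x_j, and raising it back recovers x^i
x_up = np.array([2., -1., 0.5])
x_dn = x_up @ g_cov
assert np.allclose(x_dn @ g_con, x_up)
```

The check `g @ g_dual.T == I` is exactly the defining property g_i · g^k = δ_i^k of the dual basis.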
(a \cdot b)\, c - (a \cdot c)\, b = v

4. Combine the base vectors of a general basis and simplify the expressions in index notation.
6.4 Various Products of Second Order Tensors
6.4.2 The Tensor Product of Two Second Order Tensors

The tensor product of two second order tensors is computed by the scalar product of the two inner base vectors and the dyadic product of the two outer base vectors. For example, a tensor product is given by

R = T S = \left( T^{ij}\, g_i \otimes g_j \right) \left( S^{kl}\, g_k \otimes g_l \right)
        = T^{ij} S^{kl} \left( g_j \cdot g_k \right) \left( g_i \otimes g_l \right)
        = T^{ij} S^{kl} g_{jk}\, g_i \otimes g_l
        = T^{ij} S_j^{.l}\, g_i \otimes g_l ,

R = T S = R^{il}\, g_i \otimes g_l , \quad \text{with} \quad R^{il} = T^{ij} S_j^{.l} .

6.4.3 The Scalar Product of Two Second Order Tensors

The scalar product of two second order tensors is computed by the scalar product of the first base vectors of the two tensors and the scalar product of the two second base vectors of the tensors, too. For example, a scalar product is given by

\alpha = T : S = \left( T^{ij}\, g_i \otimes g_j \right) : \left( S^{kl}\, g_k \otimes g_l \right)
       = T^{ij} S^{kl} \left( g_i \cdot g_k \right) \left( g_j \cdot g_l \right)
       = T^{ij} S^{kl} g_{ik} g_{jl} ,

\alpha = T : S = T^{ij} S_{ij} .

2. Compute the scalar products.

(a) T : S = (T_{ij}\, g^i \otimes g^j) : (S_{kl}\, g^k \otimes g^l) =
(b) T : S = (T_{ij}\, g^i \otimes g^j) : (S^{kl}\, g_k \otimes g_l) =
(c) T : S = (T_i^{.j}\, g^i \otimes g_j) : (S_k^{.l}\, g^k \otimes g_l) =
(d) T : 1 = (T_{ij}\, g^i \otimes g^j) : (\delta_k^l\, g^k \otimes g_l) =
(e) 1 : 1 = (\delta^i_j\, g_i \otimes g^j) : (\delta^k_l\, g_k \otimes g^l) =
(f) T : g = (T_{ij}\, g^i \otimes g^j) : (g_{kl}\, g^k \otimes g^l) =
(g) T : T = (T_{ij}\, g^i \otimes g^j) : (T_{kl}\, g^k \otimes g^l) =
(h) T : T^T = (T_{ij}\, g^i \otimes g^j) : (T_{kl}\, g^k \otimes g^l)^T =

3. Compute the various products.

(a) (T S) v = (T_i^{.j}\, g^i \otimes g_j)(S_k^{.l}\, g^k \otimes g_l)(v_m\, g^m) =
(b) (T : S) v = (T_i^{.j}\, g^i \otimes g_j) : (S_k^{.l}\, g^k \otimes g_l)\,(v_m\, g^m) =
(c) \operatorname{tr}(T T^T) = (\delta^i_j\, g_i \otimes g^j) : \left[ (T_{kl}\, g^k \otimes g^l)(T_{mn}\, g^m \otimes g^n)^T \right] =
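A minimal sketch of both products, with made-up random coefficients and a made-up non-orthogonal basis: the tensor product R^{il} = T^{ij} g_{jk} S^{kl} must agree with the ordinary matrix product of the Cartesian components, and the scalar product T : S with the Frobenius inner product.

```python
import numpy as np

rng = np.random.default_rng(0)
g = rng.normal(size=(3, 3))            # rows: covariant base vectors g_i (made up)
g_cov = g @ g.T                        # g_ik
T_up = rng.normal(size=(3, 3))         # contravariant coefficients T^ij
S_up = rng.normal(size=(3, 3))         # contravariant coefficients S^kl

# Cartesian components of T = T^ij g_i (x) g_j and S = S^kl g_k (x) g_l
T_cart = np.einsum('ij,ia,jb->ab', T_up, g, g)
S_cart = np.einsum('kl,ka,lb->ab', S_up, g, g)

# Tensor product: R^il = T^ij g_jk S^kl, i.e. R = T S
R_up = np.einsum('ij,jk,kl->il', T_up, g_cov, S_up)
R_cart = np.einsum('il,ia,lb->ab', R_up, g, g)
assert np.allclose(R_cart, T_cart @ S_cart)

# Scalar product: T : S = T^ij S^kl g_ik g_jl = T^ij S_ij
alpha = np.einsum('ij,kl,ik,jl->', T_up, S_up, g_cov, g_cov)
assert np.isclose(alpha, np.sum(T_cart * S_cart))
```

Both checks express basis invariance: contracting the curvilinear coefficients with the metric gives the same numbers as working directly with the Cartesian component matrices.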
6.4.5 Solutions

1. Compute the tensor products.

(a) T S = (T_{ij}\, g^i \otimes g^j)(S_{kl}\, g^k \otimes g^l) = T_{ij} S_{kl}\, g^{jk}\, g^i \otimes g^l = T_{ij} S^j_{.l}\, g^i \otimes g^l = R_{il}\, g^i \otimes g^l = R

(b) T S = (T_{ij}\, g^i \otimes g^j)(S^{kl}\, g_k \otimes g_l) = T_{ij} S^{kl} \delta^j_k\, g^i \otimes g_l = T_{ij} S^{jl}\, g^i \otimes g_l = R_i^{.l}\, g^i \otimes g_l = R

(c) T S = (T_i^{.j}\, g^i \otimes g_j)(S_k^{.l}\, g^k \otimes g_l) = T_i^{.j} S_k^{.l} \delta_j^k\, g^i \otimes g_l = T_i^{.j} S_j^{.l}\, g^i \otimes g_l = R_i^{.l}\, g^i \otimes g_l = R

(d) T S = (T_{ij}\, g^i \otimes g^j)(S_k^{.l}\, g^k \otimes g_l) = T_{ij} S_k^{.l}\, g^{jk}\, g^i \otimes g_l = T_{ij} S^{jl}\, g^i \otimes g_l = R_i^{.l}\, g^i \otimes g_l = R

(e) T 1 = (T_{ij}\, g^i \otimes g^j)(\delta_k^l\, g^k \otimes g_l) = T_{ij} \delta_k^l\, g^{jk}\, g^i \otimes g_l = T_{ij}\, g^{jl}\, g^i \otimes g_l = T_{ij}\, g^i \otimes g^j = T

(f) 1 1 = (\delta^i_j\, g_i \otimes g^j)(\delta^k_l\, g_k \otimes g^l) = \delta^i_j \delta^k_l \delta^j_k\, g_i \otimes g^l = \delta^i_l\, g_i \otimes g^l = \delta^i_j\, g_i \otimes g^j = 1

(g) T g = (T_{ij}\, g^i \otimes g^j)(g_{kl}\, g^k \otimes g^l) = T_{ij} g_{kl}\, g^{jk}\, g^i \otimes g^l = T_{ij} \delta^j_l\, g^i \otimes g^l = T_{ij}\, g^i \otimes g^j = T

(h) T T = (T_{ij}\, g^i \otimes g^j)(T_{kl}\, g^k \otimes g^l) = T_{ij} T_{kl}\, g^{jk}\, g^i \otimes g^l = T_{ij} T^j_{.l}\, g^i \otimes g^l = T^2

(i) T T^T = (T_{ij}\, g^i \otimes g^j)(T_{kl}\, g^k \otimes g^l)^T = (T_{ij}\, g^i \otimes g^j)(T_{kl}\, g^l \otimes g^k) = T_{ij} T_{kl}\, g^{jl}\, g^i \otimes g^k = T_{ij} T_k^{.j}\, g^i \otimes g^k = T_{ij} T_l^{.j}\, g^i \otimes g^l , or

T T^T = (T_{ij}\, g^i \otimes g^j)(T_{lk}\, g^k \otimes g^l) = T_{ij} T_{lk}\, g^{jk}\, g^i \otimes g^l = T_{ij} T_l^{.j}\, g^i \otimes g^l

3. Compute the various products.

(b) (T : S) v = (T_i^{.j}\, g^i \otimes g_j) : (S_k^{.l}\, g^k \otimes g_l)\,(v_m\, g^m) = T_i^{.j} S_k^{.l}\, g^{ik} g_{jl}\, v_m\, g^m = T^{kj} S_{kj}\, v_m\, g^m = \alpha v = w

(c) \operatorname{tr}(T T^T) = (\delta^i_j\, g_i \otimes g^j) : \left[ (T_{kl}\, g^k \otimes g^l)(T_{mn}\, g^m \otimes g^n)^T \right] = (\delta^i_j\, g_i \otimes g^j) : (T_{kl} T_m^{.l}\, g^k \otimes g^m) = \delta^i_j T_{kl} T_m^{.l}\, \delta_i^k g^{jm} = T_{jl} T_m^{.l}\, g^{jm} = T^m_{.l} T_m^{.l} = T : T
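Solutions 1(h) and 3(c) can be spot-checked numerically. The basis and coefficients below are random made-up numbers; the point is that the index gymnastics reproduce the ordinary matrix operations on the Cartesian components.

```python
import numpy as np

rng = np.random.default_rng(1)
g = rng.normal(size=(3, 3))                 # rows: covariant base vectors g_i (made up)
g_cov = g @ g.T                             # g_ik
g_dual = np.linalg.inv(g_cov) @ g           # contravariant base vectors g^i
T_dn = rng.normal(size=(3, 3))              # covariant coefficients T_ij

# Cartesian components of T = T_ij g^i (x) g^j
T_cart = np.einsum('ij,ia,jb->ab', T_dn, g_dual, g_dual)

# Solution 1(h): T T = T_ij T^j_.l g^i (x) g^l
T_mixed = np.linalg.inv(g_cov) @ T_dn       # T^j_.l = g^{ji} T_il
TT_coeff = T_dn @ T_mixed                   # T_ij T^j_.l
TT_cart = np.einsum('il,ia,lb->ab', TT_coeff, g_dual, g_dual)
assert np.allclose(TT_cart, T_cart @ T_cart)

# Solution 3(c): tr(T T^T) = T : T
assert np.isclose(np.trace(T_cart @ T_cart.T), np.sum(T_cart * T_cart))
```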
6.5 Deformation Mappings
F_X := \operatorname{Grad}_X \varphi = g_i \otimes G^i , \quad (6.5.1)

the local geometry gradient K_\Theta is given by

K_\Theta := \operatorname{GRAD}_\Theta \tilde{\psi} = G_i \otimes Z^i , \quad (6.5.2)

and the local deformation gradient F_\Theta is given by

F_\Theta := \operatorname{GRAD}_\Theta \tilde{\varphi} = g_i \otimes Z^i . \quad (6.5.3)

The transposes and inverses of the various tangent mappings are given by

F_X = g_i \otimes G^i , \quad F_X^T = G^i \otimes g_i , \quad F_X^{-1} = G_i \otimes g^i , \quad F_X^{-T} = g^i \otimes G_i , \quad (6.5.4)
K_\Theta = G_i \otimes Z^i , \quad K_\Theta^T = Z^i \otimes G_i , \quad K_\Theta^{-1} = Z_i \otimes G^i , \quad K_\Theta^{-T} = G^i \otimes Z_i , \quad (6.5.5)
F_\Theta = g_i \otimes Z^i , \quad F_\Theta^T = Z^i \otimes g_i , \quad F_\Theta^{-1} = Z_i \otimes g^i , \quad F_\Theta^{-T} = g^i \otimes Z_i . \quad (6.5.6)

The identity tensors are introduced separately for the various coordinate systems by

identity tensor of the parameter space:  1_\Theta := Z_i \otimes Z^i , \quad (6.5.7)
identity tensor of the undeformed space: 1_X := G_i \otimes G^i , \quad (6.5.8)
identity tensor of the deformed space:   1_x := g_i \otimes g^i . \quad (6.5.9)

The various metric tensors of the different tangent spaces are introduced by

local metric tensor of the undeformed body:    M_\Theta = K_\Theta^T K_\Theta = G_{ij}\, Z^i \otimes Z^j , \quad (6.5.10)
local metric tensor of the deformed body:      m_\Theta = F_\Theta^T F_\Theta = g_{ij}\, Z^i \otimes Z^j , \quad (6.5.11)
material metric tensor of the undeformed body: M_X = 1_X^T 1_X = G_{ij}\, G^i \otimes G^j , \quad (6.5.12)
material metric tensor of the deformed body:   m_X = F_X^T F_X = g_{ij}\, G^i \otimes G^j , \quad (6.5.13)
spatial metric tensor of the undeformed body:  M_x = F_X^{-T} F_X^{-1} = G_{ij}\, g^i \otimes g^j , \quad (6.5.14)
spatial metric tensor of the deformed body:    m_x = 1_x^T 1_x = g_{ij}\, g^i \otimes g^j . \quad (6.5.15)

The local strain tensor is given by

E_\Theta := \frac{1}{2} \left( m_\Theta - M_\Theta \right) = \frac{1}{2} \left( g_{ij} - G_{ij} \right) Z^i \otimes Z^j , \quad (6.5.16)

the material strain tensor is given by

E_X := \frac{1}{2} \left( m_X - M_X \right) = \frac{1}{2} \left( g_{ij} - G_{ij} \right) G^i \otimes G^j , \quad (6.5.17)

and finally the spatial strain tensor is given by

E_x := \frac{1}{2} \left( m_x - M_x \right) = \frac{1}{2} \left( g_{ij} - G_{ij} \right) g^i \otimes g^j . \quad (6.5.18)

1. Compute the tensor products.

(b) F_\Theta^{-1} F_\Theta^{-T} =
(c) F_X F_X^T =
(d) F_X^{-1} F_X^{-T} =
(e) 1_X 1_X^T =
(f) 1_x 1_x^T =

2. Compute the tensor products in index notation, and name the result with the correct name.

(a) K_\Theta^{-T} M_\Theta K_\Theta^{-1} =
(b) K_\Theta^T M_X K_\Theta =
(c) F_\Theta^{-T} m_\Theta F_\Theta^{-1} =
(d) F_\Theta^T m_x F_\Theta =
(e) F_X^{-T} E_X F_X^{-1} =
(f) F_X^T E_x F_X =

3. Compute the tensor and scalar products in index notation. Rewrite the results in tensor notation.

(a) B_\Theta = M_\Theta^{-1} m_\Theta =
(b) B_X =
(c) B_x =
(d) B_\Theta : 1_\Theta =
(e) B_\Theta^T : B_\Theta =
(f) B_\Theta^T B_\Theta^T : B_\Theta =
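A minimal sketch of the strain definitions, under the simplifying assumption that the material basis G_i is identified with the Cartesian basis and the deformed basis g_i comes from a simple shear (gamma is a made-up shear parameter): the material strain E_X = 1/2 (g_ij − G_ij) G^i ⊗ G^j then coincides with the Green strain 1/2 (F^T F − 1).

```python
import numpy as np

gamma = 0.3
G = np.eye(3)                                   # rows: material base vectors G_i
g = np.array([[1., 0., 0.],
              [gamma, 1., 0.],
              [0., 0., 1.]])                    # rows: deformed base vectors g_i

G_dual = np.linalg.inv(G @ G.T) @ G             # G^i
F = np.einsum('ia,ib->ab', g, G_dual)           # F = g_i (x) G^i

# E_X = 1/2 (g_ij - G_ij) G^i (x) G^j
E_coeff = 0.5 * (g @ g.T - G @ G.T)             # 1/2 (g_ij - G_ij)
E_X = np.einsum('ij,ia,jb->ab', E_coeff, G_dual, G_dual)
assert np.allclose(E_X, 0.5 * (F.T @ F - np.eye(3)))
```

The assertion is (6.5.13) and (6.5.17) in coordinates: F^T F = g_ij G^i ⊗ G^j is the material metric tensor m_X, and subtracting M_X = 1 gives twice the strain.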
2. Compute the tensor products in index notation, and name the result with the correct name.

(a) K_\Theta^{-T} M_\Theta K_\Theta^{-1} = (G^k \otimes Z_k)(G_{ij}\, Z^i \otimes Z^j)(Z_l \otimes G^l) = G_{ij} \delta^i_k \delta^j_l\, G^k \otimes G^l = G_{ij}\, G^i \otimes G^j = M_X

(b) K_\Theta^T M_X K_\Theta = (Z^k \otimes G_k)(G_{ij}\, G^i \otimes G^j)(G_l \otimes Z^l) = G_{ij} \delta^i_k \delta^j_l\, Z^k \otimes Z^l = G_{ij}\, Z^i \otimes Z^j = M_\Theta

(c) F_\Theta^{-T} m_\Theta F_\Theta^{-1} = (g^k \otimes Z_k)(g_{ij}\, Z^i \otimes Z^j)(Z_l \otimes g^l) = g_{ij} \delta^i_k \delta^j_l\, g^k \otimes g^l = g_{ij}\, g^i \otimes g^j = m_x

(d) F_\Theta^T m_x F_\Theta = (Z^k \otimes g_k)(g_{ij}\, g^i \otimes g^j)(g_l \otimes Z^l) = g_{ij} \delta^i_k \delta^j_l\, Z^k \otimes Z^l = g_{ij}\, Z^i \otimes Z^j = m_\Theta

(e) F_X^{-T} E_X F_X^{-1} = (g^k \otimes G_k)\, \frac{1}{2}(g_{ij} - G_{ij})(G^i \otimes G^j)\, (G_l \otimes g^l) = \frac{1}{2}(g_{ij} - G_{ij})\, \delta^i_k \delta^j_l\, g^k \otimes g^l = \frac{1}{2}(g_{ij} - G_{ij})\, g^i \otimes g^j = E_x

(f) F_X^T E_x F_X = (G^k \otimes g_k)\, \frac{1}{2}(g_{ij} - G_{ij})(g^i \otimes g^j)\, (g_l \otimes G^l) = \frac{1}{2}(g_{ij} - G_{ij})\, \delta^i_k \delta^j_l\, G^k \otimes G^l = \frac{1}{2}(g_{ij} - G_{ij})\, G^i \otimes G^j = E_X

3. Compute the tensor and scalar products in index notation. Rewrite the results in tensor notation.

(a) B_\Theta = M_\Theta^{-1} m_\Theta = (G^{ij}\, Z_i \otimes Z_j)(g_{lk}\, Z^l \otimes Z^k) = G^{ij} g_{lk} \delta_j^l\, Z_i \otimes Z^k = G^{ij} g_{jk}\, Z_i \otimes Z^k

(b) B_X = M_X^{-1} m_X = (G^{ij}\, G_i \otimes G_j)(g_{lk}\, G^l \otimes G^k) = G^{ij} g_{lk} \delta_j^l\, G_i \otimes G^k = G^{ij} g_{jk}\, G_i \otimes G^k

(c) B_x = M_x^{-1} m_x = (G^{ij}\, g_i \otimes g_j)(g_{lk}\, g^l \otimes g^k) = G^{ij} g_{lk} \delta_j^l\, g_i \otimes g^k = G^{ij} g_{jk}\, g_i \otimes g^k

(d) B_\Theta : 1_\Theta = (G^{ij} g_{jk}\, Z_i \otimes Z^k) : (Z_l \otimes Z^l) = G^{ij} g_{jk}\, Z_{il} Z^{kl} = G^{ij} g_{jk} \delta_i^k = G^{ij} g_{ji} = \operatorname{tr} B_\Theta
6.6 The Moving Trihedron, Derivatives and Space Curves

6.6.1 The Problem

A spiral staircase is given by the sketch in figure (6.13). The relation between the gradient angle \alpha and the overall height h of the spiral staircase is given by

\tan \alpha = \frac{h}{2 \pi r} ,

if h is the height of a 360° spiral staircase; here the spiral staircase is just about 180°. The spiral staircase has a fixed support at the bottom, and the central line is represented by the variable \Theta^1, which starts at the top of the spiral staircase.

[Figure 6.13: The given spiral staircase (radius r, height h/2 at the top, fixed support at the bottom).]

• Compute the tangent t, the normal n, and the binormal vector b w.r.t. the variable \varphi.
• Determine the curvature \kappa = 1/\rho and the torsion \omega = 1/\tau of the curve w.r.t. the variable \varphi.
• Compute the Christoffel symbols \Gamma_{i1}^{\;r}, for i, r = 1, 2, 3.
• Describe the forces and moments in a sectional area w.r.t. the basis given by the moving trihedron, with the following conditions,

M = M^i a_i , \quad \text{resp.} \quad N = N^i a_i , \quad \text{with} \quad M^i = \hat{M}^i(\varphi) , \quad \text{resp.} \quad N^i = \hat{N}^i(\varphi) , \quad \text{and} \quad \{a_1, a_2, a_3\} = \{t, n, b\} .

• Compute the resulting forces and moments at a sectional area given by \varphi = 130°. Consider a load vector given in the global Cartesian coordinate system by

\bar{R} = -q \varphi r\, e_3 ,

at a point S given by the angle \varphi/2 and the radius r_S. This load may be a combination of the self-weight of the spiral staircase and the payload of its usage.

6.6.2 The Base Vectors

The winding up of the spiral staircase is given by the sketch in figure (6.14). With the Pythagorean theorem, see also the sketch in figure (6.14), the relationship between the variable \varphi and the variable \Theta^1 along the central line is given by

\Theta^1 = \sqrt{a^2 \varphi^2 + r^2 \varphi^2} , \quad \text{and} \quad a = \frac{h}{2\pi} .

[Figure 6.14: The winding up of the given spiral staircase, with 0 \le \Theta^1 \le \pi r / \cos\alpha, i.e. \varphi between zero and a half rotation.]

This implies

\varphi = \frac{1}{\sqrt{a^2 + r^2}}\, \Theta^1 ,

and with the definition of the cosine

\cos \alpha = \frac{r \varphi}{\Theta^1} = \frac{r}{\sqrt{a^2 + r^2}} ,

finally the relationship between the variables \varphi and \Theta^1 is given by

\varphi = \frac{\cos \alpha}{r}\, \Theta^1 = c\, \Theta^1 , \quad \text{with} \quad c = \frac{\cos \alpha}{r} = \frac{1}{\sqrt{r^2 + a^2}} .

With this relation it is easy to see that every expression depending on \varphi also depends on \Theta^1; this is later on important for computing some derivatives. Any arbitrary point on the central line of the spiral staircase could be represented by a vector of position x in the Cartesian coordinate system,

x = x^i e_i , \quad \text{and} \quad x^i = \hat{x}^i(\Theta^1) ,

or

x = x^i e_i , \quad \text{and} \quad x^i = \hat{x}^i(r, \varphi) .
The three components of the vector of position x follow from this parametrization. The tangent vector is given by

t = a_1 = \frac{dx}{d\Theta^1} = \frac{\partial x}{\partial \varphi} \cdot \frac{\partial \varphi}{\partial \Theta^1} ,

and with \frac{d\varphi}{d\Theta^1} = c this implies

t = a_1 = \begin{bmatrix} -r \sin\varphi \\ r \cos\varphi \\ -\frac{h}{2\pi} \end{bmatrix} c \quad\Rightarrow\quad t = a_1 = \begin{bmatrix} -cr \sin\varphi \\ cr \cos\varphi \\ -ca \end{bmatrix} .

The absolute value of this vector is given by

|t| = |a_1| = c \sqrt{r^2 \sin^2\varphi + r^2 \cos^2\varphi + a^2} = c \sqrt{r^2 + a^2} = 1 ,

i.e. the tangent vector t is already a unit vector! For the normal unit vector n = a_2 first the normal vector n^* is computed.

The binormal vector b = a_3 is defined by

b = a_3 = \begin{bmatrix} -\frac{h}{2\pi} \sin\varphi \\ \frac{h}{2\pi} \cos\varphi \\ r \end{bmatrix} c \quad\Rightarrow\quad b = a_3 = \begin{bmatrix} -ca \sin\varphi \\ ca \cos\varphi \\ cr \end{bmatrix} ,

its absolute value is given by

|b| = c \sqrt{a^2 \sin^2\varphi + a^2 \cos^2\varphi + r^2} = c \sqrt{a^2 + r^2} = 1 ,

and with this the binormal vector b is already a unit vector, too.

6.6.3 The Curvature and the Torsion

The curvature \kappa = 1/\rho of a curve in space is given by

\kappa = \frac{1}{\rho} = |n^*| , \quad \text{and} \quad n^* = \frac{dt}{d\Theta^1} = \frac{da_1}{d\Theta^1} = \frac{d^2 x}{d(\Theta^1)^2} ,
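The moving trihedron above can be checked numerically, assuming the central line is the helix x(φ) = (r cos φ, r sin φ, −aφ) with a = h/(2π); r and h below are made-up dimensions.

```python
import numpy as np

r, h = 1.2, 4.0              # made-up radius and full-turn height
a = h / (2 * np.pi)
c = 1.0 / np.sqrt(r**2 + a**2)

def trihedron(phi):
    t = c * np.array([-r * np.sin(phi), r * np.cos(phi), -a])   # tangent a1
    n = np.array([-np.cos(phi), -np.sin(phi), 0.0])             # normal a2
    b = c * np.array([-a * np.sin(phi), a * np.cos(phi), r])    # binormal a3
    return t, n, b

t, n, b = trihedron(0.7)
assert np.isclose(np.linalg.norm(t), 1.0)    # |t| = c*sqrt(r^2 + a^2) = 1
assert np.isclose(np.linalg.norm(b), 1.0)    # |b| = c*sqrt(a^2 + r^2) = 1
assert np.allclose(np.cross(t, n), b)        # right-handed orthonormal frame
```

The last assertion confirms b = t × n, i.e. {t, n, b} is an orthonormal right-handed trihedron at every point of the curve.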
6.6.4 The Christoffel Symbols

The Christoffel symbols are defined by the derivatives of the base vectors, here the moving trihedron given by \{a_1, a_2, a_3\} = \{t, n, b\},

a_{i,1} = \Gamma_{i1}^{\;r} a_r \quad | \cdot a^k
a_{i,1} \cdot a^k = \Gamma_{i1}^{\;r}\, a_r \cdot a^k = \Gamma_{i1}^{\;r} \delta_r^k ,

and finally the definition of a single Christoffel symbol is given by

\Gamma_{i1}^{\;k} = a_{i,1} \cdot a^k .

This definition implies that it is only necessary to compute the scalar products of the base vectors and their first derivatives in order to determine the Christoffel symbols. For this reason, in a first step, all the first derivatives w.r.t. the variable \varphi of the base vectors of the moving trihedron are determined. The first derivative of the base vector a_1 is given by

a_{1,1} = \frac{\partial a_1}{\partial \varphi} \cdot \frac{\partial \varphi}{\partial \Theta^1} = \begin{bmatrix} -cr \cos\varphi \\ -cr \sin\varphi \\ 0 \end{bmatrix} c = c^2 r \begin{bmatrix} -\cos\varphi \\ -\sin\varphi \\ 0 \end{bmatrix} = c^2 r\, a_2 ,

the first derivative of the second base vector a_2 is given by

a_{2,1} = \frac{\partial a_2}{\partial \varphi} \cdot \frac{\partial \varphi}{\partial \Theta^1} = c \begin{bmatrix} \sin\varphi \\ -\cos\varphi \\ 0 \end{bmatrix} ,

and finally the first derivative of the third base vector a_3 is given by

a_{3,1} = \frac{\partial a_3}{\partial \varphi} \cdot \frac{\partial \varphi}{\partial \Theta^1} = c \begin{bmatrix} -ca \cos\varphi \\ -ca \sin\varphi \\ 0 \end{bmatrix} = c^2 a \begin{bmatrix} -\cos\varphi \\ -\sin\varphi \\ 0 \end{bmatrix} = c^2 a\, a_2 .

Because the moving trihedron a_i is an orthonormal basis, it is not necessary to differentiate between the co- and contravariant base vectors, i.e. a_i = a^i, and with this the definition of the Christoffel symbols is given by

\Gamma_{i1}^{\;k} = a_{i,1} \cdot a^k = a_{i,1} \cdot a_k .

The various Christoffel symbols are computed like this,

\Gamma_{11}^{\;1} = a_{1,1} \cdot a_1 = c^2 r\, a_2 \cdot a_1 = 0 ,
\Gamma_{11}^{\;2} = a_{1,1} \cdot a_2 = c^2 r\, a_2 \cdot a_2 = c^2 r ,
\Gamma_{11}^{\;3} = a_{1,1} \cdot a_3 = c^2 r\, a_2 \cdot a_3 = 0 ,

\Gamma_{21}^{\;1} = a_{2,1} \cdot a_1 = c \begin{bmatrix} \sin\varphi \\ -\cos\varphi \\ 0 \end{bmatrix} \cdot \begin{bmatrix} -cr \sin\varphi \\ cr \cos\varphi \\ -ca \end{bmatrix} = -c^2 r ,

\Gamma_{21}^{\;2} = a_{2,1} \cdot a_2 = c \begin{bmatrix} \sin\varphi \\ -\cos\varphi \\ 0 \end{bmatrix} \cdot \begin{bmatrix} -\cos\varphi \\ -\sin\varphi \\ 0 \end{bmatrix} = 0 ,

\Gamma_{21}^{\;3} = a_{2,1} \cdot a_3 = c \begin{bmatrix} \sin\varphi \\ -\cos\varphi \\ 0 \end{bmatrix} \cdot \begin{bmatrix} -ca \sin\varphi \\ ca \cos\varphi \\ cr \end{bmatrix} = -c^2 a ,

\Gamma_{31}^{\;1} = a_{3,1} \cdot a_1 = c^2 a \begin{bmatrix} -\cos\varphi \\ -\sin\varphi \\ 0 \end{bmatrix} \cdot \begin{bmatrix} -cr \sin\varphi \\ cr \cos\varphi \\ -ca \end{bmatrix} = 0 ,

\Gamma_{31}^{\;2} = a_{3,1} \cdot a_2 = c^2 a \begin{bmatrix} -\cos\varphi \\ -\sin\varphi \\ 0 \end{bmatrix} \cdot \begin{bmatrix} -\cos\varphi \\ -\sin\varphi \\ 0 \end{bmatrix} = c^2 a ,

\Gamma_{31}^{\;3} = a_{3,1} \cdot a_3 = c^2 a \begin{bmatrix} -\cos\varphi \\ -\sin\varphi \\ 0 \end{bmatrix} \cdot \begin{bmatrix} -ca \sin\varphi \\ ca \cos\varphi \\ cr \end{bmatrix} = 0 .

With these results the coefficient matrix of the Christoffel symbols could be represented by

[\Gamma_{i1}^{\;r}] = \begin{bmatrix} 0 & c^2 r & 0 \\ -c^2 r & 0 & -c^2 a \\ 0 & c^2 a & 0 \end{bmatrix} .

6.6.5 Forces and Moments at an Arbitrary Sectional Area

An arbitrary line element of the spiral staircase is given by the points P and Q. These points are represented by the vectors of position x and x + dx. At the point P the moving trihedron is given by the orthonormal base vectors t, n, and b. The forces -N, N + dN, and the moments -M, M + dM at the sectional areas are given like in the sketch of figure (6.15). The line element is loaded by a vector f\, d\Theta^1. The equilibrium of forces in vector notation is given by

N + dN - N + f\, d\Theta^1 = 0 \quad\Rightarrow\quad dN + f\, d\Theta^1 = 0 ,

with the first derivative of the force vector w.r.t. the variable \Theta^1 represented by

dN = \frac{\partial N}{\partial \Theta^1}\, d\Theta^1 = N_{,1}\, d\Theta^1 ,

the equilibrium condition becomes

\left( N_{,1} + f \right) d\Theta^1 = 0 .
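The Christoffel-symbol matrix can be cross-checked numerically with a central finite difference for d/dΘ¹ = c · d/dφ; r and h are the same made-up helix dimensions as before.

```python
import numpy as np

r, h = 1.2, 4.0              # made-up dimensions
a = h / (2 * np.pi)
c = 1.0 / np.sqrt(r**2 + a**2)

def frame(phi):
    return np.array([
        c * np.array([-r * np.sin(phi), r * np.cos(phi), -a]),   # a1 = t
        np.array([-np.cos(phi), -np.sin(phi), 0.0]),             # a2 = n
        c * np.array([-a * np.sin(phi), a * np.cos(phi), r]),    # a3 = b
    ])

phi, eps = 0.9, 1e-6
dA = c * (frame(phi + eps) - frame(phi - eps)) / (2 * eps)       # rows: a_{i,1}
Gamma = dA @ frame(phi).T                                        # Gamma_i1^k = a_{i,1} . a_k

expected = np.array([[0., c**2 * r, 0.],
                     [-c**2 * r, 0., -c**2 * a],
                     [0., c**2 * a, 0.]])
assert np.allclose(Gamma, expected, atol=1e-8)
```

The antisymmetry of the result reflects the constant lengths and mutual orthogonality of the frame vectors: differentiating a_i · a_k = δ_ik gives Γ_{i1}^k = −Γ_{k1}^i.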
[Figure 6.15: An arbitrary line element with the forces and moments in its sectional areas.]

This equation rewritten in index notation, with the force vector N = N^i a_i given in the basis of the moving trihedron at point P, implies

\left( (N^i a_i)_{,1} + f^i a_i \right) d\Theta^1 = 0 ,

with the chain rule,

N^i_{,1}\, a_i + N^i a_{i,1} + f^i a_i = 0
N^i_{,1}\, a_i + N^i \Gamma_{i1}^{\;k} a_k + f^i a_i = 0

and after renaming the dummy indices,

\left( N^i_{,1} + N^k \Gamma_{k1}^{\;i} \right) a_i + f^i a_i = 0 .

With the covariant derivative defined by

N^i|_1 = N^i_{,1} + N^k \Gamma_{k1}^{\;i} ,

the equilibrium condition could be rewritten in index notation only for the components,

N^i|_1 + f^i = N^i_{,1} + N^k \Gamma_{k1}^{\;i} + f^i = 0 .

This equation system represents three component equations, one for each direction of the basis of the moving trihedron, first for i = 1, and k = 1, \ldots, 3,

N^1_{,1} + N^1 \Gamma_{11}^{\;1} + N^2 \Gamma_{21}^{\;1} + N^3 \Gamma_{31}^{\;1} + f^1 = 0 ,

with the computed values for the Christoffel symbols from above,

N^1_{,1} + 0 + N^2 \left( -c^2 r \right) + 0 + f^1 = 0 ,

and finally

N^1_{,1} - c^2 r N^2 + f^1 = 0 .

The second case for i = 2, and k = 1, \ldots, 3, implies

N^2_{,1} + N^1 \Gamma_{11}^{\;2} + N^2 \Gamma_{21}^{\;2} + N^3 \Gamma_{31}^{\;2} + f^2 = 0 ,
N^2_{,1} + N^1 \left( c^2 r \right) + 0 + N^3 \left( c^2 a \right) + f^2 = 0 ,
N^2_{,1} + c^2 r N^1 + c^2 a N^3 + f^2 = 0 .

The third case for i = 3, and k = 1, \ldots, 3, implies

N^3_{,1} + N^1 \Gamma_{11}^{\;3} + N^2 \Gamma_{21}^{\;3} + N^3 \Gamma_{31}^{\;3} + f^3 = 0 ,
N^3_{,1} + 0 + N^2 \left( -c^2 a \right) + 0 + f^3 = 0 ,
N^3_{,1} - c^2 a N^2 + f^3 = 0 .

All together the coefficient scheme of the equilibrium of forces in the basis of the moving trihedron is given by

\begin{bmatrix} N^1_{,1} - c^2 r N^2 + f^1 \\ N^2_{,1} + c^2 r N^1 + c^2 a N^3 + f^2 \\ N^3_{,1} - c^2 a N^2 + f^3 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} .

The equilibrium of moments in vector notation w.r.t. the point P is given by

-M + M + dM + a_1\, d\Theta^1 \times (N + dN) + \frac{1}{2}\, a_1\, d\Theta^1 \times f\, d\Theta^1 = 0 ,
\Rightarrow\quad dM + a_1\, d\Theta^1 \times N + a_1 \times dN\, d\Theta^1 + \frac{1}{2}\, a_1 \times f\, d\Theta^1 d\Theta^1 = 0 ,

and in linear theory, i.e. neglecting the higher order terms, e.g. terms with dN\, d\Theta^1 and with d\Theta^1 d\Theta^1, the equilibrium of moments is given by

dM + a_1 \times N\, d\Theta^1 = 0 .

With the first derivative of the moment vector w.r.t. the variable \Theta^1 given by

dM = \frac{\partial M}{\partial \Theta^1}\, d\Theta^1 = M_{,1}\, d\Theta^1 ,
the equilibrium condition becomes

\left( M_{,1} + a_1 \times N \right) d\Theta^1 = 0 .

The cross product of the first base vector a_1 of the moving trihedron and the force vector N is given by

a_1 \times N = a_1 \times \left( N^i a_i \right) = N^1\, a_1 \times a_1 + N^2\, a_1 \times a_2 + N^3\, a_1 \times a_3 = 0 + N^2 a_3 + N^3 \left( -a_2 \right) ,
a_1 \times N = N^2 a_3 - N^3 a_2 ,

because the base vectors a_i form an orthonormal basis. The following steps are the same as the ones for the force equilibrium equations,

\left( M_{,1} + N^2 a_3 - N^3 a_2 \right) d\Theta^1 = 0 ,
M^i_{,1}\, a_i + M^i a_{i,1} + N^2 a_3 - N^3 a_2 = 0 ,

and finally

\left( M^i_{,1} + M^k \Gamma_{k1}^{\;i} \right) a_i + N^2 a_3 - N^3 a_2 = M^i|_1\, a_i + N^2 a_3 - N^3 a_2 = 0 .

Again this equation system represents three equations, one for each direction of the basis of the moving trihedron, first for i = 1, i.e. in the direction of the base vector a_1, and k = 1, \ldots, 3,

M^1_{,1} + M^1 \Gamma_{11}^{\;1} + M^2 \Gamma_{21}^{\;1} + M^3 \Gamma_{31}^{\;1} = 0 ,
M^1_{,1} + 0 + M^2 \left( -c^2 r \right) + 0 = 0 ,
M^1_{,1} - c^2 r M^2 = 0 ,

in direction of the base vector a_2, i.e. for i = 2, and k = 1, \ldots, 3.

6.6.6 Forces and Moments for the Given Load

The unknown forces N and moments M w.r.t. the natural basis a_i should be computed, i.e.

N = N^i a_i , \quad \text{and} \quad M = M^i a_i .

The load f is given in the global, Cartesian coordinate system by

f = \bar{f}^i e_i ,

and is acting at a point S, given by the angle \varphi/2 and the radius r_S, see also the free-body diagram given by figure (6.16). The free-body diagram and the load vector are given in the global Cartesian coordinate system.

[Figure 6.16: The free-body diagram of the loaded spiral staircase, with the sectional reactions \bar{N}^i, \bar{M}^i at the point T and the load \bar{R}^i at the point S.]
Then the equilibrium conditions of moments in the directions of the base vectors e_i of the global Cartesian coordinate system w.r.t. the point T are established,

\sum M^{(T)}_{e_1} = 0 \;\rightsquigarrow\; \bar{R}^3 \left( -r \sin\left(\varphi - \frac{\pi}{2}\right) + r_S \sin\frac{\varphi}{2} \right) + \bar{M}^1 = 0 \;\rightsquigarrow\; \bar{M}^1 = \bar{R}^3 \left( r \sin\left(\varphi - \frac{\pi}{2}\right) - r_S \sin\frac{\varphi}{2} \right) ,

\sum M^{(T)}_{e_2} = 0 \;\rightsquigarrow\; -\bar{R}^3 \left( r_S \cos\frac{\varphi}{2} - r \cos\left(\varphi - \frac{\pi}{2}\right) \right) + \bar{M}^2 = 0 \;\rightsquigarrow\; \bar{M}^2 = \bar{R}^3 \left( r_S \cos\frac{\varphi}{2} - r \cos\left(\varphi - \frac{\pi}{2}\right) \right) ,

\sum M^{(T)}_{e_3} = 0 \;\rightsquigarrow\; \bar{M}^3 = 0 .

Finally the resulting equations of equilibrium are given by

\bar{N} = \begin{bmatrix} 0 \\ 0 \\ -\bar{R}^3 \end{bmatrix} , \quad \text{and} \quad \bar{M} = \begin{bmatrix} \bar{R}^3 \left( r \sin(\varphi - \pi/2) - r_S \sin(\varphi/2) \right) \\ \bar{R}^3 \left( r_S \cos(\varphi/2) - r \cos(\varphi - \pi/2) \right) \\ 0 \end{bmatrix} .

Now the problem is that the equilibrium conditions are given in the global Cartesian coordinate system, but the results should be described in the basis of the moving trihedron. For this reason it is necessary to transform the results from above, i.e. N = \bar{N}^i e_i and M = \bar{M}^i e_i, into N = N^i a_i and M = M^i a_i! The Cartesian basis e_i should be transformed by a tensor S into the basis a_i,

S = S^{rs}\, e_r \otimes e_s \;\rightsquigarrow\; a_i = S e_i = S^{rs} \delta_{si}\, e_r = S^r_{.i}\, e_r ,

i.e. in matrix notation

[a_i] = [S^r_{.i}]^T [e_r] , \quad \text{with} \quad [S^r_{.i}] = \begin{bmatrix} -cr \sin\varphi & -\cos\varphi & -ca \sin\varphi \\ cr \cos\varphi & -\sin\varphi & ca \cos\varphi \\ -ca & 0 & cr \end{bmatrix} ,

see also the definitions for the base vectors of the moving trihedron in the sections above! Then the retransformation from the basis of the moving trihedron into the global Cartesian basis should be given by S^{-1} = T, with

e_i = T a_i , \quad \text{with} \quad T = T^r_{.s}\, a_r \otimes a^s ,
e_i = \left( T^r_{.s}\, a_r \otimes a^s \right) a_i = T^r_{.s} \delta^s_i\, a_r = T^r_{.i}\, a_r .

Comparing these transformation relations implies

a_i = S^r_{.i}\, e_r = S^r_{.i} T^m_{.r}\, a_m \quad | \cdot a^k
\delta^k_i = S^r_{.i} T^m_{.r} \delta^k_m ,
\delta^k_i = T^k_{.r} S^r_{.i} ,

and in matrix notation

[\delta^k_i] = [T^k_{.r}][S^r_{.i}] \;\rightsquigarrow\; [T^k_{.r}] = [S^r_{.i}]^{-1} .

Because both bases, i.e. e_i and a_i, are orthonormal bases, the tensor T must describe an orthonormal rotation! The tensor of an orthonormal rotation is characterized by

T^{-1} = T^T , \quad \text{resp.} \quad T = S^{-1} = S^T ,

i.e. in matrix notation

[T^r_{.i}] = [S^r_{.i}]^{-1} = [S^r_{.i}]^T = \begin{bmatrix} -cr \sin\varphi & cr \cos\varphi & -ca \\ -\cos\varphi & -\sin\varphi & 0 \\ -ca \sin\varphi & ca \cos\varphi & cr \end{bmatrix} .

With this the relations between the base vectors of the different bases could be given by

a_i = S^r_{.i}\, e_r , \quad \text{resp.} \quad e_r = T^k_{.r}\, a_k ,

and e.g. in detail

[e_r] = [T^k_{.r}]^T [a_k] = \begin{bmatrix} -cr \sin\varphi & -\cos\varphi & -ca \sin\varphi \\ cr \cos\varphi & -\sin\varphi & ca \cos\varphi \\ -ca & 0 & cr \end{bmatrix} \begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix} ,

or

e_1 = -cr \sin\varphi\, a_1 - \cos\varphi\, a_2 - ca \sin\varphi\, a_3 ,
e_2 = cr \cos\varphi\, a_1 - \sin\varphi\, a_2 + ca \cos\varphi\, a_3 ,
e_3 = -ca\, a_1 + cr\, a_3 .

With this it is easy to compute the final results, i.e. to transform N = \bar{N}^i e_i and M = \bar{M}^i e_i into N = N^i a_i and M = M^i a_i. With the known transformation e_i = T^k_{.i}\, a_k the force vector could be represented by

N = \bar{N}^i e_i = \bar{N}^i T^k_{.i}\, a_k = N^k a_k .

Comparing only the coefficients implies

N^k = T^k_{.i} \bar{N}^i ,

and with this the coefficients of the force vector N w.r.t. the basis of the moving trihedron in the sectional area at point T are given by

\begin{bmatrix} N^1 \\ N^2 \\ N^3 \end{bmatrix} = \begin{bmatrix} -cr \sin\varphi & cr \cos\varphi & -ca \\ -\cos\varphi & -\sin\varphi & 0 \\ -ca \sin\varphi & ca \cos\varphi & cr \end{bmatrix} \begin{bmatrix} 0 \\ 0 \\ -\bar{R}^3 \end{bmatrix} \quad\Rightarrow\quad \begin{bmatrix} N^1 \\ N^2 \\ N^3 \end{bmatrix} = \begin{bmatrix} \bar{R}^3 ca \\ 0 \\ -\bar{R}^3 cr \end{bmatrix} .

By the analogous comparison the coefficients of the moment vector M w.r.t. the basis of the moving trihedron in the sectional area at point T are given by

\begin{bmatrix} M^1 \\ M^2 \\ M^3 \end{bmatrix} = \begin{bmatrix} -cr \sin\varphi & cr \cos\varphi & -ca \\ -\cos\varphi & -\sin\varphi & 0 \\ -ca \sin\varphi & ca \cos\varphi & cr \end{bmatrix} \begin{bmatrix} \bar{M}^1 \\ \bar{M}^2 \\ 0 \end{bmatrix} \quad\Rightarrow\quad \begin{bmatrix} M^1 \\ M^2 \\ M^3 \end{bmatrix} = \begin{bmatrix} cr \left( -\bar{M}^1 \sin\varphi + \bar{M}^2 \cos\varphi \right) \\ -\left( \bar{M}^1 \cos\varphi + \bar{M}^2 \sin\varphi \right) \\ ca \left( -\bar{M}^1 \sin\varphi + \bar{M}^2 \cos\varphi \right) \end{bmatrix} .
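The transformation above can be checked numerically: the matrix [S] whose columns are the trihedron vectors is orthogonal, so T = S^T, and applying T to the Cartesian coefficients of N reproduces the components in the moving trihedron. The values of r, h, and R̄³ below are made up; φ is the sectional angle of 130°.

```python
import numpy as np

r, h, R3 = 1.2, 4.0, 2.5     # made-up dimensions and load resultant
a = h / (2 * np.pi)
c = 1.0 / np.sqrt(r**2 + a**2)
phi = np.deg2rad(130.0)

# Columns of S are a1 = t, a2 = n, a3 = b in the Cartesian basis.
S = np.array([[-c*r*np.sin(phi), -np.cos(phi), -c*a*np.sin(phi)],
              [ c*r*np.cos(phi), -np.sin(phi),  c*a*np.cos(phi)],
              [-c*a,              0.0,           c*r           ]])
T = S.T
assert np.allclose(T @ S, np.eye(3))         # orthonormal rotation: T = S^{-1} = S^T

N_bar = np.array([0.0, 0.0, -R3])            # Cartesian coefficients of N
N = T @ N_bar                                # N^k = T^k_i N_bar^i
assert np.allclose(N, [R3 * c * a, 0.0, -R3 * c * r])
```

Note that N² = 0 for every φ: the vertical load resultant has no component along the principal normal of the helix.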
6.7 Tensors, Stresses and Cylindrical Coordinates

6.7.1 The Problem

The given cylindrical shell is described by the parameter lines \Theta^i, with i = 1, 2, 3. The relations between the Cartesian coordinates and the parameters, i.e. the curvilinear coordinates, are given by the vector of position x,

x = x^i e_i , \quad \text{with}
x^1 = \left( 5 - \Theta^2 \right) \sin\left( \frac{\pi}{a} \Theta^1 \right) ,
x^2 = -\Theta^3 , \quad \text{and}
x^3 = -\left( 5 - \Theta^2 \right) \cos\left( \frac{\pi}{a} \Theta^1 \right) ,

where a = 8.0 is a constant length.

[Figure 6.17: The given cylindrical shell, with the parameter lines \Theta^i and the base vectors g_i at the point P.]

At the point P defined by

P = P\left( \Theta^1 = \frac{8}{10} ;\; \Theta^2 = \frac{2}{10} ;\; \Theta^3 = 2 \right) ,

the stress tensor \sigma (for the geometrically linear theory) is given by

\sigma = \sigma^{ij}\, g_i \otimes g_j , \quad \text{with} \quad [\sigma^{ij}] = \begin{bmatrix} 6 & 0 & 2 \\ 0 & 0 & 0 \\ 2 & 0 & 5 \end{bmatrix} ,

and the normal vector n in the Cartesian coordinate system at the point P is given by

n = \frac{g_1 + g_3}{|g_1 + g_3|} = n^r e_r .

• Compute the covariant base vectors g_i and the contravariant base vectors g^i at the point P.
• Determine the coefficients of the tensor \sigma w.r.t. the basis in mixed formulation, \left( g_i \otimes g^k \right), and w.r.t. the Cartesian basis, \left( e_i \otimes e_k \right).
• Work out the physical components of the contravariant stress tensor.
• Determine the invariants,

I_\sigma = \operatorname{tr} \sigma , \quad II_\sigma = \frac{1}{2} \left( \left( \operatorname{tr} \sigma \right)^2 - \operatorname{tr} \left( \sigma \right)^2 \right) , \quad III_\sigma = \det \sigma ,

for the three different representations of the stress tensor \sigma.
• Calculate the principal stresses and the principal stress directions.
• Compute the specific deformation energy W^{spec} = \frac{1}{2}\, \sigma : \varepsilon at the point P, with

\varepsilon = \frac{1}{100} \left( g_{ik} - \delta_{ik} \right) g^i \otimes g^k .

• Determine the stress vector t^n at the point P w.r.t. the sectional area given by the normal vector n. Furthermore calculate the normal stress t_\perp and the resulting shear stress t_\parallel for this direction.
6.7.2 Co- and Contravariant Base Vectors

The vector of position x is given w.r.t. the Cartesian base vectors e_i. The covariant base vectors g_i of the curvilinear coordinate system are defined by the first derivatives of the vector of position w.r.t. the parameters \Theta^i of the curvilinear coordinate lines in space, i.e.

g_k = \frac{\partial x}{\partial \Theta^k} = \frac{\partial x^i}{\partial \Theta^k}\, e_i .

With this definition the covariant base vectors are computed by

g_1 = \frac{\partial x}{\partial \Theta^1} = \left( 5 - \Theta^2 \right) \frac{\pi}{a} \cos\left( \frac{\pi}{a} \Theta^1 \right) e_1 + 0\, e_2 + \left( 5 - \Theta^2 \right) \frac{\pi}{a} \sin\left( \frac{\pi}{a} \Theta^1 \right) e_3 ,
g_2 = \frac{\partial x}{\partial \Theta^2} = -\sin\left( \frac{\pi}{a} \Theta^1 \right) e_1 + 0\, e_2 + \cos\left( \frac{\pi}{a} \Theta^1 \right) e_3 ,
g_3 = \frac{\partial x}{\partial \Theta^3} = 0\, e_1 + \left( -1 \right) e_2 + 0\, e_3 ,

and finally the covariant base vectors of the curvilinear coordinate system are given by

g_1 = \left( 5 - \Theta^2 \right) \frac{\pi}{a} \begin{bmatrix} \cos\left( \frac{\pi}{a} \Theta^1 \right) \\ 0 \\ \sin\left( \frac{\pi}{a} \Theta^1 \right) \end{bmatrix} , \quad g_2 = \begin{bmatrix} -\sin\left( \frac{\pi}{a} \Theta^1 \right) \\ 0 \\ \cos\left( \frac{\pi}{a} \Theta^1 \right) \end{bmatrix} , \quad \text{and} \quad g_3 = \begin{bmatrix} 0 \\ -1 \\ 0 \end{bmatrix} ,

or by

g_1 = \frac{1}{b} \begin{bmatrix} \cos c \\ 0 \\ \sin c \end{bmatrix} , \quad g_2 = \begin{bmatrix} -\sin c \\ 0 \\ \cos c \end{bmatrix} , \quad \text{and} \quad g_3 = \begin{bmatrix} 0 \\ -1 \\ 0 \end{bmatrix} ,

with the abbreviations

b = \frac{a}{\left( 5 - \Theta^2 \right) \pi} = \frac{5}{3\pi} , \quad \text{and} \quad c = \frac{\pi}{a}\, \Theta^1 = \frac{\pi}{10} .

In order to determine the contravariant base vectors of the curvilinear coordinate system, it is necessary to multiply the covariant base vectors with the contravariant metric coefficients. The contravariant metric coefficients g^{ik} could be computed by the inverse of the covariant metric coefficients g_{ik},

[g_{ik}] = [g^{ik}]^{-1} .

So the first step is to compute the covariant metric coefficients g_{ik},

g_{ik} = g_i \cdot g_k , \quad \text{i.e.} \quad [g_{ik}] = \begin{bmatrix} \frac{1}{b^2} & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} .

The relationship between the co- and contravariant metric coefficients is used in its inverse form, in order to compute the contravariant metric coefficients,

[g^{ik}] = [g_{ik}]^{-1} , \quad \text{resp.} \quad [g^{ik}] = \begin{bmatrix} b^2 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} .

The contravariant base vectors g^i are defined by

g^i = g^{ik} g_k ,

and for i = 1, \ldots, 3 the relations between the co- and contravariant base vectors are given by

g^1 = b^2 g_1 , \quad g^2 = g_2 , \quad \text{and} \quad g^3 = g_3 ,

and finally with the abbreviations

g^1 = b \begin{bmatrix} \cos c \\ 0 \\ \sin c \end{bmatrix} , \quad g^2 = \begin{bmatrix} -\sin c \\ 0 \\ \cos c \end{bmatrix} , \quad \text{and} \quad g^3 = \begin{bmatrix} 0 \\ -1 \\ 0 \end{bmatrix} ,

or in detail

g^1 = \frac{a}{\left( 5 - \Theta^2 \right) \pi} \begin{bmatrix} \cos\left( \frac{\pi}{a} \Theta^1 \right) \\ 0 \\ \sin\left( \frac{\pi}{a} \Theta^1 \right) \end{bmatrix} , \quad g^2 = \begin{bmatrix} -\sin\left( \frac{\pi}{a} \Theta^1 \right) \\ 0 \\ \cos\left( \frac{\pi}{a} \Theta^1 \right) \end{bmatrix} , \quad \text{and} \quad g^3 = \begin{bmatrix} 0 \\ -1 \\ 0 \end{bmatrix} .

At the given point P the co- and contravariant base vectors g_i and g^i are given by

g_1 = \frac{3\pi}{5} \begin{bmatrix} \cos\frac{\pi}{10} \\ 0 \\ \sin\frac{\pi}{10} \end{bmatrix} , \quad g^1 = \frac{5}{3\pi} \begin{bmatrix} \cos\frac{\pi}{10} \\ 0 \\ \sin\frac{\pi}{10} \end{bmatrix} ,
g_2 = g^2 = \begin{bmatrix} -\sin\frac{\pi}{10} \\ 0 \\ \cos\frac{\pi}{10} \end{bmatrix} , \quad \text{and} \quad g_3 = g^3 = \begin{bmatrix} 0 \\ -1 \\ 0 \end{bmatrix} .

6.7.3 Coefficients of the Various Stress Tensors

The stress tensor \sigma is given by the covariant basis g_i of the curvilinear coordinate system, and the contravariant coefficients \sigma^{ij}. The stress tensor w.r.t. the mixed basis is determined by

\sigma = \sigma^{im}\, g_i \otimes g_m , \quad \text{and} \quad g_m = g_{mk}\, g^k ,
\sigma = \sigma^{im} g_{mk}\, g_i \otimes g^k = \sigma^i_{.k}\, g_i \otimes g^k ,

i.e. the coefficient matrix is given by

[\sigma^i_{.k}] = [\sigma^{im}][g_{mk}] = \begin{bmatrix} 6 & 0 & 2 \\ 0 & 0 & 0 \\ 2 & 0 & 5 \end{bmatrix} \begin{bmatrix} \frac{1}{b^2} & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} .
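The base vectors at P can be cross-checked numerically by differentiating the position vector of the shell with central finite differences:

```python
import numpy as np

a = 8.0                                        # constant length from the text

def x(theta):
    t1, t2, t3 = theta
    return np.array([(5 - t2) * np.sin(np.pi / a * t1),
                     -t3,
                     -(5 - t2) * np.cos(np.pi / a * t1)])

P = np.array([0.8, 0.2, 2.0])
eps = 1e-6
g = np.array([(x(P + eps * e) - x(P - eps * e)) / (2 * eps)
              for e in np.eye(3)])             # rows: g_1, g_2, g_3

c = np.pi / 10                                 # c = (pi/a) * Theta^1 at P
g1_exact = (3 * np.pi / 5) * np.array([np.cos(c), 0.0, np.sin(c)])
assert np.allclose(g[0], g1_exact, atol=1e-6)

# Dual basis g^i = g^ik g_k satisfies g_i . g^k = delta_i^k
g_cov = g @ g.T
g_dual = np.linalg.inv(g_cov) @ g
assert np.allclose(g @ g_dual.T, np.eye(3), atol=1e-6)
```

Because the metric at P is diagonal, only g¹ differs from g₁ (by the factor b² = (5/(3π))²), exactly as derived above.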
Solving this matrix product implies the coefficient matrix [σ^i_k] w.r.t. the basis g_i ⊗ g^k, i.e. the stress tensor w.r.t. the mixed basis is given by

    σ = σ^i_k g_i ⊗ g^k ,   with   [σ^i_k] = [ 6/b², 0, 2 ; 0, 0, 0 ; 2/b², 0, 5 ]

(matrix rows separated by semicolons), and finally at the point P the stress tensor w.r.t. the mixed basis is given by

    σ = σ^i_k g_i ⊗ g^k ,   with   [σ^i_k] = [ 54π²/25, 0, 2 ; 0, 0, 0 ; 18π²/25, 0, 5 ] .

The relationships between the Cartesian coordinate system and the curvilinear coordinate system are described by

    g_i = B e_i ,   and   B = B_mn e_m ⊗ e_n ,
    g_i = (B_mn e_m ⊗ e_n) e_i = B_mn δ_ni e_m = B_mi e_m ,

and because of the identity of co- and contravariant base vectors in the Cartesian coordinate system,

    g_i = B_ki e^k = B_ki e_k .

This equation represented by the coefficient matrices is given by

    [g_i] = [B_ki]^T [e^k] ,   with   [B_ki] = [ (1/b) cos c, −sin c, 0 ; 0, 0, −1 ; (1/b) sin c, cos c, 0 ] ,

see also the definition of the covariant base vectors above. The stress tensor σ w.r.t. the Cartesian basis is computed by

    σ = σ^ik g_i ⊗ g_k ,   and   g_i = B_ri e^r ,  resp.  g_k = B_sk e^s ,
    σ = σ^ik B_ri B_sk e^r ⊗ e^s ,

and with the abbreviation

    σ̃_rs = B_ri σ^ik B_sk ,

the coefficient matrix of the stress tensor w.r.t. the Cartesian basis is defined by

    [σ̃_rs] = [B_ri] [σ^ik] [B_sk]^T
            = [ (1/b) cos c, −sin c, 0 ; 0, 0, −1 ; (1/b) sin c, cos c, 0 ]
              [ 6, 0, 2 ; 0, 0, 0 ; 2, 0, 5 ]
              [ (1/b) cos c, 0, (1/b) sin c ; −sin c, 0, cos c ; 0, −1, 0 ] .

Solving this matrix product implies the coefficient matrix of the stress tensor w.r.t. the Cartesian basis, i.e. the stress tensor w.r.t. the Cartesian basis is given by

    σ = σ̃_ik e_i ⊗ e_k ,   with
    [σ̃_rs] = [ (6/b²) cos²c, −(2/b) cos c, (6/b²) sin c cos c ;
               −(2/b) cos c, 5, −(2/b) sin c ;
               (6/b²) sin c cos c, −(2/b) sin c, (6/b²) sin²c ] ,

and finally at the point P the stress tensor w.r.t. the Cartesian basis is given by

    σ = σ̃_ik e_i ⊗ e_k ,   with
    [σ̃_rs] = [ (54π²/25) cos²(π/10), −(6π/5) cos(π/10), (54π²/25) sin(π/10) cos(π/10) ;
               −(6π/5) cos(π/10), 5, −(6π/5) sin(π/10) ;
               (54π²/25) sin(π/10) cos(π/10), −(6π/5) sin(π/10), (54π²/25) sin²(π/10) ] .

6.7.4 Physical Components of the Contravariant Stress Tensor

The physical components of a tensor are defined by

    *τ^ik = τ^ik √g_(k)(k) / √g^(i)(i) ,

see also the lecture notes. The physical components *τ^ik of a tensor τ = τ^ik g_i ⊗ g_k consider that the base vectors of an arbitrary curvilinear coordinate system do not have to be unit vectors! In Cartesian coordinate systems the base vectors e_i do not influence the physical value of the components of the coefficient matrix of a tensor, because the base vectors are unit vectors and orthogonal to each other. But in general coordinates the base vectors do influence the physical value of the components of the coefficient matrix, because they are in general no unit vectors, and not orthogonal to each other. Here the contravariant stress tensor is given by

    σ = σ^ik g_i ⊗ g_k ,   with   [σ^ik] = [ 6, 0, 2 ; 0, 0, 0 ; 2, 0, 5 ] .

In order to compute the physical components of the stress tensor σ, it is necessary to solve the definition given above. The numerator and denominator of this definition are given by the square roots of the co- and contravariant metric coefficients g_(i)(i), and g^(i)(i), i.e.

    √g_11 = 1/b ,   √g_22 = 1 ,   √g_33 = 1 ,
    √g^11 = b ,     √g^22 = 1 ,   √g^33 = 1 .

Finally the coefficient matrix of the physical components of the contravariant stress tensor σ = σ^ik g_i ⊗ g_k is given by

    *σ^ik = σ^ik √g_(k)(k) / √g^(i)(i) ,   and   [*σ^ik] = [ 6/b², 0, 2/b ; 0, 0, 0 ; 2/b, 0, 5 ] .
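As a numerical plausibility check, the transformation [σ̃] = [B][σ][B]ᵀ and the physical components can be evaluated at the point P. The sketch below is a reader's aid, not part of the original exercise; it assumes b = 5/(3π) and c = π/10, which reproduce the value 6/b² = 54π²/25 used in the text:

```python
import numpy as np

# Assumed values at the point P (they reproduce 6/b^2 = 54*pi^2/25).
b = 5.0 / (3.0 * np.pi)
c = np.pi / 10.0

# Contravariant coefficients [sigma^ik] w.r.t. the curvilinear basis.
sig = np.array([[6.0, 0.0, 2.0],
                [0.0, 0.0, 0.0],
                [2.0, 0.0, 5.0]])

# Transformation coefficients [B_ki]; column i holds the Cartesian
# components of the covariant base vector g_i.
B = np.array([[np.cos(c) / b, -np.sin(c), 0.0],
              [0.0,            0.0,      -1.0],
              [np.sin(c) / b,  np.cos(c), 0.0]])

# sigma~_rs = B_ri sigma^ik B_sk, i.e. [sigma~] = [B][sigma][B]^T.
sig_cart = B @ sig @ B.T

# Closed-form Cartesian coefficient matrix derived in the text.
expected = np.array(
    [[6 / b**2 * np.cos(c)**2,          -2 / b * np.cos(c), 6 / b**2 * np.sin(c) * np.cos(c)],
     [-2 / b * np.cos(c),                5.0,               -2 / b * np.sin(c)],
     [6 / b**2 * np.sin(c) * np.cos(c), -2 / b * np.sin(c), 6 / b**2 * np.sin(c)**2]])
assert np.allclose(sig_cart, expected)

# Physical components *sigma^ik = sigma^ik * sqrt(g_(kk)) / sqrt(g^(ii)).
sqrt_g_cov = np.array([1 / b, 1.0, 1.0])   # sqrt(g_11), sqrt(g_22), sqrt(g_33)
sqrt_g_con = np.array([b, 1.0, 1.0])       # sqrt(g^11), sqrt(g^22), sqrt(g^33)
sig_phys = sig * sqrt_g_cov[np.newaxis, :] / sqrt_g_con[:, np.newaxis]
assert np.allclose(sig_phys, [[6 / b**2, 0.0, 2 / b],
                              [0.0, 0.0, 0.0],
                              [2 / b, 0.0, 5.0]])
```

Both assertions pass, confirming the matrix product and the physical-component scaling above.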
TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003
216 Chapter 6. Exercises 6.7. Tensors, Stresses and Cylindrical Coordinates 217
Solving this implies the same intermediate results,

    tr σ² = (36/b⁴) cos⁴c + (4/b²) 2 cos²c + (36/b⁴) 2 sin²c cos²c + (4/b²) 2 sin²c + (36/b⁴) sin⁴c + 25
          = (36/b⁴)(cos⁴c + 2 sin²c cos²c + sin⁴c) + (8/b²)(cos²c + sin²c) + 25
    tr σ² = 36/b⁴ + 8/b² + 25 ,

i.e. the further steps are the same as above. And with this finally the second invariant w.r.t. the basis of the Cartesian coordinate system is given by

    II_σ = 26/b² , too.

The third invariant III_σ of the stress tensor is defined by the determinant of the stress tensor, i.e.

    III_σ = det σ = (1/6)(tr σ)³ − (1/2)(tr σ)(tr σ²) + (1/3)(σᵀ σᵀ) : σ .

• In order to compute the third invariant it is necessary to solve the three terms in the summation of the definition of the third invariant. The first term is given by

    (1/6)(tr σ)³ = (1/6)(6/b² + 5)³ = (1/6)(216/b⁶ + 540/b⁴ + 450/b² + 125)
    (1/6)(tr σ)³ = 36/b⁶ + 90/b⁴ + 75/b² + 125/6 ,

and the second term is given by

    −(1/2)(tr σ)(tr σ²) = −(1/2)(6/b² + 5)(36/b⁴ + 8/b² + 25)
    −(1/2)(tr σ)(tr σ²) = −(108/b⁶ + 114/b⁴ + 95/b² + 125/2) .

The third term is not so easy to compute, because it does not include the trace, but a scalar product and a tensor product with the transpose of the stress tensor, i.e.

    (1/3)(σᵀ σᵀ) : σ = (1/3) υᵀ : σ ,   with   υᵀ = σᵀ σᵀ = (σσ)ᵀ ,

or in index notation

    (σᵀ σᵀ) : σ = [(σ^ki g_i ⊗ g_k)(σ^sr g_r ⊗ g_s)] : (σ^lm g_l ⊗ g_m)
                = (σ^ki σ^sr g_kr g_i ⊗ g_s) : (σ^lm g_l ⊗ g_m) = (σ^ki σ^s_k g_i ⊗ g_s) : (σ^lm g_l ⊗ g_m)
                = σ^ki σ^s_k σ^lm g_il g_sm = σ^k_l σ^s_k σ^l_s ,
    (σᵀ σᵀ) : σ = σ^k_l σ^l_s σ^s_k .

In order to solve this equation, first the new tensor υᵀ is computed by

    υᵀ = σᵀ σᵀ = (σ^ki g_i ⊗ g_k)(σ^sr g_r ⊗ g_s) = σ^ki σ^sr g_kr g_i ⊗ g_s ,
    υ^si g_i ⊗ g_s = σ^ki σ^s_k g_i ⊗ g_s = σ^s_k σ^ki g_i ⊗ g_s ,

and the coefficient matrix of this new tensor is given by

    [υ^si] = [υ^is]^T = [σ^s_k] [σ^ki] ,
    [υ^si] = [ 6/b², 0, 2 ; 0, 0, 0 ; 2/b², 0, 5 ] [ 6, 0, 2 ; 0, 0, 0 ; 2, 0, 5 ]
           = [ 36/b² + 4, 0, 12/b² + 10 ; 0, 0, 0 ; 12/b² + 10, 0, 4/b² + 25 ] .

In order to solve the scalar product υᵀ : σ, given by

    υᵀ : σ = (σᵀ σᵀ) : σ = (υ^si g_i ⊗ g_s) : (σ^lm g_l ⊗ g_m) ,
    υᵀ : σ = υ^si σ^lm g_il g_sm = υ^si g_il σ^l_s = υ^s_l σ^l_s ,

first the coefficient matrix of the tensor υ w.r.t. the mixed basis is computed by

    [υ^s_l] = [υ^si] [g_il] ,
    [υ^s_l] = [ 36/b² + 4, 0, 12/b² + 10 ; 0, 0, 0 ; 12/b² + 10, 0, 4/b² + 25 ] [ 1/b², 0, 0 ; 0, 1, 0 ; 0, 0, 1 ]
            = [ 36/b⁴ + 4/b², 0, 12/b² + 10 ; 0, 0, 0 ; 12/b⁴ + 10/b², 0, 4/b² + 25 ] ,

and then the final result for the third term is given by

    (1/3)(σᵀ σᵀ) : σ = (1/3) υᵀ : σ = (1/3) σ^k_l σ^l_s σ^s_k = (1/3) υ^s_l σ^l_s
        = (1/3) [ (36/b⁴ + 4/b²)(6/b²) + (12/b² + 10)(2/b²) + (12/b⁴ + 10/b²) 2 + (4/b² + 25) 5 ]
    (1/3)(σᵀ σᵀ) : σ = 72/b⁶ + 24/b⁴ + 20/b² + 125/3 .

Then the complete third invariant w.r.t. the covariant basis of the curvilinear coordinate system is given by

    III_σ = det σ = (1/6)(tr σ)³ − (1/2)(tr σ)(tr σ²) + (1/3)(σᵀ σᵀ) : σ
          = (36/b⁶ + 90/b⁴ + 75/b² + 125/6) − (108/b⁶ + 114/b⁴ + 95/b² + 125/2) + (72/b⁶ + 24/b⁴ + 20/b² + 125/3)
          = (1/b⁶)(36 − 108 + 72) + (1/b⁴)(90 − 114 + 24) + (1/b²)(75 − 95 + 20) + (125/6 − 125/2 + 125/3)
    III_σ = det σ = 0 .
• For the third invariant w.r.t. the mixed basis of the curvilinear coordinate system the first and second term are already known, because all scalar quantities, like tr σ, are still the same, see also the first case of determining the third invariant. It is only necessary to have a look at the third term given by

    (σᵀ σᵀ) : σ = [(σ^i_k g^k ⊗ g_i)(σ^r_s g^s ⊗ g_r)] : (σ^l_m g_l ⊗ g^m)
                = (σ^i_k σ^r_s δ^s_i g^k ⊗ g_r) : (σ^l_m g_l ⊗ g^m) = (σ^s_k σ^r_s g^k ⊗ g_r) : (σ^l_m g_l ⊗ g^m)
                = σ^s_k σ^r_s σ^l_m δ^k_l δ^m_r = σ^s_l σ^m_s σ^l_m ,
    (σᵀ σᵀ) : σ = σ^s_l σ^l_m σ^m_s ,

i.e. this scalar product is the same as above. With all three terms of the summation given by the same scalar quantities as above, the third invariant w.r.t. the mixed basis of the curvilinear coordinate system is given by III_σ = det σ = 0, too.

6.7.6 Principal Stress and Principal Directions

Starting with the stress tensor w.r.t. the Cartesian basis, and a normal unit vector, i.e.

    σ = σ̃_ik e_i ⊗ e_k ,   and   n₀ = n_r e^r = n^r e_r ,

the eigenvalue problem is given by

    σ n₀ = λ n₀ ,

and the left-hand side is rewritten in index notation,

    (σ̃_ik e_i ⊗ e_k) n_r e_r = σ̃_ik n_r e_i δ_kr = σ̃_ir n_r e_i .

The eigenvalue problem in index notation w.r.t. the Cartesian basis is given by

    (σ̃_ir − λ δ_ir) n_r = 0 .
this implies the second and third eigenvalue, i.e. the second and third principal stress,

    λ₂ = d/2 + √(d²/4 − e) ,   and   λ₃ = d/2 − √(d²/4 − e) ,   with   d = 6/b² + 5 ,   and   e = 26/b² .

In order to compute the principal directions the stress tensor w.r.t. the curvilinear basis, and a normal unit vector, i.e.

    σ = σ^ik g_i ⊗ g_k ,   and   n = n^r g_r ,

are used, then the eigenvalue problem is given by

    σ n = λ n ,

and the left-hand side is rewritten in index notation,

    σ n = (σ^ik g_i ⊗ g_k) n^r g_r = σ^ik n^r g_kr g_i = σ^i_r n^r g_i .

The eigenvalue problem in index notation w.r.t. the curvilinear basis is given by

    σ^i_k n^k g_i = λ n^i g_i = λ n^k δ^i_k g_i ,
    (σ^i_k − λ δ^i_k) n^k g_i = 0 ,
    (σ^i_k − λ δ^i_k) n^k = 0 ,

and in matrix notation

    [ 6/b² − λ_i, 0, 2 ; 0, −λ_i, 0 ; 2/b², 0, 5 − λ_i ] (n_i1, n_i2, n_i3)^T = 0 .

Combining the first and the last row of this system of equations yields an equation to determine the coefficient n_i3 depending on the associated principal stress λ_i, i.e.

    [(5 − λ_i)(6/b² − λ_i) − 4/b²] n_i3 = [26/b² − λ_i (5 + 6/b²) + λ_i²] n_i3 = [e − λ_i d + λ_i²] n_i3 = 0 .

The coefficient n_i2 could be computed by the second line of this system of equations, i.e.

    −λ_i n_i2 = 0 ,

and then the coefficient n_i1 depending on the associated principal stress λ_i and the already known coefficient n_i3 is given by

    n_i1 = − 2 n_i3 / (6/b² − λ_i) .

The associated principal directions to the principal stresses are computed by inserting the values λ_i in the equations above.

• The principal stress λ₁ = 0 implies

    n_13 = 0   ⇒   n_11 = 0   ⇒   n_12 = α ∈ R ,
    n₁ = n_11 g₁ + n_12 g₂ + n_13 g₃ = 0 + α g₂ + 0 = α g₂ ,
    n₁ = α (−sin c, 0, cos c)^T .

• The principal stress λ₂ = d/2 + √(d²/4 − e) implies

    n_23 = β ∈ R   ⇒   n_22 = 0   ⇒   n_21 = −(1/2) b² (5 − λ₂) β = γβ ,
    n₂ = n_21 g₁ + n_22 g₂ + n_23 g₃ = βγ g₁ + 0 + β g₃ ,
    n₂ = β ((γ/b) cos c, −1, (γ/b) sin c)^T ,   with   γ = −(1/2) b² (5 − λ₂) .

• The principal stress λ₃ = d/2 − √(d²/4 − e) implies

    n₃ = β ((γ/b) cos c, −1, (γ/b) sin c)^T ,   with   γ = −(1/2) b² (5 − λ₃) .

6.7.7 Deformation Energy

The specific deformation energy is defined by

    W_spec = (1/2) σ : ε ,   with   σ = σ^ik g_i ⊗ g_k ,   and   ε = (1/100)(g_ik − δ_ik) g^i ⊗ g^k ,

and solving this product yields

    W_spec = (1/2) σ : ε = (1/2)(σ^lm g_l ⊗ g_m) : [(1/100)(g_ik − δ_ik) g^i ⊗ g^k]
           = (1/200) σ^lm (g_ik − δ_ik) δ^i_l δ^k_m = (1/200) σ^ik (g_ik − δ_ik)
           = (1/200)(σ^ik g_ik − σ^ik δ_ik) ,
    W_spec = (1/2) σ : ε = (1/200)(σ^i_i − σ^ii) ,

because the Kronecker delta δ_ik is given w.r.t. the Cartesian basis, i.e.

    δ_ik = δ_i^k = δ^ik = δ^i_k .
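These closed-form principal stresses can be compared against a numerical eigensolution of the mixed coefficient matrix. The sketch below again assumes the sample value b = 5/(3π); it is an illustration, not part of the exercise:

```python
import numpy as np

b = 5.0 / (3.0 * np.pi)  # assumed sample value

# Mixed coefficient matrix [sigma^i_k] of the eigenvalue problem.
S = np.array([[6 / b**2, 0.0, 2.0],
              [0.0,      0.0, 0.0],
              [2 / b**2, 0.0, 5.0]])

d = 6 / b**2 + 5
e = 26 / b**2

# Closed-form principal stresses from the text.
lam1 = 0.0
lam2 = d / 2 + np.sqrt(d**2 / 4 - e)
lam3 = d / 2 - np.sqrt(d**2 / 4 - e)

eigvals = np.sort(np.linalg.eigvals(S).real)
assert np.allclose(eigvals, np.sort([lam1, lam2, lam3]))

# Principal direction for lambda_2 in curvilinear coefficients:
# n3 = beta, n2 = 0, n1 = gamma*beta with gamma = -b^2 (5 - lam2)/2.
beta = 1.0
gamma = -0.5 * b**2 * (5 - lam2)
n2_vec = np.array([gamma * beta, 0.0, beta])
assert np.allclose(S @ n2_vec, lam2 * n2_vec)
```

The eigenvector check confirms that the two expressions for n_i1 (via the first row, and via γ from the last row) coincide exactly when λ_i is a root of λ² − dλ + e = 0.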
With the trace of the stress tensor w.r.t. the mixed basis of the curvilinear coordinate system, and the trace of the stress tensor w.r.t. the covariant basis of the curvilinear coordinate system given by

    σ^i_i = 6/b² + 5 ,   and   σ^ii = 11 ,

the specific deformation energy is given by

    W_spec = (1/2) σ : ε = (1/200)(6/b² − 6) ,

and finally at the point P,

    W_spec = (1/200)(54π²/25 − 6) ≈ 0.0766 .

6.7.8 Normal and Shear Stress

The normal vector n is defined by

    n = (g₁ + g₃) / |g₁ + g₃| ,

and this implies with

    g₁ + g₃ = ((1/b) cos c, −1, (1/b) sin c)^T ,   and   |g₁ + g₃| = √(1/b² + 1) ,

finally

    n = (1/√(b² + 1)) (cos c, −b, sin c)^T = n_r e^r = n^r e_r .

With the stress tensor σ w.r.t. the Cartesian basis,

    σ = σ̃_ik e_i ⊗ e_k ,

the stress vector t_n at the point P is given by

    t_n = σ n = (σ̃_ik e_i ⊗ e_k) n_r e_r = σ̃_ik n_r δ_kr e_i = σ̃_ik n_k e_i ,

which implies finally the stress vector t_n at the point P,

    t_n = (1/√(b² + 1)) ((6/b² + 2) cos c, −2/b − 5b, (6/b² + 2) sin c)^T
        = (1/(b²√(b² + 1))) ((6 + 2b²) cos c, −2b − 5b³, (6 + 2b²) sin c)^T .

The normal stress vector is defined by

    t⊥ = σ n ,   with   σ = |t⊥| = t_n · n ,

and the shear stress vector is defined by

    t∥ = t_n − t⊥ .

The absolute value of the normal stress vector t⊥ is computed by

    σ = t_n · n = (1/(b²√(b² + 1))) ((6 + 2b²) cos c, −2b − 5b³, (6 + 2b²) sin c)^T · (1/√(b² + 1)) (cos c, −b, sin c)^T
    σ = t_n · n = (5b⁴ + 4b² + 6) / (b²(b² + 1)) .

This implies the normal stress vector

    t⊥ = σ n = [(5b⁴ + 4b² + 6) / (b²(b² + 1)√(b² + 1))] (cos c, −b, sin c)^T ,

and the shear stress vector

    t∥ = t_n − t⊥ = (1/(b²√(b² + 1))) ((6 + 2b²) cos c, −2b − 5b³, (6 + 2b²) sin c)^T
                    − [(5b⁴ + 4b² + 6) / (b²(b² + 1)√(b² + 1))] (cos c, −b, sin c)^T ,

    t∥ = [(4 − 3b²) / ((b² + 1)√(b² + 1))] (cos c, 1/b, sin c)^T .
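The decomposition t_n = t⊥ + t∥ can be verified numerically. The sketch below assumes b = 5/(3π) and c = π/10, the values belonging to the point P; it is a reader's aid, not part of the exercise:

```python
import numpy as np

b = 5.0 / (3.0 * np.pi)   # assumed values at the point P
c = np.pi / 10.0

# Cartesian coefficient matrix [sigma~_rs] from above.
sig = np.array(
    [[6 / b**2 * np.cos(c)**2,          -2 / b * np.cos(c), 6 / b**2 * np.sin(c) * np.cos(c)],
     [-2 / b * np.cos(c),                5.0,               -2 / b * np.sin(c)],
     [6 / b**2 * np.sin(c) * np.cos(c), -2 / b * np.sin(c), 6 / b**2 * np.sin(c)**2]])

# Unit normal n = (g1 + g3)/|g1 + g3| in Cartesian components.
n = np.array([np.cos(c), -b, np.sin(c)]) / np.sqrt(b**2 + 1)
assert np.isclose(np.linalg.norm(n), 1.0)

t_n = sig @ n                 # stress vector
sigma_n = t_n @ n             # normal stress |t_perp|
t_perp = sigma_n * n          # normal stress vector
t_par = t_n - t_perp          # shear stress vector

assert np.isclose(sigma_n, (5 * b**4 + 4 * b**2 + 6) / (b**2 * (b**2 + 1)))
assert np.isclose(t_par @ n, 0.0, atol=1e-9)   # shear part is tangential
expected_t_par = (4 - 3 * b**2) / (b**2 + 1)**1.5 * np.array([np.cos(c), 1 / b, np.sin(c)])
assert np.allclose(t_par, expected_t_par)
```

All three closed-form results (σ, t⊥ and t∥) agree with the direct projection of t_n onto n and its complement.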
Appendix A
Formulary
    T = T^ik g_i ⊗ g_k = T_ik g^i ⊗ g^k
      = T^i_k g_i ⊗ g^k = T_i^k g^i ⊗ g_k
for a general tensor T of rank 2 with det (T^ik) ≠ 0:

    T = T^ik g_i ⊗ g_k ,
    T · w = (T^ik g_i ⊗ g_k) · (w_m g^m) = T^ik w_k g_i .

A.1.10 Computing the Tensor Components

    T^ik = g^i · T · g^k ;   T_ik = g_i · (T · g_k) ;   T^i_k = g^i · (T · g_k) ;
    T^ik = g^im g^kn T_mn ;   T^i_k = g^im T_mk ;   etc.
Transformation rules for base vectors: a new basis {ḡ_i} is related to the old basis {g_i} by the transformation tensors A and B,

    ḡ_i = A · g_i ,   with   A = (g^k · ḡ_m) g_k ⊗ g^m = A^k_m g_k ⊗ g^m ,
    ḡ_i = (g^k · ḡ_i) g_k = A^k_i g_k ,

    ḡ^i = B · g^i ,   with   B = (g_k · ḡ^m) g^k ⊗ g_m = B_k^m g^k ⊗ g_m ,
    ḡ^i = (g_k · ḡ^i) g^k = B_k^i g^k ,

and conversely with the inverse transformation tensors Ā and B̄,

    g_i = Ā · ḡ_i ,   with   Ā = (ḡ^k · g_m) ḡ_k ⊗ ḡ^m = Ā^k_m ḡ_k ⊗ ḡ^m ,
    g_i = (ḡ^k · g_i) ḡ_k = Ā^k_i ḡ_k ,

    g^i = B̄ · ḡ^i ,   with   B̄ = (ḡ_k · g^m) ḡ^k ⊗ ḡ_m = B̄_k^m ḡ^k ⊗ ḡ_m ,
    g^i = (ḡ_k · g^i) ḡ^k = B̄_k^i ḡ^k .

The same relations follow directly from the identity tensor 1 = g_k ⊗ g^k = g^k ⊗ g_k, e.g.

    ḡ^i = 1 · ḡ^i = (g^k ⊗ g_k) · ḡ^i = (g_k · ḡ^i) g^k = B_k^i g^k ,
    ḡ_i = 1 · ḡ_i = (g_k ⊗ g^k) · ḡ_i = (g^k · ḡ_i) g_k = A^k_i g_k .

The following relations between the transformation tensors hold

    A · Ā = 1   or   Ā · A = 1 ,      A^m_i Ā^k_m = δ^k_i   etc.   Ā^m_i A^k_m = δ^k_i ,
    B · B̄ = 1   or   B̄ · B = 1 ,      B_m^i B̄_k^m = δ^i_k   etc.   B̄_m^i B_k^m = δ^i_k .

Furthermore

    A^m_i B_m^k = δ^k_i ;   B_m^k = Ā^k_m ;   B̄_m^k = A^k_m   etc.

and

    v̄_i = A^k_i v_k ,   v̄^i = B_k^i v^k ,

i.e. the coefficients of the vector components transform while changing the coordinate systems like the base vectors themselves.

A.1.15 Transformation Rules for Tensors

    T = T^ik g_i ⊗ g_k = T^i_k g_i ⊗ g^k = T_ik g^i ⊗ g^k = T_i^k g^i ⊗ g_k
      = T̄^ik ḡ_i ⊗ ḡ_k = T̄^i_k ḡ_i ⊗ ḡ^k = T̄_ik ḡ^i ⊗ ḡ^k = T̄_i^k ḡ^i ⊗ ḡ_k ,

the transformation relations between base vectors imply

    T̄^ik = Ā^i_m Ā^k_n T^mn ,
    T̄^i_k = Ā^i_m A^n_k T^m_n ,
    T̄_ik = A^m_i A^n_k T_mn ,

i.e. the coefficients of the tensor components transform like the tensor basis. The tensor basis transforms like the base vectors.
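A quick numerical check of these transformation rules, using two assumed (invertible, non-orthogonal) sample bases; the code is an illustration, not part of the formulary:

```python
import numpy as np

# Assumed old basis g_i and new basis gbar_i (stored as matrix columns).
G = np.array([[1.0, 0.0, 1.0],
              [0.0, 2.0, 0.0],
              [1.0, 1.0, 3.0]])
Gbar = np.array([[2.0, 1.0, 0.0],
                 [0.0, 1.0, 1.0],
                 [0.0, 0.0, 2.0]])
G_dual = np.linalg.inv(G).T          # columns are g^k
Gbar_dual = np.linalg.inv(Gbar).T    # columns are gbar^k

# Transformation coefficients A^k_i = g^k . gbar_i and B_k^i = g_k . gbar^i.
A = G_dual.T @ Gbar                  # A[k, i] = g^k . gbar_i
B = G.T @ Gbar_dual                  # B[k, i] = g_k . gbar^i

# gbar_i = A^k_i g_k  and  gbar^i = B_k^i g^k.
assert np.allclose(G @ A, Gbar)
assert np.allclose(G_dual @ B, Gbar_dual)

# A^m_i B_m^k = delta_i^k.
assert np.allclose(A.T @ B, np.eye(3))

# Covariant vector coefficients transform like the covariant base vectors:
v = np.array([1.0, -2.0, 0.5])               # a fixed vector (Cartesian)
v_cov = G.T @ v                              # v_k = v . g_k
vbar_cov = Gbar.T @ v                        # vbar_i = v . gbar_i
assert np.allclose(A.T @ v_cov, vbar_cov)    # vbar_i = A^k_i v_k
```

The last assertion is exactly the statement that coefficients transform like the base vectors themselves.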
The eigenvalue problem of a tensor T,

    T x = λ x ,   resp.   (T^i_k − λ δ^i_k) x^k = 0 ,

has non-trivial solutions iff

    det (T − λ 1) = 0 ,   i.e.   det (T^i_k − λ δ^i_k) = 0 ,

with the characteristic polynomial

    f(λ) = I₃ − λ I₂ + λ² I₁ − λ³ = 0 .

If T = Tᵀ, T^i_k ∈ R, then the eigenvectors are orthogonal and the eigenvalues are real.

invariants of a tensor

    I₁ = λ₁ + λ₂ + λ₃ = tr T = T^i_i ,
    I₂ = λ₁ λ₂ + λ₂ λ₃ + λ₃ λ₁ = (1/2) [(tr T)² − tr T²] = (1/2) [T^i_i T^k_k − T^i_k T^k_i] ,
    I₃ = λ₁ λ₂ λ₃ = det T = det (T^i_k) .

A vector field v = v(x) is differentiable in x, if a linear mapping L(x) exists, such that

    v(x + y) = v(x) + L(x) y + O(y²) ,   if |y| → 0 .

The mapping L(x) is called the gradient or Fréchet derivative v′(x), also represented by the operator

    L(x) = grad v(x) .

Analogously for a scalar valued vector function α(x),

    α(x + y) = α(x) + grad α(x) · y + O(y²) .

rules

    grad (αβ) = α grad β + β grad α ,
    grad (v · w) = (grad v)ᵀ · w + (grad w)ᵀ · v ,
    grad (αv) = v ⊗ grad α + α grad v ,
    grad (v ⊗ w) = (grad v) ⊗ w + v ⊗ grad w .

The gradient of a scalar valued vector function leads to a vector valued vector function. The gradient of a vector valued vector function leads analogously to a tensor valued vector function.

divergence of a vector

    div v = tr (grad v) = grad v : 1 ,

divergence of a tensor (a an arbitrary constant vector)

    a · div T = div (Tᵀ · a) = grad (Tᵀ · a) : 1 ,

rules

    div (αv) = v · grad α + α div v ,
    div (T · v) = v · div Tᵀ + Tᵀ : grad v ,
    div (grad v)ᵀ = grad (div v) .
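The invariants and the characteristic polynomial can be illustrated with an assumed symmetric sample tensor; since T = Tᵀ, the eigenvalues are real and the eigenvectors orthogonal, as stated above:

```python
import numpy as np

# An assumed symmetric sample tensor (Cartesian components).
T = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

I1 = np.trace(T)
I2 = 0.5 * (np.trace(T)**2 - np.trace(T @ T))
I3 = np.linalg.det(T)

lam = np.linalg.eigvalsh(T)          # real, ascending eigenvalues (T symmetric)
assert np.isclose(I1, lam.sum())
assert np.isclose(I2, lam[0] * lam[1] + lam[1] * lam[2] + lam[2] * lam[0])
assert np.isclose(I3, lam.prod())

# f(lambda) = I3 - lambda*I2 + lambda^2*I1 - lambda^3 vanishes at each eigenvalue.
f = I3 - lam * I2 + lam**2 * I1 - lam**3
assert np.allclose(f, 0.0, atol=1e-9)

# Eigenvectors of a symmetric tensor are orthogonal.
_, V = np.linalg.eigh(T)
assert np.allclose(V.T @ V, np.eye(3))
```

The vanishing of f at every eigenvalue is just the characteristic polynomial written with the invariants in the sign convention of the formulary.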
A.2.3 Derivatives of Base Vectors in Components Notation

    ∂(···)_k / ∂θ^i = (···)_{k,i} ,   with   Γ_(k) = grad (g_k) = g_{k,i} ⊗ g^i ,
    g_{i,k} = Γ_(i) g_k = (Γ^s_{il} g_s ⊗ g^l) g_k = Γ^s_{ik} g_s ;
    g_{i,k} · g^s = Γ^s_{ik}   etc.   g^i_{,k} = −Γ^i_{sk} g^s ,
    Γ_{ikl} = g_{ls} Γ^s_{ik}   etc.   Γ_{ikl} = (1/2)(g_{kl,i} + g_{il,k} − g_{ik,l}) ,
    e_{i,k} = 0 .

A.2.4 Components Notation of Vector Derivatives

    grad v(x) = ∂v/∂θ^k ⊗ g^k = ∂(v^i g_i)/∂θ^k ⊗ g^k
              = (∂v^i/∂θ^k) g_i ⊗ g^k + v^i (∂g_i/∂θ^k) ⊗ g^k
              = v^i_{,k} g_i ⊗ g^k + v^i g_{i,k} ⊗ g^k ,
    grad v(x) = (v^i_{,k} + v^s Γ^i_{sk}) g_i ⊗ g^k = v^i|_k g_i ⊗ g^k ,
    div v(x) = tr (grad v) = v^i_{,i} + v^s Γ^i_{si} = v^i|_i ,

    div T(x) = (∂T/∂θ^k) g^k = [∂(T^ij g_i ⊗ g_j)/∂θ^k] g^k
             = T^ij_{,k} (g_i ⊗ g_j) g^k + T^ij [(∂g_i/∂θ^k) ⊗ g_j] g^k + T^ij [g_i ⊗ (∂g_j/∂θ^k)] g^k
             = T^ik_{,k} g_i + T^ik Γ^s_{ik} g_s + T^ij g_i (Γ^s_{jk} g_s · g^k)
             = (T^ik_{,k} + T^mk Γ^i_{mk} + T^ij Γ^k_{jk}) g_i = T^ik|_k g_i
             = (T_i{}^k_{,k} − T_m{}^k Γ^m_{ik} + T_i{}^m Γ^k_{mk}) g^i = T_i{}^k|_k g^i .

A.2.6 Integral Theorems, Divergence Theorems

    ∫_A u · n da = ∫_V div u dV ,
    ∫_A u^i n_i da = ∫_V u^i|_i dV ,
    ∫_A T · n da = ∫_V div T dV ,
    ∫_A T_i{}^k n_k g^i da = ∫_V T_i{}^k|_k g^i dV ,

with n normal vector of the surface element, A surface, V volume.
Appendix B
Nomenclature
Notation Description
α, β, γ, . . . scalar quantities in R
a, b, c, . . . column matrices or vectors in Rn
aT , bT , cT , . . . row matrices or vectors in Rn
A, B, C, . . . matrices in Rn ⊗ Rn
a, b, c, . . . vectors or first order tensors in En
A, B, C, . . . second order tensors in En ⊗ En
³A, ³B, ³C, . . . third order tensors in En ⊗ En ⊗ En
A, B, C, . . . fourth order tensors in En ⊗ En ⊗ En ⊗ En
Notation Description
tr the trace operator of a tensor or a matrix
det the determinant operator of a tensor or a matrix
sym the symmetric part of a tensor or a matrix
skew the antisymmetric or skew part of a tensor or a matrix
dev the deviator part of a tensor or a matrix
grad = ∇ the gradient operator
div the divergence operator
rot the rotation operator
∆ the laplacian or the Laplace operator
Notation Description
R the set of the real numbers
R³ the set of real-valued triples
E³ the 3-dimensional Euclidean vector space
E³ ⊗ E³ the space of second order tensors over the Euclidean vector space
{e₁, e₂, e₃} 3-dimensional Cartesian basis
{g₁, g₂, g₃} 3-dimensional arbitrary covariant basis
{g¹, g², g³} 3-dimensional arbitrary contravariant basis
g_ij covariant metric coefficients
g^ij contravariant metric coefficients
g = g^ij g_i ⊗ g_j metric tensor
Glossary English – German
basis of the vector space - Basis eines Vektorraums, 15
bijective - bijektiv, 8
bilinear - bilinear, 25
bilinear form - Bilinearform, 26
binormal unit - Binormaleneinheitsvektor, 137
binormal unit vector - Binormaleneinheitsvektor, 137
Cartesian base vectors - kartesische Basisvektoren, 88
Cartesian basis - kartesische Basis, 149
Cartesian components of a permutation tensor - kartesische Komponenten des Permutationstensors, 87
Cartesian coordinates - kartesische Koordinaten, 78, 82, 144
Cauchy stress tensor - Cauchy-Spannungstensor, 96, 120
Cauchy's inequality - Schwarzsche oder Cauchy-Schwarzsche Ungleichung, 21
Cayley-Hamilton Theorem - Cayley-Hamilton Theorem, 71
characteristic equation - charakteristische Gleichung, 65
characteristic matrix - charakteristische Matrix, 70
characteristic polynomial - charakteristisches Polynom, 56, 65, 123
Christoffel symbol - Christoffel-Symbol, 142
cofactor - Kofaktor, algebraisches Komplement, 51
column - Spalte, 40
column index - Spaltenindex, 40
column matrix - Spaltenmatrix, 28, 40, 46
column vector - Spaltenvektor, 28, 40, 46, 59
combination - Kombination, 9, 34, 54
commutative - kommutativ, 43
commutative matrix - kommutative Matrix, 43
commutative rule - Kommutativgesetz, 10
commutative under matrix addition - kommutativ bzgl. Matrizenaddition, 42
compatibility of vector and matrix norms - Verträglichkeit von Vektor- und Matrix-Norm, 22
compatible - verträglich, 22
complete fourth order tensor - vollständiger Tensor vierter Stufe, 129
complete second order tensor - vollständiger Tensor zweiter Stufe, 99
complete third order tensor - vollständiger Tensor dritter Stufe, 129
complex conjugate eigenvalues - konjugiert komplexe Eigenwerte, 124
complex numbers - komplexe Zahlen, 7
components - Komponenten, 31, 65
components of the Christoffel symbol - Komponenten des Christoffel-Symbols, 142
components of the permutation tensor - Komponenten des Permutationstensors, 87
composition - Komposition, 34, 54, 106
congruence transformation - Kongruenztransformation, kontragrediente Transformation, 56, 63
congruent - kongruent, 56, 63
continuum - Kontinuum, 96
contravariant ε symbol - kontravariantes ε-Symbol, 93
contravariant base vectors - kontravariante Basisvektoren, 81, 139
contravariant base vectors of the natural basis - kontravariante Basisvektoren der natürlichen Basis, 141
contravariant coordinates - kontravariante Koordinaten, Koeffizienten, 80, 84
contravariant metric coefficients - kontravariante Metrikkoeffizienten, 82, 83
coordinates - Koordinaten, 31
covariant ε symbol - kovariantes ε-Symbol, 92
covariant base vectors - kovariante Basisvektoren, 80, 138
covariant base vectors of the natural basis - kovariante Basisvektoren der natürlichen Basis, 140
covariant coordinates - kovariante Koordinaten, Koeffizienten, 81, 84
covariant derivative - kovariante Ableitung, 146, 149
covariant metric coefficients - kovariante Metrikkoeffizienten, 80, 83
cross product - Kreuzprodukt, 87, 90, 96
curl - Rotation, 150
curvature - Krümmung, 135
curvature of a curve - Krümmung einer Kurve, 136
curved surface - Raumfläche, gekrümmte Oberfläche, 138
curvilinear coordinate system - krummliniges Koordinatensystem, 139
curvilinear coordinates - krummlinige Koordinaten, 139, 144
definite metric - definite Metrik, 16
definite norm - definite Norm, 18
directions of principal stress - Hauptspannungsrichtungen, 120
discrete metric - diskrete Metrik, 17
distance - Abstand, 17
distributive - distributiv, 42
distributive law - Distributivgesetz, 10
distributive w.r.t. addition - Distributivgesetz, 10
divergence of a tensor field - Divergenz eines Tensorfeldes, 147
divergence of a vector field - Divergenz eines Vektorfeldes, 147
divergence theorem - Divergenztheorem, 156
domain - Definitionsbereich, 8
Euclidean vector space - euklidischer Vektorraum, 26, 29, 143
Euclidean matrix norm - Euklidische Matrixnorm, 60
even permutation - gerade Permutation, 50
exact differential - vollständiges Differential, 133, 135
field - Feld, 143
field - Körper, 10
finite - endlich, 13
finite element method - Finite-Element-Methode, 57
first order tensor - Tensor erster Stufe, 127
fourth order tensor - Tensor vierter Stufe, 129
Fréchet derivative - Fréchet-Ableitung, 143
free indices - freier Index, 78
function - Funktion, 8
fundamental tensor - Fundamentaltensor, 150
Gauss transformation - Gaußsche Transformation, 59
Gauss's theorem - Gauss'scher Integralsatz, 155, 158
general eigenvalue problem - allgemeines Eigenwertproblem, 69
general permutation symbol - allgemeines Permutationssymbol, 92
gradient - Gradient, 143
gradient of a vector of position - Gradient eines Ortsvektors, 144
higher order tensor - Tensor höherer Stufe, 127
homeomorphic - homöomorph, 30, 97
homeomorphism - Homöomorphismus, 30
homogeneous - homogen, 32
homogeneous linear equation system - homogenes lineares Gleichungssystem, 65
homogeneous norm - homogene Norm, 18
homomorphism - Homomorphismus, 32
Hooke's law - Hookesches Gesetz, 129
inner product - inneres Produkt, 25, 85
inner product of tensors - inneres Produkt von Tensoren, 110
inner product space - innerer Produktraum, 26, 27
integers - ganze Zahlen, 7
integral theorem - Integralsatz, 156
intersection - Schnittmenge, 7
invariance - Invarianz, 57
invariant - Invariante, 120, 123, 148
invariant - invariant, 57
inverse - Inverse, 8, 34
inverse of a matrix - inverse Matrix, 48
inverse of a tensor - inverser Tensor, 115
inverse relation - inverse Beziehung, 103
inverse transformation - inverse Transformation, 101, 103
inverse w.r.t. addition - inverses Element der Addition, 10
inverse w.r.t. multiplication - inverses Element der Multiplikation, 10
inversion - Umkehrung, 48
invertible - invertierbar, 48
isomorphic - isomorph, 29, 35
TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003

Glossary English – German
tensor with contravariant base vectors and covariant coordinates - Tensor mit kontravarianten Basisvektoren und kovarianten Koeffizienten . . . 100
tensor with covariant base vectors and contravariant coordinates - Tensor mit kovarianten Basisvektoren und kontravarianten Koeffizienten . . . 99
tensor-valued function of multiple variables - tensorwertige Funktion mehrerer Veränderlicher . . . 134
tensor-valued scalar function - tensorwertige Skalarfunktion . . . 133
tensor-valued vector function - tensorwertige Vektorfunktion . . . 143
third order fundamental tensor - Fundamentaltensor dritter Stufe . . . 128
third order tensor - Tensor dritter Stufe . . . 127
topology - Topologie . . . 30
torsion of a curve - Torsion einer Kurve . . . 137
total differential - vollständiges Differential . . . 133
trace of a matrix - Spur einer Matrix . . . 43
trace of a tensor - Spur eines Tensors . . . 112
transformation matrix - Transformationsmatrix . . . 55
transformation of base vectors - Transformation der Basisvektoren . . . 101
transformation of the metric coefficients - Transformation der Metrikkoeffizienten . . . 84
transformation relations - Transformationsformeln . . . 103
transformation tensor - Transformationstensor . . . 101
transformed contravariant base vector - transformierter kontravarianter Basisvektor . . . 103
transformed covariant base vector - transformierter kovarianter Basisvektor . . . 103
transpose of a matrix - transponierte Matrix . . . 41
transpose of a matrix product - transponiertes Matrizenprodukt . . . 44
transpose of a tensor - transponierter Tensor . . . 114
triangle inequality - Dreiecksungleichung . . . 16, 19
trivial solution - triviale Lösung . . . 65
union - Vereinigungsmenge . . . 7
unit matrix - Einheitsmatrix . . . 80
unitary space - unitärer Raum . . . 27
unitary vector space - unitärer Vektorraum . . . 29
usual scalar product - übliches Skalarprodukt . . . 25
vector - Vektor . . . 12, 28, 127
vector field - Vektorfeld . . . 143
vector function - Vektorfunktion . . . 135
vector norm - Vektor-Norm . . . 18, 22
vector of associated direction - Richtungsvektoren . . . 120
vector of position - Ortsvektoren . . . 143
vector product - Vektorprodukt . . . 87
vector space - Vektorraum . . . 12, 49
vector space of linear mappings - Vektorraum der linearen Abbildungen . . . 33
vector-valued function - vektorwertige Funktion . . . 138
vector-valued function of multiple variables - vektorwertige Funktion mehrerer Veränderlicher . . . 134
vector-valued scalar function - vektorwertige Skalarfunktion . . . 133
vector-valued vector function - vektorwertige Vektorfunktion . . . 143
visual space - Anschauungsraum . . . 97
volume - Volumen . . . 152
volume element - Volumenelement . . . 152
volume integral - Volumenintegral . . . 152
volumetric matrix - Kugelmatrix . . . 46
volumetric part of a tensor - Kugelanteil eines Tensors . . . 113
von Mises iteration - von Mises Iteration . . . 68
whole numbers - natürliche Zahlen . . . 7
Young’s modulus - Elastizitätsmodul . . . 129
zero element - Nullelement . . . 10
zero vector - Nullvektor . . . 12
zeros - Nullstellen . . . 65
Glossary German – English
L2-Norm - L2-norm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
l2-Norm, euklidische Norm - l2-norm, Euclidean norm . . . 19
n-Tupel - n-tuple . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
Abbildung - map . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
Abbildung - mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
Ableitung einer skalaren Größe - derivative of a scalar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
Ableitung eines Tensors - derivative of a tensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
Ableitung eines Vektors - derivative of a vector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
Ableitung nach einer skalaren Größe - derivative w.r.t. a scalar variable . . . 133
Ableitungen - derivatives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
Ableitungen von Basisvektoren - derivatives of base vectors . . . . . . . . . . . . . . . . . . . . . 141, 145
Abstand - distance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
additionsneutrales Element - additive identity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10, 42
Additionsoperation - operation addition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
additiv - additive . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
ähnlich, kogredient - similar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55, 69
Ähnlichkeitstransformation, kogrediente Transformation - similarity transformation . . . 55
äußeres Produkt - outer product . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
affiner Vektor - affine vector . . . 28
affiner Vektorraum - affine vector space . . . 28, 54
allgemeines Eigenwertproblem - general eigenvalue problem . . . . . . . . . . . . . . . . . . . . . . . . . . 69
allgemeines Permutationssymbol - general permutation symbol . . . 92
Anschauungsraum - visual space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
einfacher Tensor vierter Stufe - simple fourth order tensor . . . 129
einfacher Tensor zweiter Stufe - simple second order tensor . . . 94, 99
Einheitsmatrix - identity matrix . . . 41, 45
Einheitsmatrix - unit matrix . . . 80
Einheitsmatrix, Identität - identity matrix . . . 80
Einheitstensor - identity tensor . . . 112
Einselement - one . . . 10
elastisch - elastic . . . 129
Elastizitätsmodul - Young’s modulus . . . 129
Elastizitätstensor - elasticity tensor . . . 129
Elastizitätstheorie - elasticity theory . . . 129
Elemente - elements . . . 6
endlich - finite . . . 13
Euklidische Matrixnorm - Euclidean matrix norm . . . 60
euklidische Norm - Euclidean norm . . . 22, 30, 85
euklidische Vektoren - Euclidean vector . . . 143
Euklidischer Raum - Euclidean space . . . 17
euklidischer Vektorraum - Euclidean vector space . . . 26, 29, 143
Feld - field . . . 143
Finite-Element-Methode - finite element method . . . 57
Flächenvektor - area vector . . . 152
Formänderungsenergie - deformation energy . . . 129
Frechet Ableitung - Frechet derivative . . . 143
freier Index - free indices . . . 78
Frenetsche Formeln - Serret-Frenet equations . . . 137
Fundamentaltensor - fundamental tensor . . . 150
Fundamentaltensor dritter Stufe - third order fundamental tensor . . . 128
Funktion - function . . . 8
ganze Zahlen - integers . . . 7
Gauss’scher Integralsatz - Gauss’s theorem . . . 155, 158
Gaußsche Transformation - Gauss transformation . . . 59
gedrehtes Koordinatensystem - rotated coordinate system . . . 119
gemischte Formulierung eines Tensors zweiter Stufe - mixed formulation of a second order tensor . . . 99
gemischte Komponenten - mixed components . . . 99
gerade Permutation - even permutation . . . 50
Gesamtnorm - absolute norm . . . 22
Gleichgewicht der äußeren Kräfte - equilibrium system of external forces . . . 96
Gleichgewichtsbedingungen - equilibrium conditions . . . 96
Gradient - gradient . . . 143
Gradient eines Ortsvektors - gradient of a vector of position . . . 144
Hauptachse - principal axis . . . 65
Hauptachsen - principal axes . . . 120
Hauptachsenproblem - principal axes problem . . . 65
Hauptdiagonale - main diagonal . . . 41, 65
Hauptspannungen - principal stresses . . . 122
Hauptspannungsrichtungen - directions of principal stress . . . 120
Hauptspannungsrichtungen - principal stress directions . . . 122
Heben eines Index - raising an index . . . 83
homogen - homogeneous . . . 32
homogene Norm - homogeneous norm . . . 18
homogenes lineares Gleichungssystem - homogeneous linear equation system . . . 65
Homomorphismus - homomorphism . . . 32
homöomorph - homeomorphic . . . 30, 97
Homöomorphismus - homeomorphism . . . 30
Hookesches Gesetz - Hooke’s law . . . 129
Höldersche Ungleichung - Hölder sum inequality . . . 21
Hülle - span . . . 15
infinitesimal - infinitesimal . . . 96
infinitesimaler Tetraeder - infinitesimal tetrahedron . . . 96
injektiv - injective . . . 8
innerer Produktraum - inner product space . . . 26, 27, 29
inneres Produkt - inner product . . . 25, 85
inneres Produkt von Tensoren - inner product of tensors . . . 110
Integralnorm, L1-Norm - L1-norm . . . 19
Integralsatz - integral theorem . . . 156
invariant - invariant . . . 57
Invariante - invariant . . . 120, 123, 148
Invarianz - invariance . . . 57
Inverse - inverse . . . 8, 34
inverse Beziehung - inverse relation . . . 103
inverse Matrix - inverse of a matrix . . . 48
inverse Transformation - inverse transformation . . . 101, 103
inverser Tensor - inverse of a tensor . . . 115
inverses Element der Addition - additive inverse . . . 10, 42
inverses Element der Addition - inverse w.r.t. addition . . . 10
inverses Element der Multiplikation - inverse w.r.t. multiplication . . . 10
inverses Element der Multiplikation - multiplicative inverse . . . 10
invertierbar - invertible . . . 48
isomorph - isomorphic . . . 29, 35
Isomorphismus - isomorphism . . . 35
isotrop - isotropic . . . 129
isotroper Tensor - isotropic tensor . . . 119
Iterationsvorschrift - iterative process . . . 68
Jacobi-Determinante - Jacobian . . . 140
kartesische Basis - Cartesian basis . . . 149
kartesische Basisvektoren - Cartesian base vectors . . . 88
kartesische Komponenten des Permutationstensors - Cartesian components of a permutation tensor . . . 87
kartesische Koordinaten - Cartesian coordinates . . . 78, 82, 144
Kofaktor, algebraisches Komplement - cofactor . . . 51
Kombination - combination . . . 9, 34, 54
kommutativ - commutative . . . 43
kommutativ bzgl. Matrizenaddition - commutative under matrix addition . . . 42
kommutative Matrix - commutative matrix . . . 43
Kommutativgesetz - commutative rule . . . 10
komplexe Zahlen - complex numbers . . . 7
Komponenten - components . . . 31, 65
Komponenten des Christoffel-Symbols - components of the Christoffel symbol . . . 142
Komponenten des Permutationstensors - components of the permutation tensor . . . 87
Komposition - composition . . . 34, 54, 106
kongruent - congruent . . . 56, 63
Kongruenztransformation, kontragrediente Transformation - congruence transformation . . . 56, 63
konjugiert komplexe Eigenwerte - complex conjugate eigenvalues . . . 124
Kontinuum - continuum . . . 96
kontravariante Basisvektoren - contravariant base vectors . . . 81, 139
kontravariante Basisvektoren der natürlichen Basis - contravariant base vectors of the natural basis . . . 141
kontravariante Koordinaten, Koeffizienten - contravariant coordinates . . . 80, 84
kontravariante Metrikkoeffizienten - contravariant metric coefficients . . . 82, 83
kontravariantes ε-Symbol - contravariant ε symbol . . . 93
Koordinaten - coordinates . . . 31
Koordinatenursprung, -nullpunkt - point of origin . . . 28
kovariante Ableitung - covariant derivative . . . 146, 149
kovariante Basisvektoren - covariant base vectors . . . 80, 138
kovariante Basisvektoren der natürlichen Basis - covariant base vectors of the natural basis . . . 140
kovariante Koordinaten, Koeffizienten - covariant coordinates . . . 81, 84
kovariante Metrikkoeffizienten - covariant metric coefficients . . . 80, 83
kovariantes ε-Symbol - covariant ε symbol . . . 92
Kreuzprodukt - cross product . . . 87, 90, 96
Kronecker-Delta - Kronecker delta . . . 52, 79, 119
krummlinige Koordinaten - curvilinear coordinates . . . 139, 144
krummliniges Koordinatensystem - curvilinear coordinate system . . . 139
nicht kommutativ - noncommutative . . . 69, 106
nicht leere Menge - non empty set . . . 13
nicht triviale Lösung - nontrivial solution . . . 65
nicht-kommutativ - non-commutative . . . 45
Norm - norm . . . 18, 65
Norm eines Tensors - norm of a tensor . . . 111
normale Basis - normal basis . . . 103
Normaleneinheitsvektor - normal unit vector . . . 121, 136, 138
Normalenvektor - normal vector . . . 96, 135
normierter Raum - normed space . . . 18
Normquadrate - quadratic value of the norm . . . 60
Nullabbildung - null mapping . . . 33
Nullelement - zero element . . . 10
Nullstellen - roots . . . 65
Nullstellen - zeros . . . 65
Nullvektor - zero vector . . . 12
obenstehender Index - superscript index . . . 78
obere Schranke - supremum . . . 22
Oberfläche - surface . . . 152
Oberflächenelement - surface element . . . 152
Oberflächenintegral - surface integral . . . 152
Obermenge - superset . . . 7
Operation - operation . . . 9
Ordnung einer Matrix - order of a matrix . . . 40
orthogonal - orthogonal . . . 66
orthogonale Transformation - orthogonal transformation . . . 57
orthogonale Matrix - orthogonal matrix . . . 57
orthogonaler Tensor - orthogonal tensor . . . 116
orthonormale Basis - orthonormal basis . . . 144
Ortsvektor - position vector . . . 135, 152
Ortsvektoren - vector of position . . . 143
Parallelepiped - parallelepiped . . . 88
partielle Ableitungen - partial derivatives . . . 134
partielle Ableitungen von Basisvektoren - partial derivatives of base vectors . . . 145
Permutationen - permutations . . . 50
Permutationssymbol - permutation symbol . . . 87, 112, 128
Permutationstensor - permutation tensor . . . 128
polare Zerlegung - polar decomposition . . . 117
Polynom n-ten Grades - polynomial of n-th degree . . . 65
Polynomzerlegung - polynomial factorization . . . 66
positiv definit - positive definite . . . 25, 61, 62, 111
positive Metrik - positive metric . . . 16
positive Norm - positive norm . . . 18
Potentialeigenschaft - potential character . . . 129
Potenzreihe - power series . . . 71
Produkt - product . . . 10
Produkt von Tensoren zweiter Stufe - second order tensor product . . . 105
Punktprodukt - dot product . . . 85
quadratisch - square . . . 40
quadratische Form - quadratic form . . . 26, 57, 62, 124
quadratische Matrix - square matrix . . . 40
Querkontraktionszahl - Poisson’s ratio . . . 129
Rang - rank . . . 48
Rangabfall - reduction of rank . . . 66
rationale Zahlen - rational numbers . . . 7
Raum - space . . . 12
Raum der quadratischen Matrizen - space of square matrices . . . 14
Raum der stetigen Funktionen - space of continuous functions . . . 14
Raumfläche, gekrümmte Oberfläche - curved surface . . . 138
Raumkurve - space curve . . . 135
Rayleigh-Quotient - Rayleigh quotient . . . 67, 68
Rechenregeln für Skalarprodukte von Tensoren - identities for scalar products of tensors . . . 110
Tensor erster Stufe - first order tensor, 127
Tensor höherer Stufe - higher order tensor, 127
Tensor mit kontravarianten Basisvektoren und kovarianten Koeffizienten - tensor with contravariant base vectors and covariant coordinates, 100
Tensor mit kovarianten Basisvektoren und kontravarianten Koeffizienten - tensor with covariant base vectors and contravariant coordinates, 99
Tensor vierter Stufe - fourth order tensor, 129
Tensor zweiter Stufe - second order tensor, 96, 97, 127
Tensorfeld - tensor field, 143
Tensorprodukt - tensor product, 105, 106
Tensorprodukt zweier Dyadenprodukte - tensor product of two dyads, 106
Tensorraum - tensor space, 94
tensorwertige Funktion mehrerer Veränderlicher - tensor-valued function of multiple variables, 134
tensorwertige Skalarfunktion - tensor-valued scalar function, 133
tensorwertige Vektorfunktion - tensor-valued vector function, 143
Topologie - topology, 30
Torsion einer Kurve - torsion of a curve, 137
Transformationsformeln - transformation relations, 103
Transformation der Basisvektoren - transformation of base vectors, 101
Transformation der Metrikkoeffizienten - transformation of the metric coefficients, 84
Transformationsmatrix - transformation matrix, 55
Transformationstensor - transformation tensor, 101
transformierter kontravarianter Basisvektor - transformed contravariant base vector, 103
transformierter kovarianter Basisvektor - transformed covariant base vector, 103
transponierte Matrix - matrix transpose, 41
transponierte Matrix - transpose of a matrix, 41
transponierter Tensor - transpose of a tensor, 114
transponiertes Matrizenprodukt - transpose of a matrix product, 44
triviale Lösung - trivial solution, 65

überstrichene Basis - overlined basis, 103
übliches Skalarprodukt - usual scalar product, 25
Umkehrung - inversion, 48
ungerade Permutation - odd permutation, 50
unitärer Raum - unitary space, 27
unitärer Vektorraum - unitary vector space, 29
unsymmetrisch, nicht symmetrisch - nonsymmetric, 69
untenstehender Index - subscript index, 78
Untermenge - subset, 7
Urbild - range, 8
Ursprung, Nullelement - origin, 12

Vektor - vector, 12, 28, 127
Vektor-Norm - vector norm, 18, 22
Vektorfeld - vector field, 143
Vektorfunktion - vector function, 135
Vektorprodukt - vector product, 87
Vektorraum - vector space, 12, 49
Vektorraum der linearen Abbildungen - vector space of linear mappings, 33
vektorwertige Funktion - vector-valued function, 138
vektorwertige Funktion mehrerer Veränderlicher - vector-valued function of multiple variables, 134
vektorwertige Skalarfunktion - vector-valued scalar function, 133
vektorwertige Vektorfunktion - vector-valued vector function, 143
Vereinigungsmenge - union, 7
verträglich - compatible, 22
Verträglichkeit von Vektor- und Matrix-Norm - compatibility of vector and matrix norms, 22
Verzerrungstensor, Dehnungstensor - strain tensor, 129
Vietasche Wurzelsätze - Newton's relation, 66
vollständiger Tensor zweiter Stufe - complete second order tensor, 99
vollständiger Tensor dritter Stufe - complete third order tensor, 129
Index
linear transformation, 32
linear vector space, 12
linearity, 32, 34
linearly dependent, 15, 23, 49
linearly independent, 15, 23, 48, 59, 66
lowering an index, 83

main diagonal, 41, 65
map, 8
mapping, 8
matrix, 40
matrix calculus, 28
matrix multiplication, 42, 54
matrix norm, 21, 22
matrix transpose, 41
maximum absolute column sum norm, 22
maximum absolute row sum norm, 22
maximum-norm, 20
mean value, 153
metric, 16
metric coefficients, 138
metric space, 17
metric tensor of covariant coefficients, 102
mixed components, 99
mixed formulation of a second order tensor, 99
moment equilibrium condition, 120
moving trihedron, 137
multiple roots, 70
multiplicative identity, 45
multiplicative inverse, 10

n-tuple, 35
nabla operator, 146
natural basis, 140
natural numbers, 6, 7
naturals, 7
negative definite, 62
Newton's relation, 66
non empty set, 13
non-commutative, 45
noncommutative, 69, 106
nonsingular, 48, 59, 66
nonsingular square matrix, 55
nonsymmetric, 69
nontrivial solution, 65
norm, 18, 65
norm of a tensor, 111
normal basis, 103
normal unit, 136
normal unit vector, 121, 136, 138
normal vector, 96, 135
normed space, 18
null mapping, 33

odd permutation, 50
one, 10
operation, 9
operation addition, 10
operation multiplication, 10
order of a matrix, 40
origin, 12
orthogonal, 66
orthogonal matrix, 57
orthogonal tensor, 116
orthogonal transformation, 57
orthonormal basis, 144
outer product, 87
overlined basis, 103

parallelepiped, 88
partial derivatives, 134
partial derivatives of base vectors, 145
permutation symbol, 87, 112, 128
permutation tensor, 128
permutations, 50
point of origin, 28
Poisson's ratio, 129
polar decomposition, 117
polynomial factorization, 66
polynomial of n-th degree, 65
position vector, 135, 152
positive definite, 25, 61, 62, 111
positive metric, 16
positive norm, 18
post-multiplication, 45
potential character, 129
power series, 71
pre-multiplication, 45
principal axes, 120
principal axes problem, 65
principal axis, 65
principal stress directions, 122
principal stresses, 122
product, 10, 12
proper orthogonal tensor, 116

quadratic form, 26, 57, 62, 124
quadratic value of the norm, 60

raising an index, 83
range, 8
rank, 48
rational numbers, 7
Rayleigh quotient, 67, 68
real numbers, 7
rectangular matrix, 40
reduction of rank, 66
Riesz representation theorem, 36
right-hand Cauchy strain tensor, 117
roots, 65
rotated coordinate system, 119
rotation matrix, 58
rotation of a vector field, 150
rotation transformation, 58
rotator, 116
row, 40
row index, 40
row matrix, 40, 45
row vector, 40, 45

scalar field, 143
scalar function, 133
scalar invariant, 149
scalar multiplicative identity, 12
scalar multiplication, 9, 12, 13, 42
scalar multiplication identity, 10
scalar product, 9, 25, 85, 96
scalar product of tensors, 110
scalar product of two dyads, 111
scalar triple product, 88, 90, 152
scalar-valued function of multiple variables, 134
scalar-valued scalar function, 133
scalar-valued vector function, 143
Schwarz inequality, 26, 111
second derivative, 133
second order tensor, 96, 97, 127
second order tensor product, 105
section surface, 121
semidefinite, 62
Serret-Frenet equations, 137
set, 6
set theory, 6
shear stresses, 121
similar, 55, 69
similarity transformation, 55
simple fourth order tensor, 129
simple second order tensor, 94, 99
simple third order tensor, 129
skew part of a tensor, 115
symmetric tensor, 124
space, 12
space curve, 135
space of continuous functions, 14
space of square matrices, 14
span, 15
special eigenvalue problem, 65
spectral norm, 22
square, 40
square matrix, 40
Stokes' theorem, 157
strain tensor, 129
stress state, 96
stress tensor, 96, 129
stress vector, 96
subscript index, 78
subset, 7
summation convention, 78
superscript index, 78
superset, 7
supremum, 22
surface, 152