
A Some Basic Rules of Tensor Calculus

The tensor calculus is a powerful tool for the description of the fundamentals in continuum mechanics and the derivation of the governing equations for applied problems. In general, there are two possibilities for the representation of tensors and tensorial equations:
– the direct (symbolic) notation, and
– the index (component) notation
The direct notation operates with scalars, vectors and tensors as physical objects
defined in the three-dimensional space. A vector (first rank tensor) a is considered
as a directed line segment rather than a triple of numbers (coordinates). A second
rank tensor A is any finite sum of ordered vector pairs A = a ⊗ b + . . . + c ⊗ d. The
scalars, vectors and tensors are handled as invariant (independent of the choice of
the coordinate system) objects. This is the reason for the use of the direct notation
in the modern literature on mechanics and rheology, e.g. [29, 32, 49, 123, 131, 199,
246, 313, 334] among others.
The index notation deals with components or coordinates of vectors and tensors.
For a selected basis, e.g. g_i, i = 1, 2, 3, one can write

a = a^i g_i,   A = (a^i b^j + . . . + c^i d^j) g_i ⊗ g_j

Here Einstein's summation convention is used: an index that appears twice in one
expression is summed from 1 to 3, e.g.

a^k g_k ≡ Σ_{k=1}^{3} a^k g_k,   A^{ik} b_k ≡ Σ_{k=1}^{3} A^{ik} b_k
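The summation convention maps directly to array code. The following sketch (using numpy; the basis and the numerical values are arbitrary illustrations, not taken from the text) evaluates both example sums explicitly and with `numpy.einsum`:

```python
import numpy as np

# Arbitrary illustrative data: coordinates a^k, basis vectors g_k (as rows),
# a coordinate matrix A^{ik} and a vector b_k
a = np.array([1.0, -2.0, 0.5])
g = np.array([[1.0, 0.0, 0.0],
              [0.2, 1.0, 0.0],
              [0.1, 0.3, 1.0]])
A = np.arange(9.0).reshape(3, 3)
b = np.array([0.5, 1.0, -1.0])

# a^k g_k: sum over the repeated (dummy) index k
v_explicit = sum(a[k] * g[k] for k in range(3))
v_einsum = np.einsum("k,ki->i", a, g)

# A^{ik} b_k: sum over k, leaving the free index i
w_explicit = np.array([sum(A[i, k] * b[k] for k in range(3)) for i in range(3)])
w_einsum = np.einsum("ik,k->i", A, b)

assert np.allclose(v_explicit, v_einsum)
assert np.allclose(w_explicit, w_einsum)
```

The `einsum` subscript string spells out exactly which indices are summed, mirroring the convention above.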

In the above examples k is a so-called dummy index. Within the index notation the
basic operations with tensors are defined with respect to their coordinates, e.g. the
sum of two vectors is computed as the sum of their coordinates c_i = a_i + b_i. The
introduced basis remains in the background. It must be remembered that a change
of the coordinate system changes the components of tensors.
In this work we prefer the direct tensor notation over the index one. When solv-
ing applied problems the tensor equations can be translated into the language
of matrices for a specified coordinate system. The purpose of this Appendix is to

give a brief guide to notations and rules of the tensor calculus applied through-
out this work. For more comprehensive overviews on tensor calculus we recom-
mend [54, 96, 123, 191, 199, 311, 334]. The calculus of matrices is presented in
[40, 111, 340], for example. Section A.1 provides a brief overview of basic alge-
braic operations with vectors and second rank tensors. Several rules from tensor
analysis are summarized in Sect. A.2. Basic sets of invariants for different groups
of symmetry transformation are presented in Sect. A.3, where a novel approach to
find the functional basis is discussed.

A.1 Basic Operations of Tensor Algebra

A.1.1 Polar and Axial Vectors


A vector in the three-dimensional Euclidean space is defined as a directed line seg-
ment with specified magnitude (scalar) and direction. The magnitude (the length) of
a vector a is denoted by |a |. Two vectors a and b are equal if they have the same
direction and the same magnitude. The zero vector 0 has a magnitude equal to zero.
In mechanics two types of vectors can be introduced. The vectors of the first type are
directed line segments. These vectors are associated with translations in the three-dimensional space. Examples of polar vectors include the force, the displacement,
the velocity, the acceleration, the momentum, etc. The second type is used to characterize spinor motions and related quantities, i.e. the moment, the angular velocity,
the angular momentum, etc. Figure A.1a shows the so-called spin vector a* which
represents a rotation about a given axis. The direction of rotation is specified by
the circular arrow and the magnitude of rotation is the corresponding length. For
the given spin vector a* the directed line segment a is introduced according to the
following rules [334]:
1. the vector a is placed on the axis of the spin vector a*,
2. the magnitude of a is equal to the magnitude of a*,

Figure A.1 Axial vectors. a Spin vector, b axial vector in the right-screw oriented reference
frame, c axial vector in the left-screw oriented reference frame

3. the vector a is directed according to the right-handed screw rule, Fig. A.1b, or
the left-handed screw rule, Fig. A.1c.
The selection of one of the two cases in 3. corresponds to the convention of orientation of the reference frame [334] (it should not be confused with the right- or
left-handed triples of vectors or coordinate systems). The directed line segment is
called a polar vector if it does not change by changing the orientation of the reference frame. The vector is called axial if it changes its sign by changing the
orientation of the reference frame. The above definitions are valid for scalars and
tensors of any rank too. The axial vectors (and tensors) are widely used in the rigid
body dynamics, e.g. [333], in the theories of rods, plates and shells, e.g. [25], in the
asymmetric theory of elasticity, e.g. [231], as well as in the dynamics of micro-polar
media, e.g. [108]. When dealing with polar and axial vectors it should be remembered
that they have different physical meanings. Therefore, a sum of a polar and an axial
vector is meaningless.

A.1.2 Operations with Vectors


Addition. For a given pair of vectors a and b of the same type the sum c = a + b
is defined according to one of the rules in Fig. A.2. The sum has the following
properties
a + b = b + a (commutativity),
(a + b) + c = a + (b + c) (associativity),
a + 0 = a

Multiplication by a Scalar. For any vector a and for any scalar α a vector b = αa
is defined in such a way that
– |b| = |α||a|,
– for α > 0 the direction of b coincides with that of a,
– for α < 0 the direction of b is opposite to that of a.
For α = 0 the product yields the zero vector, i.e. 0a = 0. It is easy to verify that

α(a + b) = αa + αb,   (α + β)a = αa + βa

Figure A.2 Addition of two vectors. a Parallelogram rule, b triangle rule

Figure A.3 Scalar product of two vectors. a Angles between two vectors, b unit vector and
projection

Scalar (Dot) Product of Two Vectors. For any pair of vectors a and b a scalar α
is defined by

α = a · b = |a||b| cos φ,

where φ is the angle between the vectors a and b. Since the cosine is the same for
both, one can use any of the two angles between the vectors, Fig. A.3a. The properties
of the scalar product are

a · b = b · a (commutativity),
a · (b + c) = a · b + a · c (distributivity)

Two nonzero vectors are said to be orthogonal if their scalar product is zero. The
unit vector directed along the vector a is defined by (see Fig. A.3b)

n_a = a / |a|

The projection of the vector b onto the vector a is the vector (b · n_a)n_a, Fig. A.3b.
The length of the projection is |b||cos φ|.
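A short numerical sketch of the unit vector and the projection (numpy; the two sample vectors are our own illustrative choice):

```python
import numpy as np

# Illustrative vectors, not taken from the text
a = np.array([3.0, 0.0, 4.0])
b = np.array([1.0, 2.0, 2.0])

n_a = a / np.linalg.norm(a)        # unit vector directed along a
proj = (b @ n_a) * n_a             # projection of b onto a

# The length of the projection equals |b| |cos phi|
cos_phi = (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
length = np.linalg.norm(b) * abs(cos_phi)
assert np.isclose(np.linalg.norm(proj), length)
```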
Vector (Cross) Product of Two Vectors. For the ordered pair of vectors a and
b the vector c = a × b is defined in the following two steps [334]:
– the spin vector c* is defined in such a way that
  – the axis is orthogonal to the plane spanned on a and b, Fig. A.4a,
  – the circular arrow shows the direction of the shortest rotation from a to b,
    Fig. A.4b,
  – the length is |a||b| sin φ, where φ is the angle of the shortest rotation from a
    to b;
– from the resulting spin vector the directed line segment c is constructed according
to one of the rules listed in Subsect. A.1.1.
The properties of the vector product are

a × b = −b × a,   a × (b + c) = a × b + a × c

The type of the vector c = a × b can be established for known types of the
vectors a and b, [334]. If a and b are polar vectors the result of the cross product

Figure A.4 Vector product of two vectors. a Plane spanned on two vectors, b spin vector, c
axial vector in the right-screw oriented reference frame

will be an axial vector. An example is the moment of momentum for a mass point m
defined by r × (mv), where r is the position of the mass point and v is the velocity
of the mass point. The next example is the formula for the distribution of velocities
in a rigid body, v = ω × r. Here the cross product of the axial vector ω (angular
velocity) with the polar vector r (position vector) results in the polar vector v.
The mixed product of three vectors a, b and c is defined by (a × b) · c. The result
is a scalar. For the mixed product the following identities are valid

a · (b × c) = b · (c × a) = c · (a × b)   (A.1.1)

If the cross product is applied twice, the first operation must be set in parentheses,
e.g., a × (b × c). The result of this operation is a vector. The following relation can
be applied

a × (b × c) = b(a · c) − c(a · b)   (A.1.2)

By use of (A.1.1) and (A.1.2) one can calculate

(a × b) · (c × d) = a · [b × (c × d)]
                  = a · [c(b · d) − d(b · c)]   (A.1.3)
                  = (a · c)(b · d) − (a · d)(b · c)
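The identities (A.1.1)–(A.1.3) are easy to check numerically. A sketch with numpy (random vectors, seed chosen arbitrarily for reproducibility):

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, c, d = rng.standard_normal((4, 3))

# (A.1.1): cyclic invariance of the mixed product
m1 = a @ np.cross(b, c)
assert np.isclose(m1, b @ np.cross(c, a))
assert np.isclose(m1, c @ np.cross(a, b))

# (A.1.2): a x (b x c) = b (a.c) - c (a.b)
lhs = np.cross(a, np.cross(b, c))
rhs = b * (a @ c) - c * (a @ b)
assert np.allclose(lhs, rhs)

# (A.1.3): (a x b).(c x d) = (a.c)(b.d) - (a.d)(b.c)
assert np.isclose(np.cross(a, b) @ np.cross(c, d),
                  (a @ c) * (b @ d) - (a @ d) * (b @ c))
```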

A.1.3 Bases
Any triple of linearly independent vectors e_1, e_2, e_3 is called a basis. A triple of vectors
e_i is linearly independent if and only if e_1 · (e_2 × e_3) ≠ 0.
For a given basis e_i any vector a can be represented as follows

a = a^1 e_1 + a^2 e_2 + a^3 e_3 ≡ a^i e_i

The numbers a^i are called the coordinates of the vector a for the basis e_i. In order to
compute the coordinates the dual (reciprocal) basis e^k is introduced in such a way
that

e^k · e_i = δ^k_i = { 1, k = i,
                      0, k ≠ i

δ^k_i is the Kronecker symbol. The coordinates a^i can be found by

e^i · a = a · e^i = a^m e_m · e^i = a^m δ^i_m = a^i

For the selected basis e_i the dual basis can be found from

e^1 = (e_2 × e_3) / [(e_1 × e_2) · e_3],   e^2 = (e_3 × e_1) / [(e_1 × e_2) · e_3],
e^3 = (e_1 × e_2) / [(e_1 × e_2) · e_3]   (A.1.4)

By use of the dual basis a vector a can be represented as follows

a = a_1 e^1 + a_2 e^2 + a_3 e^3 ≡ a_i e^i,   a_m = a · e_m,   a_m ≠ a^m

In the special case of orthonormal vectors e_i, i.e. |e_i| = 1 and e_i · e_k = 0 for
i ≠ k, it follows from (A.1.4) that e^k = e_k and consequently a^k = a_k.
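The construction (A.1.4) can be sketched numerically. Here a deliberately non-orthogonal basis is chosen for illustration; the code computes the dual basis, checks the duality relation, and reconstructs a vector from its coordinates:

```python
import numpy as np

# A non-orthogonal basis (rows), chosen for illustration
e = np.array([[1.0, 0.0, 0.0],
              [1.0, 1.0, 0.0],
              [1.0, 1.0, 1.0]])
vol = np.cross(e[0], e[1]) @ e[2]          # (e1 x e2) . e3, must be nonzero

# Dual basis via (A.1.4)
E = np.array([np.cross(e[1], e[2]),
              np.cross(e[2], e[0]),
              np.cross(e[0], e[1])]) / vol

# Duality: e^k . e_i = delta^k_i
assert np.allclose(E @ e.T, np.eye(3))

# Coordinates a^i = a . e^i of an arbitrary vector, and its reconstruction
a = np.array([0.3, -1.2, 2.0])
coords = E @ a
assert np.allclose(coords @ e, a)          # a = a^i e_i
```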

A.1.4 Operations with Second Rank Tensors


A second rank tensor is a finite sum of ordered vector pairs A = a ⊗ b + . . . + c ⊗ d
[334]. One ordered pair of vectors is called a dyad. The symbol ⊗ is called the
dyadic (tensor) product of two vectors. A single dyad or a sum of two dyads are
special cases of the second rank tensor. Any finite sum of more than three dyads can
be reduced to a sum of three dyads. For example, let

A = Σ_{i=1}^{n} a^(i) ⊗ b^(i)

be a second rank tensor. Introducing a basis e_k the vectors a^(i) can be represented
by a^(i) = a^(i)_k e^k, where a^(i)_k are the coordinates of the vectors a^(i). Now we may write

A = Σ_{i=1}^{n} a^(i)_k e^k ⊗ b^(i) = e^k ⊗ Σ_{i=1}^{n} a^(i)_k b^(i) = e^k ⊗ d_k,   d_k ≡ Σ_{i=1}^{n} a^(i)_k b^(i)
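The reduction to three dyads can be demonstrated numerically. In the sketch below (numpy; an orthonormal basis and five random dyads are our own illustrative choice) a tensor built from five dyads is rewritten as e_k ⊗ d_k:

```python
import numpy as np

rng = np.random.default_rng(1)
# A second rank tensor built from five dyads a^(i) (x) b^(i)
a_vecs = rng.standard_normal((5, 3))
b_vecs = rng.standard_normal((5, 3))
A = sum(np.outer(ai, bi) for ai, bi in zip(a_vecs, b_vecs))

# Reduction to three dyads e_k (x) d_k with an orthonormal basis e_k,
# where d_k collects the k-th coordinates of all vectors a^(i)
e = np.eye(3)
d = np.array([sum(a_vecs[i, k] * b_vecs[i] for i in range(5))
              for k in range(3)])
A3 = sum(np.outer(e[k], d[k]) for k in range(3))

assert np.allclose(A, A3)
```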

Addition. The sum of two tensors is defined as the sum of the corresponding
dyads. The sum has the properties of associativity and commutativity. In addition,
for the dyadic product the following distributivity rules hold

a ⊗ (b + c) = a ⊗ b + a ⊗ c,   (a + b) ⊗ c = a ⊗ c + b ⊗ c

Multiplication by a Scalar. This operation is introduced first for one dyad. For
any scalar α and any dyad a ⊗ b

α(a ⊗ b) = (αa) ⊗ b = a ⊗ (αb),
(α + β)a ⊗ b = αa ⊗ b + βa ⊗ b   (A.1.5)

By setting α = 0 in the first equation of (A.1.5) the zero dyad can be defined, i.e.
0(a ⊗ b) = 0 ⊗ b = a ⊗ 0. The above operations can be generalized for any finite
sum of dyads, i.e. for second rank tensors.

Inner Dot Product. For any two second rank tensors A and B the inner dot product
is specified by A · B. The rule and the result of this operation can be explained
in the special case of two dyads, i.e. by setting A = a ⊗ b and B = c ⊗ d

A · B = a ⊗ b · c ⊗ d = (b · c) a ⊗ d = ξ a ⊗ d,   ξ ≡ b · c

The result of this operation is a second rank tensor. Note that A · B ≠ B · A in
general. This can again be verified for two dyads. The operation can be generalized
for two second rank tensors as follows

A · B = (Σ_{i=1}^{3} a^(i) ⊗ b^(i)) · (Σ_{k=1}^{3} c^(k) ⊗ d^(k)) = Σ_{i=1}^{3} Σ_{k=1}^{3} (b^(i) · c^(k)) a^(i) ⊗ d^(k)

Transpose of a Second Rank Tensor. The transpose of a second rank tensor
A is constructed by the following rule

A^T = (Σ_{i=1}^{3} a^(i) ⊗ b^(i))^T = Σ_{i=1}^{3} b^(i) ⊗ a^(i)

Double Inner Dot Product. For any two second rank tensors A and B the double
inner dot product is specified by A ·· B. The result of this operation is a scalar. This
operation can be explained for two dyads as follows

A ·· B = a ⊗ b ·· c ⊗ d = (b · c)(a · d)

By analogy to the inner dot product one can generalize this operation for two second
rank tensors. It can be verified that A ·· B = B ·· A for second rank tensors A and
B. For a second rank tensor A and for a dyad a ⊗ b

A ·· a ⊗ b = b · A · a   (A.1.6)

A scalar product of two second rank tensors A and B is defined by

α = A ·· B^T

One can verify that

A ·· B^T = B^T ·· A = B ·· A^T
Dot Products of a Second Rank Tensor and a Vector. The right dot product
of a second rank tensor A and a vector c is defined by

A · c = (Σ_{i=1}^{3} a^(i) ⊗ b^(i)) · c = Σ_{i=1}^{3} (b^(i) · c) a^(i)

For a single dyad this operation is

a ⊗ b · c = a(b · c)

The left dot product is defined by

c · A = c · (Σ_{i=1}^{3} a^(i) ⊗ b^(i)) = Σ_{i=1}^{3} (c · a^(i)) b^(i)

The results of these operations are vectors. One can verify that

A · c ≠ c · A,   A · c = c · A^T

Cross Products of a Second Rank Tensor and a Vector. The right cross
product of a second rank tensor A and a vector c is defined by

A × c = (Σ_{i=1}^{3} a^(i) ⊗ b^(i)) × c = Σ_{i=1}^{3} a^(i) ⊗ (b^(i) × c)

The left cross product is defined by

c × A = c × (Σ_{i=1}^{3} a^(i) ⊗ b^(i)) = Σ_{i=1}^{3} (c × a^(i)) ⊗ b^(i)

The results of these operations are second rank tensors. It can be shown that

A × c = −[c × A^T]^T

Trace. The trace of a second rank tensor is defined by

tr A = tr(Σ_{i=1}^{3} a^(i) ⊗ b^(i)) = Σ_{i=1}^{3} a^(i) · b^(i)

By taking the trace of a second rank tensor the dyadic product is replaced by the dot
product. It can be shown that

tr A = tr A^T,   tr(A · B) = tr(B · A) = tr(A^T · B^T) = A ·· B
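In an orthonormal basis the double inner dot product reduces to the component sum Σ A_ij B_ji, so the trace identities above can be checked directly with component matrices (a numpy sketch with random tensors; seed arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
A, B = rng.standard_normal((2, 3, 3))

# A .. B in an orthonormal basis: sum_ij A_ij B_ji
double_dot = np.sum(A * B.T)

assert np.isclose(np.trace(A), np.trace(A.T))
assert np.isclose(np.trace(A @ B), np.trace(B @ A))
assert np.isclose(np.trace(A @ B), np.trace(A.T @ B.T))
assert np.isclose(np.trace(A @ B), double_dot)
```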

Symmetric Tensors. A second rank tensor is said to be symmetric if it satisfies
the following equality

A = A^T

An alternative definition of the symmetric tensor can be given as follows. A second
rank tensor is said to be symmetric if for any vector c ≠ 0 the following equality is
valid

c · A = A · c

An important example of a symmetric tensor is the unit or identity tensor I, which
is defined in such a way that for any vector c

c · I = I · c = c

The representations of the identity tensor are

I = e_k ⊗ e^k = e^k ⊗ e_k

for any basis e_k and dual basis e^k, e^k · e_m = δ^k_m. For three orthonormal vectors m, n and p

I = n ⊗ n + m ⊗ m + p ⊗ p

A symmetric second rank tensor P satisfying the condition P · P = P is called a
projector. Examples of projectors are

m ⊗ m,   n ⊗ n + p ⊗ p = I − m ⊗ m,

where m, n and p are orthonormal vectors. The result of the dot product of the tensor
m ⊗ m with any vector a is the projection of the vector a onto the line spanned on
the vector m, i.e. m ⊗ m · a = (a · m)m. The result of (n ⊗ n + p ⊗ p) · a is the
projection of the vector a onto the plane spanned on the vectors n and p.
Skew-symmetric Tensors. A second rank tensor is said to be skew-symmetric
if it satisfies the following equality

A = −A^T

or if for any vector c

c · A = −A · c

Any skew-symmetric tensor A can be represented by

A = a × I = I × a

The vector a is called the associated vector. Any second rank tensor can be uniquely
decomposed into the symmetric and skew-symmetric parts

A = ½(A + A^T) + ½(A − A^T) = A_1 + A_2,
A_1 = ½(A + A^T),   A_1 = A_1^T,
A_2 = ½(A − A^T),   A_2 = −A_2^T
Vector Invariant. The vector invariant or "Gibbsian cross" of a second rank tensor
A is defined by

A_× = (Σ_{i=1}^{3} a^(i) ⊗ b^(i))_× = Σ_{i=1}^{3} a^(i) × b^(i)

The result of this operation is a vector. The vector invariant of a symmetric tensor is
the zero vector. The following identities can be verified

(a × I)_× = −2a,   a × I × b = b ⊗ a − (a · b)I
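The skew-symmetric tensor a × I and the vector invariant can be sketched with component matrices (numpy; the helper names `skew` and `vec_invariant` are our own, chosen for illustration):

```python
import numpy as np

def skew(a):
    # Matrix of the tensor a x I, so that (a x I) . c = a x c
    return np.array([[0.0, -a[2], a[1]],
                     [a[2], 0.0, -a[0]],
                     [-a[1], a[0], 0.0]])

def vec_invariant(A):
    # A_x = A_ij e_i x e_j in an orthonormal basis
    e = np.eye(3)
    return sum(A[i, j] * np.cross(e[i], e[j])
               for i in range(3) for j in range(3))

a = np.array([1.0, -2.0, 3.0])
c = np.array([0.5, 0.5, 1.0])
W = skew(a)

assert np.allclose(W, -W.T)                          # skew-symmetry
assert np.allclose(W @ c, np.cross(a, c))            # action of a x I
assert np.allclose(vec_invariant(W), -2.0 * a)       # (a x I)_x = -2a

rng = np.random.default_rng(3)
S = rng.standard_normal((3, 3)); S = S + S.T
assert np.allclose(vec_invariant(S), 0.0)            # symmetric: zero vector
```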

Linear Transformations of Vectors. A vector-valued function of a vector argument
f(a) is said to be linear if f(α_1 a_1 + α_2 a_2) = α_1 f(a_1) + α_2 f(a_2) for any
two vectors a_1 and a_2 and any two scalars α_1 and α_2. It can be shown that any linear
vector-valued function can be represented by f(a) = A · a, where A is a second
rank tensor. In many textbooks, e.g. [32, 293], a second rank tensor A is defined to
be a linear transformation of the vector space into itself.
Determinant and Inverse of a Second Rank Tensor. Let a, b and c be arbitrary
linearly independent vectors. The determinant of a second rank tensor A is
defined by

det A = (A · a) · [(A · b) × (A · c)] / [a · (b × c)]

The following identities can be verified

det(A^T) = det(A),   det(A · B) = det(A) det(B)

The inverse of a second rank tensor, A^{-1}, is introduced as the solution of the following
equation

A^{-1} · A = A · A^{-1} = I

A is invertible if and only if det A ≠ 0. A tensor A with det A = 0 is called
singular. Examples of singular tensors are projectors.
Cayley-Hamilton Theorem. Any second rank tensor satisfies the following
equation

A^3 − J_1(A)A^2 + J_2(A)A − J_3(A)I = 0,   (A.1.7)

where A^2 = A · A, A^3 = A · A · A and

J_1(A) = tr A,   J_2(A) = ½[(tr A)^2 − tr A^2],
J_3(A) = det A = (1/6)(tr A)^3 − ½ tr A tr A^2 + (1/3) tr A^3   (A.1.8)

The scalar-valued functions J_i(A) are called the principal invariants of the tensor A.
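Both (A.1.7) and the trace formula for J_3 in (A.1.8) can be verified numerically for a random tensor (a numpy sketch; the seed is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((3, 3))
I = np.eye(3)

J1 = np.trace(A)
J2 = 0.5 * (np.trace(A)**2 - np.trace(A @ A))
J3 = (np.trace(A)**3 / 6.0
      - 0.5 * np.trace(A) * np.trace(A @ A)
      + np.trace(A @ A @ A) / 3.0)

# J3 coincides with the determinant ...
assert np.isclose(J3, np.linalg.det(A))

# ... and the Cayley-Hamilton residual (A.1.7) vanishes
R = A @ A @ A - J1 * (A @ A) + J2 * A - J3 * I
assert np.allclose(R, 0.0, atol=1e-10)
```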
Coordinates of Second Rank Tensors. Let e_i be a basis and e^k the dual basis.
Any two vectors a and b can be represented as follows

a = a^i e_i = a_j e^j,   b = b^l e_l = b_m e^m

A dyad a ⊗ b has the following representations

a ⊗ b = a^i b^j e_i ⊗ e_j = a^i b_j e_i ⊗ e^j = a_i b^j e^i ⊗ e_j = a_i b_j e^i ⊗ e^j

For the representation of a second rank tensor A one of the following four bases can
be used

e_i ⊗ e_j,   e^i ⊗ e^j,   e^i ⊗ e_j,   e_i ⊗ e^j
With these bases one can write

A = A^{ij} e_i ⊗ e_j = A_{ij} e^i ⊗ e^j = A_i^{.j} e^i ⊗ e_j = A^i_{.j} e_i ⊗ e^j

For a selected basis the coordinates of a second rank tensor can be computed as
follows

A^{ij} = e^i · A · e^j,   A_{ij} = e_i · A · e_j,
A_i^{.j} = e_i · A · e^j,   A^i_{.j} = e^i · A · e_j
Principal Values and Directions of Symmetric Second Rank Tensors.
Consider the dot product of a second rank tensor A and a unit vector n. The resulting
vector a = A · n differs in general from n both in length and in direction.
However, one can find those unit vectors n for which A · n is collinear with n, i.e.
only the length of n is changed. Such vectors can be found from the equation

A · n = λn   or   (A − λI) · n = 0   (A.1.9)

The unit vector n is called a principal vector and the scalar λ a principal value
of the tensor A. Let A be a symmetric tensor. In this case the principal values are
real numbers and there exist at least three mutually orthogonal principal vectors.
The principal values can be found as the roots of the characteristic polynomial

det(A − λI) = −λ^3 + J_1(A)λ^2 − J_2(A)λ + J_3(A) = 0

The principal values are specified by λ_I, λ_II, λ_III. For known principal values and
principal directions the second rank tensor can be represented as follows (spectral
representation)

A = λ_I n_I ⊗ n_I + λ_II n_II ⊗ n_II + λ_III n_III ⊗ n_III
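The spectral representation can be reproduced with a standard symmetric eigensolver (a numpy sketch; the random symmetric tensor is our own illustration):

```python
import numpy as np

rng = np.random.default_rng(5)
S = rng.standard_normal((3, 3))
A = S + S.T                                  # a symmetric second rank tensor

lam, N = np.linalg.eigh(A)                   # principal values and directions
# The principal vectors are mutually orthogonal unit vectors
assert np.allclose(N.T @ N, np.eye(3))

# Spectral representation: A = sum_k lambda_k n_k (x) n_k
A_spec = sum(lam[k] * np.outer(N[:, k], N[:, k]) for k in range(3))
assert np.allclose(A, A_spec)

# Each principal value is a root of the characteristic polynomial
J1 = np.trace(A)
J2 = 0.5 * (J1**2 - np.trace(A @ A))
J3 = np.linalg.det(A)
for l in lam:
    assert np.isclose(-l**3 + J1 * l**2 - J2 * l + J3, 0.0, atol=1e-8)
```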

Orthogonal Tensors. A second rank tensor Q is said to be orthogonal if it satisfies
the equation Q^T · Q = I. If Q operates on a vector its length remains unchanged,
i.e. let b = Q · a, then

|b|^2 = b · b = a · Q^T · Q · a = a · a = |a|^2

Furthermore, the orthogonal tensor does not change the scalar product of two arbitrary
vectors. For two vectors a and b as well as a′ = Q · a and b′ = Q · b one can
calculate

a′ · b′ = a · Q^T · Q · b = a · b

From the definition of the orthogonal tensor it follows that

Q^T = Q^{-1},   Q^T · Q = Q · Q^T = I,
det(Q · Q^T) = (det Q)^2 = det I = 1   ⇒   det Q = ±1

Orthogonal tensors with det Q = 1 are called proper orthogonal or rotation tensors.
The rotation tensors are widely used in the rigid body dynamics, e.g. [333], and in
the theories of rods, plates and shells, e.g. [25, 32]. Any orthogonal tensor is either

the rotation tensor or the composition of the rotation (proper orthogonal tensor) and
the tensor −I. Let P be a rotation tensor, det P = 1; then an orthogonal tensor Q
with det Q = −1 can be composed by

Q = (−I) · P = P · (−I),   det Q = det(−I) det P = −1

For any two orthogonal tensors Q_1 and Q_2 the composition Q_3 = Q_1 · Q_2 is an
orthogonal tensor, too. This property is used in the theory of symmetry and symmetry
groups, e.g. [232, 331]. Two important examples of orthogonal tensors are
– the rotation tensor about a fixed axis

Q(φm) = m ⊗ m + cos φ(I − m ⊗ m) + sin φ m × I,
−π < φ < π,   det Q = 1,

where the unit vector m represents the axis and φ is the angle of rotation;
– the reflection tensor

Q = I − 2n ⊗ n,   det Q = −1,

where the unit vector n represents a normal to the mirror plane.
One can prove the following identities [334]

(Q · a) × (Q · b) = det Q Q · (a × b),
Q · (a × Q^T) = Q · (a × I) · Q^T = det Q [(Q · a) × I]   (A.1.10)
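Both example tensors and the first identity of (A.1.10) can be checked with component matrices (numpy sketch; axis, angle, normal and test vectors are arbitrary illustrative values):

```python
import numpy as np

def skew(m):
    # Matrix of m x I, so that (m x I) . c = m x c
    return np.array([[0.0, -m[2], m[1]],
                     [m[2], 0.0, -m[0]],
                     [-m[1], m[0], 0.0]])

I = np.eye(3)
m = np.array([1.0, 2.0, 2.0]); m /= np.linalg.norm(m)   # rotation axis
n = np.array([0.0, 0.0, 1.0])                           # mirror normal
phi = 0.7                                               # rotation angle

Q_rot = np.outer(m, m) + np.cos(phi) * (I - np.outer(m, m)) \
        + np.sin(phi) * skew(m)
Q_ref = I - 2.0 * np.outer(n, n)

a = np.array([0.3, -1.0, 0.8])
b = np.array([1.1, 0.2, -0.5])
for Q, detQ in [(Q_rot, 1.0), (Q_ref, -1.0)]:
    assert np.allclose(Q.T @ Q, I)                       # orthogonality
    assert np.isclose(np.linalg.det(Q), detQ)
    # (Q.a) x (Q.b) = det Q  Q.(a x b)
    assert np.allclose(np.cross(Q @ a, Q @ b), detQ * (Q @ np.cross(a, b)))
```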

A.2 Elements of Tensor Analysis

A.2.1 Coordinate Systems


The vector r characterizing the position of a point P can be represented by use of
the Cartesian coordinates x_i as follows, Fig. A.5,

r(x_1, x_2, x_3) = x_1 e_1 + x_2 e_2 + x_3 e_3 = x_i e_i

Instead of the coordinates x_i one can introduce any triple of curvilinear coordinates
q^1, q^2, q^3 by means of one-to-one transformations

x_k = x_k(q^1, q^2, q^3)   ⇔   q^k = q^k(x_1, x_2, x_3)

It is assumed that the above transformations are continuous and continuously differentiable
as many times as necessary, and for the Jacobians

det(∂x_k/∂q^i) ≠ 0,   det(∂q^i/∂x_k) ≠ 0

Figure A.5 Cartesian and curvilinear coordinates

must be valid. With these assumptions the position vector can be considered as a
function of the curvilinear coordinates q^i, i.e. r = r(q^1, q^2, q^3). Surfaces q^1 = const,
q^2 = const, and q^3 = const, Fig. A.5, are called coordinate surfaces. For fixed
values of q^2 and q^3 a curve can be obtained along which only q^1
varies. This curve is called the q^1-coordinate line, Fig. A.5. Analogously, one can
obtain the q^2- and q^3-coordinate lines. The partial derivatives of the position vector
with respect to the selected coordinates

r_1 = ∂r/∂q^1,   r_2 = ∂r/∂q^2,   r_3 = ∂r/∂q^3,   r_1 · (r_2 × r_3) ≠ 0

define the tangential vectors to the coordinate lines at a point P, Fig. A.5. The vectors
r_i are used as the local basis at the point P. By use of (A.1.4) the dual basis
r^k can be introduced. The vector dr connecting the point P with a point P′ in the
differential neighborhood of P is defined by

dr = (∂r/∂q^1) dq^1 + (∂r/∂q^2) dq^2 + (∂r/∂q^3) dq^3 = r_k dq^k

The square of the arc length of the line element in the differential neighborhood of
P is calculated by

ds^2 = dr · dr = (r_i dq^i) · (r_k dq^k) = g_ik dq^i dq^k,

where g_ik ≡ r_i · r_k are the so-called covariant components of the metric tensor.
With g_ik one can represent the basis vectors r_i by the dual basis vectors r^k as follows

r_i = (r_i · r_k) r^k = g_ik r^k

Similarly

r^i = (r^i · r^k) r_k = g^ik r_k,   g^ik ≡ r^i · r^k,

where g^ik are termed the contravariant components of the metric tensor. For the selected
bases r_i and r^k the second rank unit tensor has the following representations

I = r_i ⊗ r^i = g_ik r^i ⊗ r^k = g^ik r_i ⊗ r_k = r^i ⊗ r_i
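The tangent basis, the metric components and the dual basis can be computed numerically for any concrete coordinate map. The sketch below uses spherical coordinates purely as an illustration (the map and the helper names are our own assumptions), approximating r_i by central differences and checking the duality r^i · r_k = δ^i_k:

```python
import numpy as np

# Illustrative curvilinear map: spherical coordinates q = (r, theta, phi)
def position(q):
    r, th, ph = q
    return np.array([r * np.sin(th) * np.cos(ph),
                     r * np.sin(th) * np.sin(ph),
                     r * np.cos(th)])

def tangent_basis(q, h=1e-6):
    # r_i = dr/dq^i approximated by central differences
    basis = []
    for i in range(3):
        dq = np.zeros(3); dq[i] = h
        basis.append((position(q + dq) - position(q - dq)) / (2.0 * h))
    return np.array(basis)

q = np.array([2.0, 0.8, 1.3])
r_i = tangent_basis(q)                 # rows: r_1, r_2, r_3
g_cov = r_i @ r_i.T                    # covariant metric g_ik = r_i . r_k
g_con = np.linalg.inv(g_cov)           # contravariant metric g^ik
r_dual = g_con @ r_i                   # dual basis r^i = g^ik r_k

assert np.allclose(r_dual @ r_i.T, np.eye(3), atol=1e-6)
assert np.isclose(g_cov[0, 0], 1.0, atol=1e-5)   # |dr/dr| = 1 for this map
```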

A.2.2 The Hamilton (Nabla) Operator


A scalar field is a function which assigns a scalar ψ to each spatial point P of the
domain of definition. Let us consider a scalar field ψ(r) = ψ(q^1, q^2, q^3). The total
differential of ψ by moving from a point P to a point P′ in the differential neighborhood
is

dψ = (∂ψ/∂q^1) dq^1 + (∂ψ/∂q^2) dq^2 + (∂ψ/∂q^3) dq^3 = (∂ψ/∂q^k) dq^k

Taking into account that dq^k = dr · r^k,

dψ = dr · r^k (∂ψ/∂q^k) = dr · ∇ψ

The vector ∇ψ is called the gradient of the scalar field ψ and the invariant operator
∇ (the Hamilton or nabla operator) is defined by

∇ = r^k ∂/∂q^k

For a vector field a(r) one may write

da = (dr · r^k) ∂a/∂q^k = dr · (r^k ⊗ ∂a/∂q^k) = dr · ∇a = (∇a)^T · dr,
∇a = r^k ⊗ ∂a/∂q^k

The gradient of a vector field is a second rank tensor. The operation ∇ can be applied
to tensors of any rank. For vectors and tensors the following additional operations
are defined

div a ≡ ∇ · a = r^k · ∂a/∂q^k,
rot a ≡ ∇ × a = r^k × ∂a/∂q^k

The following identities can be verified

∇r = r^k ⊗ ∂r/∂q^k = r^k ⊗ r_k = I,   ∇ · r = 3

For a scalar φ, a vector a and a second rank tensor A the following identities are
valid

∇ · (φa) = r^k · ∂(φa)/∂q^k = (r^k ∂φ/∂q^k) · a + φ r^k · ∂a/∂q^k
         = (∇φ) · a + φ ∇ · a,   (A.2.1)

∇ · (A · a) = r^k · ∂(A · a)/∂q^k = r^k · (∂A/∂q^k) · a + r^k · A · ∂a/∂q^k
            = (∇ · A) · a + A ·· (∂a/∂q^k ⊗ r^k)   (A.2.2)
            = (∇ · A) · a + A ·· (∇a)^T

Here the identity (A.1.6) is used. For a second rank tensor A and a position vector r
one can prove the following identity

∇ · (A × r) = r^k · ∂(A × r)/∂q^k = r^k · (∂A/∂q^k) × r + r^k · A × ∂r/∂q^k
            = (∇ · A) × r + r^k · A × r_k = (∇ · A) × r − A_×   (A.2.3)

Here we used the definition of the vector invariant as follows

A_× = (r^k ⊗ r_k · A)_× = r^k × (r_k · A) = −r^k · A × r_k

A.2.3 Integral Theorems


Let φ(r), a(r) and A(r) be scalar, vector and second rank tensor fields. Let V be
the volume of a bounded domain with a regular surface A(V) and n be the outer
unit normal to the surface at r. The integral theorems can be summarized as follows
– Gradient theorems

∫_V ∇φ dV = ∫_{A(V)} n φ dA,
∫_V ∇a dV = ∫_{A(V)} n ⊗ a dA,
∫_V ∇A dV = ∫_{A(V)} n ⊗ A dA

– Divergence theorems

∫_V ∇ · a dV = ∫_{A(V)} n · a dA,
∫_V ∇ · A dV = ∫_{A(V)} n · A dA

– Curl theorems

∫_V ∇ × a dV = ∫_{A(V)} n × a dA,
∫_V ∇ × A dV = ∫_{A(V)} n × A dA
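The divergence theorem for a vector field can be checked numerically on a simple domain. The sketch below (our own illustration: the unit cube and the field a = (x^2, y^2, z^2) with div a = 2(x + y + z)) compares a midpoint-rule volume integral with the surface integral:

```python
import numpy as np

# Midpoint-rule check of the divergence theorem on the unit cube
# for the illustrative field a = (x^2, y^2, z^2), div a = 2(x + y + z)
n = 32
h = 1.0 / n
c = (np.arange(n) + 0.5) * h
X, Y, Z = np.meshgrid(c, c, c, indexing="ij")
vol_int = np.sum(2.0 * (X + Y + Z)) * h**3

# Surface integral of n . a: only the faces x=1, y=1, z=1 contribute,
# where the integrand equals 1; on the faces x=0 etc. it vanishes
surf_int = 3.0 * n**2 * h**2

assert np.isclose(vol_int, surf_int)
assert np.isclose(vol_int, 3.0)      # exact value of both integrals
```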

A.2.4 Scalar-Valued Functions of Vectors and Second Rank Tensors
Let ψ be a scalar-valued function of a vector a and a second rank tensor A, i.e.
ψ = ψ(a, A). Introducing a basis e_i the function ψ can be represented as follows

ψ(a, A) = ψ(a_i e^i, A_ij e^i ⊗ e^j) = ψ(a_i, A_ij)

The partial derivatives of ψ with respect to a and A are defined according to the
following rule

dψ = (∂ψ/∂a_i) da_i + (∂ψ/∂A_ij) dA_ij
   = da · (∂ψ/∂a_i) e_i + dA ·· (∂ψ/∂A_ij) e_j ⊗ e_i   (A.2.4)

In the coordinates-free form the above rule can be rewritten as follows

dψ = da · (∂ψ/∂a) + dA ·· (∂ψ/∂A)^T = da · ψ,_a + dA ·· (ψ,_A)^T   (A.2.5)

with

ψ,_a = (∂ψ/∂a_i) e_i,   ψ,_A = (∂ψ/∂A_ij) e_i ⊗ e_j

One can verify that ψ,_a and ψ,_A are independent of the choice of the basis. One
may prove the following formulae for the derivatives of the principal invariants of a
second rank tensor A

J_1(A),_A = I,   J_1(A^2),_A = 2A^T,   J_1(A^3),_A = 3(A^2)^T,
J_2(A),_A = J_1(A)I − A^T,   (A.2.6)
J_3(A),_A = (A^2)^T − J_1(A)A^T + J_2(A)I = J_3(A)(A^T)^{-1}

A.3 Orthogonal Transformations and Orthogonal Invariants
An application of the theory of tensor functions is to find a basic set of scalar invariants
for a given group of symmetry transformations, such that each invariant relative
to the same group is expressible as a single-valued function of the basic set. The basic
set of invariants is called a functional basis. To obtain a compact representation
for invariants, it is required that the functional basis be irreducible in the sense that
removing any one invariant from the basis would make a complete representation
of all the invariants no longer possible.
Such a problem arises in the formulation of constitutive equations for a given
group of material symmetries. For example, the strain energy density of an elastic
non-polar material is a scalar-valued function of the second rank symmetric strain
tensor. In the theory of the Cosserat continuum two strain measures are introduced,
where the first strain measure is a polar tensor while the second one is an axial
tensor, e.g. [108]. The strain energy density of a thin elastic shell is a function of
two second rank tensors and one vector, e.g. [25]. In all cases the problem is to find
a minimum set of functionally independent invariants for the considered tensorial
arguments.
For the theory of tensor functions we refer to [71]. Representations of tensor
functions are reviewed in [280, 330]. An orthogonal transformation of a scalar α, a
vector a and a second rank tensor A is defined by [25, 332]

α′ ≡ (det Q)^ζ α,   a′ ≡ (det Q)^ζ Q · a,   A′ ≡ (det Q)^ζ Q · A · Q^T,   (A.3.1)

where Q is an orthogonal tensor, i.e. Q · Q^T = I, det Q = ±1, I is the second
rank unit tensor, ζ = 0 for absolute (polar) scalars, vectors and tensors and ζ = 1
for axial ones. An example of an axial scalar is the mixed product of three polar
vectors, i.e. α = a · (b × c). A typical example of an axial vector is the cross product
of two polar vectors, i.e. c = a × b. An example of a second rank axial tensor
is the skew-symmetric tensor W = a × I, where a is a polar vector. Consider a
group of orthogonal transformations S (e.g., the material symmetry transformations)
characterized by a set of orthogonal tensors Q. A scalar-valued function of a second
rank tensor f = f(A) is said to be an orthogonal invariant under the group S if

∀ Q ∈ S :   f(A′) = (det Q)^ζ f(A),   (A.3.2)

where ζ = 0 if values of f are absolute scalars and ζ = 1 if values of f are axial
scalars.
Any second rank tensor B can be decomposed into the symmetric and the skew-symmetric
part, i.e. B = A + a × I, where A is the symmetric tensor and a is the
associated vector. Therefore f(B) = f(A, a). If B is a polar (axial) tensor, then a is
an axial (polar) vector. For a set of second rank tensors and vectors the definition
of an orthogonal invariant (A.3.2) can be generalized as follows

∀ Q ∈ S :   f(A′_1, A′_2, . . . , A′_n, a′_1, a′_2, . . . , a′_k)
          = (det Q)^ζ f(A_1, A_2, . . . , A_n, a_1, a_2, . . . , a_k),   A_i = A_i^T
(A.3.3)

A.3.1 Invariants for the Full Orthogonal Group


In [335] orthogonal invariants for different sets of second rank tensors and vectors
with respect to the full orthogonal group are presented. It is shown that orthogonal
invariants are integrals of a generic partial differential equation (basic equations for
invariants). Let us present the two following examples
– Orthogonal invariants of a symmetric second rank tensor A are

I_k = tr A^k,   k = 1, 2, 3

Instead of I_k it is possible to use the principal invariants J_k defined by (A.1.8).
– Orthogonal invariants of a symmetric second rank tensor A and a vector a are

I_k = tr A^k,   k = 1, 2, 3,   I_4 = a · a,   I_5 = a · A · a,
I_6 = a · A^2 · a,   I_7 = a · A^2 · (a × A · a)   (A.3.4)

In the above set of invariants only six are functionally independent. The relation
between the invariants (the so-called syzygy, [71]) can be formulated as follows

            | I_4   I_5           I_6         |
I_7^2 = det | I_5   I_6           a · A^3 · a |   (A.3.5)
            | I_6   a · A^3 · a   a · A^4 · a |

where a · A^3 · a and a · A^4 · a can be expressed by I_l, l = 1, . . . , 6 applying the
Cayley-Hamilton theorem (A.1.7).
The set of invariants for a symmetric second rank tensor A and a vector a can be
applied for a non-symmetric second rank tensor B since it can be represented by
B = A + a × I, A = A^T.
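The invariants (A.3.4) and the syzygy (A.3.5) can be checked numerically: the right-hand side of (A.3.5) is the Gram determinant of the vectors a, A · a, A^2 · a (a numpy sketch with random data; the seed is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(7)
S = rng.standard_normal((3, 3))
A = S + S.T                      # symmetric second rank tensor
a = rng.standard_normal(3)

I1, I2, I3 = (np.trace(np.linalg.matrix_power(A, k)) for k in (1, 2, 3))
I4 = a @ a
I5 = a @ A @ a
I6 = a @ A @ A @ a
I7 = (a @ A @ A) @ np.cross(a, A @ a)

m3 = a @ np.linalg.matrix_power(A, 3) @ a   # a . A^3 . a
m4 = a @ np.linalg.matrix_power(A, 4) @ a   # a . A^4 . a

# Syzygy (A.3.5)
G = np.array([[I4, I5, I6],
              [I5, I6, m3],
              [I6, m3, m4]])
assert np.isclose(I7**2, np.linalg.det(G))
```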

A.3.2 Invariants for the Transverse Isotropy Group


Transverse isotropy is an important type of symmetry transformation due to a
variety of applications. Transverse isotropy is usually assumed in constitutive modeling
of fiber reinforced materials, e.g. [21], fiber suspensions, e.g. [22], directionally
solidified alloys, e.g. [213], deep drawing sheets, e.g. [50, 57] and piezoelectric
materials, e.g. [285]. The invariants and generating sets for tensor-valued functions
with respect to different cases of transverse isotropy are discussed in [79, 328] (see
also relevant references therein). In what follows we analyze the problem of a functional
basis within the theory of linear first order partial differential equations rather
than the algebra of polynomials. We extend the idea proposed in [335] for the invariants
with respect to the full orthogonal group to the case of transverse isotropy.
The invariants will be found as integrals of the generic partial differential equations.
Although a functional basis formed by these invariants does not include any
redundant element, functional relations between them may exist. It may therefore
be useful to find simple forms of such relations. We show that the proposed
approach may supply results in a direct, natural manner.

Invariants for a Single Second Rank Symmetric Tensor. Consider the
proper orthogonal tensor which represents a rotation about a fixed axis, i.e.

Q(φm) = m ⊗ m + cos φ(I − m ⊗ m) + sin φ m × I,   det Q(φm) = 1,
(A.3.6)
where m is assumed to be a constant unit vector (axis of rotation) and φ denotes
the angle of rotation about m. The symmetry transformation defined by this tensor
corresponds to the transverse isotropy, whereby five different cases are possible, e.g.
[299, 331]. Let us find scalar-valued functions of a second rank symmetric tensor A
satisfying the condition

f(A′(φ)) = f(Q(φm) · A · Q^T(φm)) = f(A),   A′(φ) ≡ Q(φm) · A · Q^T(φm)
(A.3.7)
(A.3.7)
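Before proceeding, the rotation tensor (A.3.6) and the invariance condition (A.3.7) can be checked numerically. The sketch below is illustrative only; the helper names (`skew`, `rot`) and the sample data are our assumptions, not part of the text.

```python
import numpy as np

def skew(m):
    # matrix of the map x ↦ m × x, i.e. the tensor m × I
    return np.array([[0.0, -m[2], m[1]],
                     [m[2], 0.0, -m[0]],
                     [-m[1], m[0], 0.0]])

def rot(phi, m):
    # Q(phi m) = m⊗m + cos(phi)(I − m⊗m) + sin(phi) m × I, cf. (A.3.6)
    P = np.outer(m, m)
    return P + np.cos(phi) * (np.eye(3) - P) + np.sin(phi) * skew(m)

m = np.array([0.36, 0.48, 0.8])          # a unit vector (axis of rotation)
Q = rot(0.7, m)

ortho = np.allclose(Q @ Q.T, np.eye(3))  # Q is orthogonal
proper = np.isclose(np.linalg.det(Q), 1.0)
axis = np.allclose(Q @ m, m)             # the axis m stays fixed

# invariance condition (A.3.7) for the simplest candidate f = tr
A = np.array([[2.0, 0.3, 0.5],
              [0.3, 1.0, 0.1],
              [0.5, 0.1, 4.0]])
inv = np.isclose(np.trace(Q @ A @ Q.T), np.trace(A))
print(ortho, proper, axis, inv)          # True True True True
```
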
Equation (A.3.7) must be valid for any angle of rotation φ. In (A.3.7) only the left-hand side depends on φ. Therefore its derivative with respect to φ can be set to zero, i.e.

    df/dφ = (∂f/∂A′)ᵀ ·· dA′/dφ = 0        (A.3.8)

The derivative of A′ with respect to φ can be calculated by the following rules

    dA′(φ) = dQ(φm)·A·Qᵀ(φm) + Q(φm)·A·dQᵀ(φm),
    dQ(φm) = m × Q(φm) dφ,   dQᵀ(φm) = −Qᵀ(φm) × m dφ        (A.3.9)

By inserting the above equations into (A.3.8) we obtain

    (∂f/∂A′)ᵀ ·· (m × A′ − A′ × m) = 0        (A.3.10)

Equation (A.3.10) is classified in [92] as a linear homogeneous first order partial differential equation. The characteristic system of (A.3.10) is

    dA/ds = m × A − A × m        (A.3.11)
Any system of n linear ordinary differential equations has no more than n − 1 functionally independent integrals [92]. By introducing a basis e_i the tensor A can be written in the form A = A^{ij} e_i ⊗ e_j and (A.3.11) is a system of six ordinary differential equations with respect to the coordinates A^{ij}. The five integrals of (A.3.11) may be written as follows

    g_i(A) = c_i,   i = 1, 2, . . . , 5,

where c_i are integration constants. Any function of the five integrals g_i is a solution of the partial differential equation (A.3.10). Therefore the five integrals g_i represent the invariants of the symmetric tensor A with respect to the symmetry transformation (A.3.6). The solutions of (A.3.11) are

    A^k(s) = Q(sm)·A₀^k·Qᵀ(sm),   k = 1, 2, 3,        (A.3.12)

where A₀ is the initial condition. In order to find the integrals, the variable s must be eliminated from (A.3.12). Taking into account the following identities

    tr(Q·A^k·Qᵀ) = tr(Qᵀ·Q·A^k) = tr A^k,   m·Q(sm) = m,
    (Q·a) × (Q·b) = (det Q) Q·(a × b)        (A.3.13)

and using the notation Q_m ≡ Q(sm) the integrals can be found as follows

    tr(A^k) = tr(A₀^k),   k = 1, 2, 3,
    m·A^l·m = m·Q_m·A₀^l·Q_mᵀ·m = m·A₀^l·m,   l = 1, 2,
    m·A²·(m × A·m) = m·Q_m·A₀²·Q_mᵀ·(m × Q_m·A₀·Q_mᵀ·m)
                   = m·A₀²·Q_mᵀ·((Q_m·m) × (Q_m·A₀·m))
                   = m·A₀²·(m × A₀·m)
        (A.3.14)
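The identities (A.3.13) used above are easy to verify numerically. The following sketch (numpy; the sample data and the seed are our choice) checks all three for a rotation about a fixed axis.

```python
import numpy as np

def skew(m):
    return np.array([[0.0, -m[2], m[1]],
                     [m[2], 0.0, -m[0]],
                     [-m[1], m[0], 0.0]])

def rot(phi, m):
    P = np.outer(m, m)
    return P + np.cos(phi) * (np.eye(3) - P) + np.sin(phi) * skew(m)

rng = np.random.default_rng(0)
m = np.array([0.0, 0.0, 1.0])
Q = rot(1.3, m)                               # Q_m = Q(s m) with s = 1.3
a, b = rng.standard_normal(3), rng.standard_normal(3)
A = rng.standard_normal((3, 3)); A = A + A.T  # a symmetric sample tensor

c1 = np.isclose(np.trace(Q @ A @ Q.T), np.trace(A))  # tr(Q·A·Qᵀ) = tr A
c2 = np.allclose(m @ Q, m)                           # m·Q(sm) = m
c3 = np.allclose(np.cross(Q @ a, Q @ b),             # (Q·a) × (Q·b)
                 np.linalg.det(Q) * (Q @ np.cross(a, b)))
print(c1, c2, c3)                                    # True True True
```
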
As a result we can formulate the six invariants of the tensor A with respect to the symmetry transformation (A.3.6) as follows

    I_k = tr(A^k),   k = 1, 2, 3,   I4 = m·A·m,
    I5 = m·A²·m,   I6 = m·A²·(m × A·m)        (A.3.15)
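A minimal numerical check of (A.3.15): the six invariants computed for A and for Q(φm)·A·Qᵀ(φm) should coincide for any angle φ. The helper functions and the sample tensor below are illustrative assumptions.

```python
import numpy as np

def skew(m):
    # matrix of the map x ↦ m × x (the tensor m × I)
    return np.array([[0.0, -m[2], m[1]],
                     [m[2], 0.0, -m[0]],
                     [-m[1], m[0], 0.0]])

def rot(phi, m):
    # Q(phi m) of Eq. (A.3.6)
    P = np.outer(m, m)
    return P + np.cos(phi) * (np.eye(3) - P) + np.sin(phi) * skew(m)

def invariants(A, m):
    # I1..I6 of (A.3.15); I6 is the mixed product of m, A·m, A²·m
    Am = A @ m
    return np.array([np.trace(A),
                     np.trace(A @ A),
                     np.trace(A @ A @ A),
                     m @ Am,
                     Am @ Am,                      # m·A²·m for symmetric A
                     m @ A @ A @ np.cross(m, Am)])

m = np.array([0.0, 0.0, 1.0])
A = np.array([[2.0, 0.3, 0.5],
              [0.3, 1.0, 0.1],
              [0.5, 0.1, 4.0]])
checks = [np.allclose(invariants(rot(phi, m) @ A @ rot(phi, m).T, m),
                      invariants(A, m))
          for phi in (0.4, 1.1, 2.9)]
print(checks)   # [True, True, True]
```
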
The invariants with respect to various symmetry transformations are discussed in
[79]. For the case of the transverse isotropy six invariants are derived in [79] by the
use of another approach. In this sense our result coincides with the result given
in [79]. However, from our derivations it follows that only five invariants listed
in (A.3.15) are functionally independent. Taking into account that I6 is the mixed product of the vectors m, A·m and A²·m, the relation between the invariants can be written as follows

              | 1     I4       I5     |
    I6² = det | I4    I5       m·A³·m |        (A.3.16)
              | I5    m·A³·m   m·A⁴·m |
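The relation (A.3.16) is the Gram-determinant identity for the triple m, A·m, A²·m, so it can be verified directly for arbitrary data. The sketch below (numpy; random sample values of our choosing) does exactly that.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3)); A = A + A.T   # random symmetric tensor
m = rng.standard_normal(3); m /= np.linalg.norm(m)

Am, A2m = A @ m, A @ A @ m
I4, I5 = m @ Am, m @ A2m
g3, g4 = Am @ A2m, A2m @ A2m        # m·A³·m and m·A⁴·m (A symmetric)
I6 = A2m @ np.cross(m, Am)          # mixed product of m, A·m, A²·m

# Gram matrix of the triple m, A·m, A²·m
G = np.array([[1.0, I4, I5],
              [I4, I5, g3],
              [I5, g3, g4]])
ok = np.isclose(I6 ** 2, np.linalg.det(G))
print(ok)                           # True
```
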

One can verify that m·A³·m and m·A⁴·m are transversely isotropic invariants, too. However, applying the Cayley-Hamilton theorem (A.1.7) they can be uniquely expressed by I1, I2, . . . , I5 in the following way [54]

    m·A³·m = J1 I5 + J2 I4 + J3,
    m·A⁴·m = (J1² + J2) I5 + (J1 J2 + J3) I4 + J1 J3,
where J1 , J2 and J3 are the principal invariants of A defined by (A.1.8). Let us
note that the invariant I6 cannot be dropped. In order to verify this, it is enough to
consider two different tensors

    A   and   B = Q_n·A·Q_nᵀ,

where

    Q_n ≡ Q(πn) = 2n⊗n − I,   n·n = 1,   n·m = 0,   det Q_n = 1

One can prove that the tensor A and the tensor B have the same invariants I1, I2, . . . , I5. Taking into account that m·Q_n = −m and applying the last identity in (A.3.13) we may write

    I6(B) = m·B²·(m × B·m) = m·A²·Q_nᵀ·(m × Q_n·A·m)
          = −m·A²·(m × A·m) = −I6(A)

We observe that the only difference between the two considered tensors is the sign of I6. Therefore, the triples of vectors m, A·m, A²·m and m, B·m, B²·m have different orientations and cannot be combined by a rotation. It should be noted that the functional relation (A.3.16) in no way implies that the invariant I6 is dependent and hence redundant, i.e. that it should be removed from the basis (A.3.15). In fact, the relation (A.3.16) determines the magnitude but not the sign of I6.
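The sign flip of I6 under B = Q_n·A·Q_nᵀ can be observed numerically. In the sketch below (numpy; the sample tensor is our assumption) we take m = e₃ and n = e₁.

```python
import numpy as np

m = np.array([0.0, 0.0, 1.0])
n = np.array([1.0, 0.0, 0.0])             # n·m = 0
Qn = 2.0 * np.outer(n, n) - np.eye(3)     # rotation by pi about n, det = +1

A = np.array([[2.0, 0.3, 0.5],
              [0.3, 1.0, 0.1],
              [0.5, 0.1, 4.0]])
B = Qn @ A @ Qn.T

def invariants(T, m):
    # I1..I6 of (A.3.15) for a symmetric tensor T
    Tm = T @ m
    return np.array([np.trace(T), np.trace(T @ T), np.trace(T @ T @ T),
                     m @ Tm, Tm @ Tm,
                     m @ T @ T @ np.cross(m, Tm)])

iA, iB = invariants(A, m), invariants(B, m)
same_first_five = np.allclose(iA[:5], iB[:5])
sign_flipped = np.isclose(iB[5], -iA[5])
print(same_first_five, sign_flipped)      # True True
```
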
To describe yielding and failure of oriented solids a dyad M = v v has been
used in [53, 75], where the vector v specifies a privileged direction. A plastic po-
tential is assumed to be an isotropic function of the symmetric Cauchy stress tensor
and the tensor generator M . Applying the representation of isotropic functions the
integrity basis including ten invariants was found. In the special case v = m the
number of invariants reduces to the five I1 , I2 , . . . I5 defined by (A.3.15). Further de-
tails of this approach and applications in continuum mechanics are given in [59, 71].
However, the problem statement to find an integrity basis of a symmetric tensor A
and a dyad M, i.e. to find scalar-valued functions f(A, M) satisfying the condition

    f(Q·A·Qᵀ, Q·M·Qᵀ) = (det Q) f(A, M),
    ∀Q,   Q·Qᵀ = I,   det Q = ±1        (A.3.17)

essentially differs from the problem statement (A.3.7). In order to show this we take into account that the symmetry group of a dyad M, i.e. the set of orthogonal solutions of the equation Q·M·Qᵀ = M, includes the following elements

    Q_{1,2} = ±I,
    Q_3 = Q(φm),   m = v/|v|,        (A.3.18)
    Q_4 = Q(πn) = 2n⊗n − I,   n·n = 1,   n·v = 0,

where Q(φm) is defined by (A.3.6). The solutions of the problem (A.3.17) are automatically the solutions of the following problem

    f(Q_i·A·Q_iᵀ, M) = (det Q_i) f(A, M),   i = 1, 2, 3, 4,

i.e. the problem to find the invariants of A relative to the symmetry group (A.3.18).
However, (A.3.18) includes many more symmetry elements compared to the problem statement (A.3.7).
An alternative set of transversely isotropic invariants can be formulated by the
use of the following decomposition

    A = α m⊗m + β (I − m⊗m) + A_pD + t⊗m + m⊗t,        (A.3.19)

where α, β, A_pD and t are projections of A. With the projectors P₁ = m⊗m and P₂ = I − m⊗m we may write

    α = m·A·m = tr(A·P₁),
    β = ½ (tr A − m·A·m) = ½ tr(A·P₂),        (A.3.20)
    A_pD = P₂·A·P₂ − β P₂,
    t = m·A·P₂
The decomposition (A.3.19) is the analogue to the following representation of a vector a

    a = I·a = m⊗m·a + (I − m⊗m)·a = ψ m + τ,   ψ = a·m,   τ = P₂·a        (A.3.21)

Decompositions of the type (A.3.19) are applied in [68, 79]. The projections introduced in (A.3.20) have the following properties

    tr(A_pD) = 0,   A_pD·m = m·A_pD = 0,   t·m = 0        (A.3.22)
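The projections (A.3.20) and their properties (A.3.22) can be checked directly; the following numpy sketch (with a sample tensor of our choosing) also confirms that the decomposition (A.3.19) reassembles A.

```python
import numpy as np

m = np.array([0.0, 0.0, 1.0])
P1 = np.outer(m, m)                     # projector m ⊗ m
P2 = np.eye(3) - P1                     # projector onto the isotropy plane

A = np.array([[2.0, 0.3, 0.5],
              [0.3, 1.0, 0.1],
              [0.5, 0.1, 4.0]])

alpha = m @ A @ m                       # = tr(A·P1)
beta = 0.5 * (np.trace(A) - alpha)      # = 1/2 tr(A·P2)
ApD = P2 @ A @ P2 - beta * P2
t = P2 @ A @ m                          # the vector m·A·P2 (A symmetric)

# the decomposition (A.3.19) reassembles A ...
reassembled = np.allclose(
    alpha * P1 + beta * P2 + ApD + np.outer(t, m) + np.outer(m, t), A)
# ... and the projections satisfy (A.3.22)
traceless = np.isclose(np.trace(ApD), 0.0)
annihilates_m = np.allclose(ApD @ m, 0.0)
t_in_plane = np.isclose(t @ m, 0.0)
print(reassembled, traceless, annihilates_m, t_in_plane)   # all True
```
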

With (A.3.19) and (A.3.22) the tensor equation (A.3.11) can be transformed to the following system of equations

    dα/ds = 0,
    dβ/ds = 0,
    dA_pD/ds = m × A_pD − A_pD × m,        (A.3.23)
    dt/ds = m × t

From the first two equations we observe that α and β are transversely isotropic invariants. The third equation can be transformed to one scalar and one vector equation as follows

    (dA_pD/ds) ·· A_pD = 0   ⇒   d(A_pD ·· A_pD)/ds = 0,      db/ds = m × b

with b ≡ A_pD·t. We observe that tr(A_pD²) = A_pD ·· A_pD is a transversely isotropic invariant, too. Finally, we have to find the integrals of the following system

    dt/ds = m × t,
    db/ds = m × b        (A.3.24)

The solutions of (A.3.24) are

    t(s) = Q(sm)·t₀,   b(s) = Q(sm)·b₀,

where t₀ and b₀ are initial conditions. The vectors t and b belong to the plane of isotropy, i.e. t·m = 0 and b·m = 0. Therefore, one can verify the following integrals

    t·t = t₀·t₀,   b·b = b₀·b₀,   t·b = t₀·b₀,   (t × b)·m = (t₀ × b₀)·m        (A.3.25)
We found seven integrals, but only five of them are functionally independent. In order to formulate the relations between the integrals we compute

    b·b = t·A_pD²·t,   t·b = t·A_pD·t

For any plane tensor A_p satisfying the equations A_p·m = m·A_p = 0 the Cayley-Hamilton theorem can be formulated as follows, see e.g. [71]

    A_p² − (tr A_p) A_p + ½ [(tr A_p)² − tr(A_p²)] (I − m⊗m) = 0

Since tr A_pD = 0 we have

    A_pD² = ½ tr(A_pD²) (I − m⊗m),   t·A_pD²·t = ½ tr(A_pD²) (t·t)
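This consequence of the plane Cayley-Hamilton theorem admits a short numerical check; the numpy sketch below (random symmetric sample tensor of our choosing) verifies A_pD² = ½ tr(A_pD²)(I − m⊗m).

```python
import numpy as np

m = np.array([0.0, 0.0, 1.0])
P2 = np.eye(3) - np.outer(m, m)      # projector onto the isotropy plane

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3)); A = A + A.T
beta = 0.5 * np.trace(P2 @ A @ P2)
ApD = P2 @ A @ P2 - beta * P2        # plane, symmetric, trace-free

ok = np.allclose(ApD @ ApD, 0.5 * np.trace(ApD @ ApD) * P2)
print(ok)                            # True
```
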
Because tr(A_pD²) and t·t are already defined, the invariant b·b can be omitted. The vector t × b is directed along the axis m. Therefore

    t × b = γ m,   γ = (t × b)·m,
    γ² = (t × b)·(t × b) = (t·t)(b·b) − (t·b)²

Now we can summarize the six invariants and one relation between them as follows

    I1 = α,   I2 = β,   I3 = ½ tr(A_pD²),   I4 = t·t = t·A·m,
    I5 = t·A_pD·t,   I6 = (t × A_pD·t)·m,        (A.3.26)
    I6² = I4² I3 − I5²
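The set (A.3.26) and the relation between its members can be verified numerically. The sketch below (numpy; random sample data ours) also checks the alternative form I4 = t·t = t·A·m.

```python
import numpy as np

m = np.array([0.0, 0.0, 1.0])
P2 = np.eye(3) - np.outer(m, m)

rng = np.random.default_rng(3)
A = rng.standard_normal((3, 3)); A = A + A.T   # random symmetric tensor

alpha = m @ A @ m
beta = 0.5 * (np.trace(A) - alpha)
ApD = P2 @ A @ P2 - beta * P2                  # projections of (A.3.20)
t = P2 @ A @ m

I3 = 0.5 * np.trace(ApD @ ApD)
I4 = t @ t
I5 = t @ ApD @ t
I6 = np.cross(t, ApD @ t) @ m

ok_alt = np.isclose(I4, t @ A @ m)             # I4 = t·t = t·A·m
ok_rel = np.isclose(I6 ** 2, I4 ** 2 * I3 - I5 ** 2)
print(ok_alt, ok_rel)                          # True True
```
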

Let us assume that the symmetry transformation Q_n ≡ Q(πn) belongs to the symmetry group of the transverse isotropy, as it was made in [71, 59]. In this case f(A) = f(Q_n·A·Q_nᵀ) must be valid. With Q_n·m = −m we can write

    α′ = α,   β′ = β,   A′_pD = Q_n·A_pD·Q_nᵀ,   t′ = −Q_n·t

Therefore in (A.3.26) I_k′ = I_k, k = 1, 2, . . . , 5 and

    I6′ = (t′ × A′_pD·t′)·m = ((Q_n·t) × (Q_n·A_pD·t))·m
        = (t × A_pD·t)·Q_n·m = −(t × A_pD·t)·m = −I6

Consequently

    f(A) = f(I1, I2, . . . , I5, I6) = f(I1, I2, . . . , I5, −I6)   ⇒   f(A) = f(I1, I2, . . . , I5, I6²),

and I6² can be omitted due to the last relation in (A.3.26).


Invariants for a Set of Vectors and Second Rank Tensors. Setting Q = Q(φm) in (A.3.3) and taking the derivative of (A.3.3) with respect to φ results in the following generic partial differential equation

    Σ_{i=1}^{n} (∂f/∂A_i)ᵀ ·· (m × A_i − A_i × m) + Σ_{j=1}^{k} (∂f/∂a_j)·(m × a_j) = 0        (A.3.27)

The characteristic system of (A.3.27) is

    dA_i/ds = m × A_i − A_i × m,   i = 1, 2, . . . , n,
    da_j/ds = m × a_j,   j = 1, 2, . . . , k        (A.3.28)
The above system is a system of N ordinary differential equations, where N = 6n + 3k is the total number of coordinates of A_i and a_j for a selected basis. The system (A.3.28) has no more than N − 1 functionally independent integrals. Therefore we can formulate:
Theorem A.3.1. A set of n symmetric second rank tensors and k vectors with N = 6n + 3k independent coordinates for a given basis has no more than N − 1 functionally independent invariants for N > 1 and one invariant for N = 1 with respect to the symmetry transformation Q(φm).
In essence, the proof of this theorem is given within the theory of linear first order
partial differential equations [92].
As an example let us consider the set of a symmetric second rank tensor A and a vector a. This set has eight independent invariants. For a visual perception it is useful to keep in mind that the considered set is equivalent to

    A,   a,   A·a,   A²·a

Therefore it is necessary to find the list of invariants whose fixation determines this set as a rigid whole. The generic equation (A.3.27) takes the form

    (∂f/∂A)ᵀ ·· (m × A − A × m) + (∂f/∂a)·(m × a) = 0        (A.3.29)

The characteristic system of (A.3.29) is

    dA/ds = m × A − A × m,   da/ds = m × a        (A.3.30)
This system of ninth order has eight independent integrals. Six of them are invariants of A and a with respect to the full orthogonal group. They fix the considered set as a rigid whole. The orthogonal invariants are defined by Eqs (A.3.4) and (A.3.5). Let us note that the invariant I7 in (A.3.4) cannot be ignored. To verify this it is enough to consider two different sets

    A, a   and   B = Q_p·A·Q_pᵀ, a,

where Q_p = I − 2p⊗p, p·p = 1, p·a = 0. One can prove that the invariants I1, I2, . . . , I6 are the same for these two sets. The only difference is the invariant I7, i.e.

    a·B²·(a × B·a) = −a·A²·(a × A·a)

Therefore the triples of vectors a, A·a, A²·a and a, B·a, B²·a have different orientations and cannot be combined by a rotation. In order to fix the considered set with respect to the unit vector m it is enough to fix the next two invariants

    I8 = m·A·m,   I9 = m·a        (A.3.31)
The eight independent transversely isotropic invariants are (A.3.4), (A.3.5) and
(A.3.31).
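As a numerical illustration for the set (A, a): the invariants of (A.3.4) together with (A.3.31) remain unchanged when A and a are transformed simultaneously by Q(φm). The helpers and sample values below are our assumptions.

```python
import numpy as np

def skew(m):
    return np.array([[0.0, -m[2], m[1]],
                     [m[2], 0.0, -m[0]],
                     [-m[1], m[0], 0.0]])

def rot(phi, m):
    P = np.outer(m, m)
    return P + np.cos(phi) * (np.eye(3) - P) + np.sin(phi) * skew(m)

def invariants(A, a, m):
    # I1..I7 of (A.3.4)/(A.3.5) plus I8 = m·A·m, I9 = m·a of (A.3.31)
    Aa = A @ a
    return np.array([np.trace(A), np.trace(A @ A), np.trace(A @ A @ A),
                     a @ a, a @ Aa, Aa @ Aa,       # a·A²·a for symmetric A
                     a @ A @ A @ np.cross(a, Aa),  # mixed product invariant
                     m @ A @ m, m @ a])

m = np.array([0.0, 0.0, 1.0])
a = np.array([0.2, -0.7, 1.1])
A = np.array([[2.0, 0.3, 0.5],
              [0.3, 1.0, 0.1],
              [0.5, 0.1, 4.0]])
Q = rot(1.9, m)
# A and a transform simultaneously under the symmetry transformation
ok = np.allclose(invariants(Q @ A @ Q.T, Q @ a, m), invariants(A, a, m))
print(ok)   # True
```
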

A.3.3 Invariants for the Orthotropic Symmetry Group


Consider the orthogonal tensors Q₁ = I − 2n₁⊗n₁ and Q₂ = I − 2n₂⊗n₂, det Q₁ = det Q₂ = −1. These tensors represent mirror reflections, whereby the unit orthogonal vectors n₁ and n₂ are the normal directions to the mirror planes. The above tensors are the symmetry elements of the orthotropic symmetry group. The invariants must be found from

    f(Q₁·A·Q₁ᵀ) = f(Q₂·A·Q₂ᵀ) = f(A)

Consequently,

    f(Q₁·Q₂·A·Q₂ᵀ·Q₁ᵀ) = f(Q₁·A·Q₁ᵀ) = f(Q₂·A·Q₂ᵀ) = f(A)

and the tensor Q₃ = Q₁·Q₂ = 2n₃⊗n₃ − I belongs to the symmetry group, where the unit vector n₃ is orthogonal to n₁ and n₂. Taking into account that Q_i·n_i = −n_i (no summation convention), Q_i·n_j = n_j, i ≠ j, and using the notation A_i′ = Q_i·A·Q_iᵀ we can write

    tr(A_i′^k) = tr(A^k),   k = 1, 2, 3,   i = 1, 2, 3,
    n_i·A_i′·n_i = n_i·Q_i·A·Q_iᵀ·n_i = n_i·A·n_i,   i = 1, 2, 3,        (A.3.32)
    n_i·A_i′²·n_i = n_i·Q_i·A²·Q_iᵀ·n_i = n_i·A²·n_i,   i = 1, 2, 3

The above set includes nine scalars. The number of independent scalars is seven due to the obvious relations

    tr(A^k) = n₁·A^k·n₁ + n₂·A^k·n₂ + n₃·A^k·n₃,   k = 1, 2, 3
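A short numerical check for the orthotropic case: the nine scalars of (A.3.32) are unchanged under Q₁ and Q₂, and the trace relations reduce them to seven independent ones. The numpy sketch below uses n_i = e_i as an illustrative choice.

```python
import numpy as np

n = np.eye(3)                                # rows n[0], n[1], n[2] = n1, n2, n3
Q1 = np.eye(3) - 2.0 * np.outer(n[0], n[0])  # mirror reflection, det = -1
Q2 = np.eye(3) - 2.0 * np.outer(n[1], n[1])

rng = np.random.default_rng(4)
A = rng.standard_normal((3, 3)); A = A + A.T

def scalars(A):
    # the nine scalars appearing in (A.3.32)
    tr = [np.trace(np.linalg.matrix_power(A, k)) for k in (1, 2, 3)]
    lin = [n[i] @ A @ n[i] for i in range(3)]
    quad = [n[i] @ A @ A @ n[i] for i in range(3)]
    return np.array(tr + lin + quad)

s = scalars(A)
inv1 = np.allclose(scalars(Q1 @ A @ Q1.T), s)
inv2 = np.allclose(scalars(Q2 @ A @ Q2.T), s)
# only seven are independent: tr(A^k) equals the sum over n_i·A^k·n_i
rel1 = np.isclose(s[0], s[3] + s[4] + s[5])
rel2 = np.isclose(s[1], s[6] + s[7] + s[8])
print(inv1, inv2, rel1, rel2)                # True True True True
```
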
