
Semester B: BC 5006

Advanced Mechanics
Lecturer: Dr C.W. Lim
Address: Department of Building and Construction
City University of Hong Kong
Tat Chee Avenue, Kowloon, Hong Kong
Room: B6406
Tel: 3443 7285
Fax: 3442 7612
E-mail: bccwlim@cityu.edu.hk

Course Contents:
1.0 Tensor Mechanics in Cartesian Coordinate Systems
1.1 Notation and Summation Convention
1.2 Transformations of Rectangular Cartesian Coordinate System
1.3 Transformation Laws for Vectors
1.4 Cartesian Tensors
1.5 Stress Tensor
1.6 Algebra of Cartesian Tensors
1.7 Principal Axes of Second Order Tensors
1.8 Differentiation of Cartesian Tensor Fields
1.9 Strain Tensors

2.0 Tensors in General Coordinates


2.1 Oblique Cartesian Coordinates
2.2 Reciprocal Basis; Transformations of Oblique Coordinate Systems
2.3 Tensors in Oblique Cartesian Coordinate Systems
2.4 Algebra of Tensors in Oblique Coordinates
2.5 The Metric Tensor
2.6 Transformations of Curvilinear Coordinates
2.7 General Tensors
2.8 Covariant Derivative of a Vector
2.9 Transformation of Christoffel Symbols
2.10 Covariant Derivative of Tensors
2.11 Gradient, Divergence, Laplacian, and Curl in General Coordinates
2.12 Equations of Motion of a Particle

References:
• Y.C. Fung, Foundations of Solid Mechanics, Prentice Hall, New Jersey, 1965.
• Eutiquio C. Young, Vector and Tensor Analysis, 2nd edition, Marcel Dekker, New York, 1993.
• T.W.B. Kibble and F.H. Berkshire, Classical Mechanics, 4th edition, Longman, England, 1996.
Chapter 1
Tensor Mechanics in Cartesian Coordinate Systems

1.1 Notation and Summation Convention


Denote the orthonormal basis i, j, k by i_1, i_2, i_3, or simply i_i, as shown in Fig. 1.1. Then the position vector R of a point with coordinates x_i is

        R = Σ_{i=1}^{3} x_i i_i   or   R = x_i i_i                      (1.1)

The analytic representation of a vector A with components a_i is

        A = Σ_{i=1}^{3} a_i i_i   or   A = a_i i_i                      (1.2)

where summation from 1 to 3 with respect to the repeated index i is implied.

Fig. 1.1: A position vector in Cartesian coordinate systems.

Example 1. The term a_ii implies

        a_ii = a_11 + a_22 + a_33

Example 2. The term a_i b_i implies

        a_i b_i = a_1 b_1 + a_2 b_2 + a_3 b_3

Example 3. The expression a_i b_j − a_j b_i does not imply any summation because no index is repeated within the same term. The expression stands for the nine quantities

        c_ij = a_i b_j − a_j b_i      (i, j = 1, 2, 3)

Example 4. The term a_ij b_jk implies summation with respect to the repeated index j from 1 to 3. Thus,

        a_ij b_jk = a_i1 b_1k + a_i2 b_2k + a_i3 b_3k = c_ik      (i, k = 1, 2, 3)

Example 5. Let φ(x_1, x_2, x_3) be a scalar field. Then

        grad φ(x_1, x_2, x_3) = (∂φ/∂x_j) i_j = (∂φ/∂x_1) i_1 + (∂φ/∂x_2) i_2 + (∂φ/∂x_3) i_3

Example 6. Let φ(x_1, x_2, x_3) be a scalar field, where x_i = x_i(t), i = 1, 2, 3. Then

        dφ/dt = (∂φ/∂x_i)(dx_i/dt) = (∂φ/∂x_1)(dx_1/dt) + (∂φ/∂x_2)(dx_2/dt) + (∂φ/∂x_3)(dx_3/dt)
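The summation convention in Examples 1–4 can be checked numerically. A minimal sketch using NumPy's einsum (the arrays a, b, A, B below are arbitrary and serve only as an illustration):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])           # components a_i
b = np.array([4.0, 5.0, 6.0])           # components b_j
A = np.arange(9.0).reshape(3, 3)        # components a_ij
B = np.arange(9.0, 18.0).reshape(3, 3)  # components b_jk

# Example 2: a_i b_i (sum over the repeated index i)
print(np.einsum('i,i->', a, b))

# Example 3: c_ij = a_i b_j - a_j b_i (no repeated index: nine quantities)
c = np.einsum('i,j->ij', a, b) - np.einsum('j,i->ij', a, b)
print(c)

# Example 4: c_ik = a_ij b_jk (sum over the repeated index j)
print(np.einsum('ij,jk->ik', A, B))
```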

1.2 Transformations of Rectangular Cartesian Coordinate System
Let x_i (original system) and x̄_i (transformed system) be two right-handed rectangular Cartesian coordinate systems with a common origin and corresponding base vectors i_i and ī_i (i = 1, 2, 3), as shown in Fig. 1.2.

Fig. 1.2: Transformation of Cartesian coordinate systems.

Consider a point P whose coordinates with respect to the old and the new coordinate systems are x_i and x̄_i, respectively. Then R = x_i i_i is the position vector of the point with respect to the old system, and R = x̄_i ī_i the position vector with respect to the new system. Since these represent the same vector R, we have

        x_i i_i = x̄_j ī_j                                               (1.3)

To solve for x̄_i in terms of x_i, we take the dot product of both sides of Eq. (1.3) with ī_k (1 ≤ k ≤ 3). Since

        ī_i · ī_k = δ_ik = 1 if i = k,  0 if i ≠ k

where δ_ij is the Kronecker delta, we find

        x̄_i δ_ik = (i_j · ī_k) x_j      (1 ≤ k ≤ 3)

or

        x̄_i = α_ij x_j      (i = 1, 2, 3)                               (1.4)

where we define

        α_ij = ī_i · i_j      (i, j = 1, 2, 3)                           (1.5)

Transformation (1.4) is said to define a rotation of coordinate axes if both the old and the new systems are right-handed and the axes of both systems can be brought to coincide by rotation, such that the x̄_i-axis coincides with the x_i-axis for i = 1, 2, 3, as shown in Fig. 1.3.

Fig. 1.3: Rotation of coordinate axes.

If the transformation (1.4) carries a right-handed system into a left-handed system (or vice versa), the transformation consists of a rotation and a reflection. For example, the transformation

        x̄_1 = x_2 ,   x̄_2 = −x_1 ,   x̄_3 = −x_3

defines a left-handed coordinate system x̄_i (assuming x_i to be right-handed). The corresponding axes can be made to coincide by a rotation about the x_3-axis followed by a reflection of the x_3- or x̄_3-axis, as shown in Fig. 1.4.

Fig. 1.4: Rotation and reflection of coordinate axes.

From (1.5) and the geometric property of the dot product, it follows that

        α_ij = ī_i · i_j = |ī_i| |i_j| cos θ_ij = cos θ_ij      (i, j = 1, 2, 3)

so that the quantity α_ij is the cosine of the angle θ_ij between the x̄_i-axis and the x_j-axis for i, j = 1, 2, 3.

        On the other hand, if we take the dot product of i_k with both sides of (1.3), noting that i_i · i_k = δ_ik, we find

        x_i δ_ik = (ī_j · i_k) x̄_j

which, by (1.5), yields

        x_i = α_ji x̄_j      (i = 1, 2, 3)                               (1.6)

This is the inverse of the coordinate transformation (1.4).

        The relationship between the quantities α_ij = ī_i · i_j in (1.4) and α_ji = ī_j · i_i in (1.6) is easily seen when we substitute (1.6) for x_j in (1.4) to obtain the identity

        x̄_i = α_ij (α_kj x̄_k) = (α_ij α_kj) x̄_k

which implies that

        α_ij α_kj = δ_ik                                                (1.7)

Similarly, if we substitute (1.4) for x̄_j in (1.6), we obtain

        x_i = (α_ji α_jk) x_k

which implies

        α_ji α_jk = δ_ik                                                (1.8)

        The coefficients of the transformation (1.4) can be represented as a 3×3 matrix

                      [ α_11  α_12  α_13 ]
        A = (α_ij) =  [ α_21  α_22  α_23 ]                              (1.9)
                      [ α_31  α_32  α_33 ]

called the transformation matrix. By (1.7), the rows of the matrix (1.9) form an orthonormal set of row vectors. Similarly, by (1.8), the columns form an orthonormal set of column vectors.

        The matrix corresponding to the inverse transformation (1.6) is given in (1.10). It is called the transpose of (1.9) and is denoted by A^T:

                        [ α_11  α_21  α_31 ]
        A^T = (α_ji) =  [ α_12  α_22  α_32 ]                            (1.10)
                        [ α_13  α_23  α_33 ]
To represent the transformations (1.4) and (1.6) in matrix form, we need to introduce the concept of matrix multiplication. Let B = (b_ij) and C = (c_ij) be two 3×3 matrices. The product BC of the two matrices is defined as the 3×3 matrix D = (d_ij) whose elements are given by

        d_ij = b_i1 c_1j + b_i2 c_2j + b_i3 c_3j = b_ik c_kj      (i, j = 1, 2, 3)

Notice that d_ij is the element in the i-th row and j-th column of the matrix D. For example, if

            [ 1  0   2 ]            [ 0  2   1 ]
        B = [ 2  1  -1 ]   and  C = [ 1  1   2 ]
            [ 0  3   0 ]            [ 3  1  -1 ]

then

             [ 6  4  -1 ]            [ 4   5  -2 ]
        BC = [-2  4   5 ]   and CB = [ 3   7   1 ]
             [ 3  3   6 ]            [ 5  -2   5 ]

Notice that BC ≠ CB, which shows that matrix multiplication is not commutative. Also, if X is a matrix consisting of a single column with elements x_1, x_2, x_3, then

             [ 1  0   2 ] [ x_1 ]   [ x_1 + 2x_3       ]
        BX = [ 2  1  -1 ] [ x_2 ] = [ 2x_1 + x_2 − x_3 ]
             [ 0  3   0 ] [ x_3 ]   [ 3x_2             ]

As long as the row vectors of B and the column vectors of C belong to the
same vector space, the product BC is always defined regardless of the number of
rows in B and the number of columns in C . If B has m rows and C has n
columns, then BC is an m × n matrix.
Matrix multiplication obeys the associative and distributive laws of ordinary
multiplication of numbers provided the products are defined. Thus, if A , B , and C
are 3× 3 matrices, then
        (a)  A(BC) = (AB)C
        (b)  A(B + C) = AB + AC                                         (1.11)
        (c)  (B + C)A = BA + CA

Let X̄ denote the column matrix whose elements are x̄_1, x̄_2, x̄_3, and similarly for X. Then the transformation (1.4) can be written in matrix form

        X̄ = A X                                                        (1.12)

The inverse transformation (1.6) can be written in the form

        X = A^T X̄                                                      (1.13)

where A^T is the transpose of A. If we substitute (1.12) for X̄ in (1.13), or if we substitute (1.13) for X in (1.12), we obtain the result

        A^T A = A A^T = I                                               (1.14)

where I is the identity matrix

            [ 1  0  0 ]
        I = [ 0  1  0 ]
            [ 0  0  1 ]

If the inverse of A is denoted by A^{-1}, then

        A^{-1} = A^T   and   (A^T)^{-1} = A

A matrix whose transpose is its own inverse is called an orthogonal matrix. The matrix of a transformation that represents a pure rotation is an orthogonal matrix.

        The rows of the matrix (1.9) are precisely the components of the unit vectors ī_i (i = 1, 2, 3) with respect to the base vectors i_i (i = 1, 2, 3), that is,

        ī_i = α_ij i_j      (i = 1, 2, 3)                               (1.15)

Likewise, the columns of A are the components of the unit vectors i_i (i = 1, 2, 3) with respect to the base vectors ī_i (i = 1, 2, 3), that is,

        i_i = α_ji ī_j      (i = 1, 2, 3)                               (1.16)

Notice that equations (1.15) and (1.16) are analogous to equations (1.4) and (1.6) which define the coordinate transformation.

        Finally, since we assume that both coordinate systems are right-handed, it follows from the definition of the scalar triple product of vectors that

        ī_1 · ī_2 × ī_3 = det A = 1

and

        i_1 · i_2 × i_3 = det A^T = 1

If the transformation (1.12) defines a left-handed coordinate system x̄_i, then

        ī_1 · ī_2 × ī_3 = det A = −1

Theorem 1. Let (1.12) represent a transformation of rectangular Cartesian coordinates. Then A is an orthogonal matrix such that det A = 1 if the transformation preserves the orientation of the axes (pure rotation), or det A = −1 if the transformation reverses the orientation (rotation and reflection).

Example 1. The unit base vectors ī_i of a new coordinate system x̄_i are given by

        ī_1 = (i_2 + i_3)/√2 ,   ī_2 = (i_1 − i_2 + i_3)/√3 ,   ī_3 = (2i_1 + i_2 − i_3)/√6

Find the equations (1.4) and (1.6) of the coordinate transformation.

Solution: In view of the resemblance of (1.4) and (1.15), we immediately deduce the transformation

        x̄_1 = (x_2 + x_3)/√2 ,   x̄_2 = (x_1 − x_2 + x_3)/√3 ,   x̄_3 = (2x_1 + x_2 − x_3)/√6

Thus the matrix of the transformation is given by

             [  0      1/√2   1/√2  ]
        A =  [ 1/√3   −1/√3   1/√3  ]
             [ 2/√6    1/√6  −1/√6  ]

Since ī_1 · ī_2 × ī_3 = det A = 1, the new coordinate system is right-handed. The inverse transformation is given by

        x_1 = x̄_2/√3 + 2x̄_3/√6 ,   x_2 = x̄_1/√2 − x̄_2/√3 + x̄_3/√6 ,   x_3 = x̄_1/√2 + x̄_2/√3 − x̄_3/√6
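A quick numerical check of Example 1: the sketch below (NumPy) builds the transformation matrix A, verifies the orthogonality relation (1.14) and det A = 1, and confirms that (1.4) followed by (1.6) recovers the original coordinates. The test point is arbitrary.

```python
import numpy as np

s2, s3, s6 = np.sqrt(2.0), np.sqrt(3.0), np.sqrt(6.0)
A = np.array([[0.0,   1/s2,  1/s2],
              [1/s3, -1/s3,  1/s3],
              [2/s6,  1/s6, -1/s6]])    # rows = components of the new base vectors

print(np.allclose(A @ A.T, np.eye(3)))  # True: A is orthogonal (Eq. 1.14)
print(np.linalg.det(A))                 # 1.0: pure rotation (right-handed system)

x = np.array([1.0, 2.0, 3.0])           # coordinates in the old system (arbitrary)
x_bar = A @ x                           # Eq. (1.4): coordinates in the new system
print(np.allclose(A.T @ x_bar, x))      # Eq. (1.6) recovers the old coordinates
```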

1.3 Transformation Laws for Vectors

        Here, a system transformation means a transformation from one rectangular Cartesian coordinate system into another. The equations of the coordinate transformation are precisely those derived in Sec. 1.2, namely,

        x̄_i = α_ij x_j   or   x_i = α_ji x̄_j

where α_ij = ī_i · i_j (i, j = 1, 2, 3) satisfy the relations

        α_ik α_jk = α_ki α_kj = δ_ij
        Consider a vector A and suppose that its components with respect to the coordinate systems x_i and x̄_i are a_i and ā_i, respectively. Then we have

        A = a_i i_i = ā_j ī_j                                           (1.17)

To determine the relationship between the components ā_j and a_i, we take the dot product of ī_k with both sides of (1.17) to obtain

        ā_i δ_ik = (ī_k · i_j) a_j

which leads to

        ā_i = α_ij a_j = (∂x̄_i / ∂x_j) a_j                              (1.18)

On the other hand, if we take the dot product of i_k with both sides of (1.17), we obtain

        a_i = α_ji ā_j = (∂x_i / ∂x̄_j) ā_j                              (1.19)

Equation (1.18), or its inverse (1.19), shows how the components of a vector transform when we change the coordinate system.

Definition 1. A vector is a quantity consisting of three components a_i (i = 1, 2, 3) which transform under a change of coordinates according to the law (1.18) or (1.19), where a_i and ā_i are the components of the vector with respect to the coordinate systems x_i and x̄_i, respectively.

From Definition 1, vectors are Cartesian tensors of order 1 (or rank 1). A scalar, on the other hand, is a quantity that remains invariant under any coordinate transformation; in tensor language, scalars are tensors of order (or rank) zero.

Example 1. Show that the Euclidean distance between two points in space is
invariant (a scalar) with respect to any coordinate transformation.

Solution:
The Euclidean distance d between any two points in space with coordinates x_i and y_i is given by

        d² = (x_i − y_i)(x_i − y_i)      (summation on i)

Under a transformation of coordinates, the new coordinates of the points are given by

        x̄_i = α_ij x_j   and   ȳ_i = α_ik y_k

Hence, in the new coordinate system, we have

        d̄² = (x̄_i − ȳ_i)(x̄_i − ȳ_i)
            = α_ij (x_j − y_j) α_ik (x_k − y_k)
            = α_ij α_ik (x_j − y_j)(x_k − y_k)

Since

        α_ij α_ik = δ_jk = 1 when j = k,  0 when j ≠ k

it follows that

        d̄² = δ_jk (x_j − y_j)(x_k − y_k) = (x_j − y_j)(x_j − y_j) = d²

Example 2. Show that the dot product of two vectors is invariant under any
coordinate transformation.

Solution: Let A = a_i i_i and B = b_j i_j be two vectors. Then

        A · B = a_i b_j (i_i · i_j) = δ_ij a_i b_j = a_i b_i

For this to be numerically the same in any other coordinate system, we need to show that ā_i b̄_i = a_j b_j, where ā_j and b̄_j are the components of the vectors in the new coordinate system x̄_i defined by the transformation x̄_i = α_ij x_j. Now, according to the transformation law for vectors, we have

        ā_i = α_ip a_p ,   b̄_i = α_iq b_q

Hence,

        ā_i b̄_i = α_ip α_iq a_p b_q = δ_pq a_p b_q = a_p b_p

Example 3. Given the transformation of coordinates x̄_i = α_ij x_j, where

                  [  √2/2   √2/2    0    ]
        (α_ij) =  [  √3/3  −√3/3   √3/3  ]
                  [ −√6/6   √6/6   √6/3  ]

If a vector A has the components [1, 2, −1] with respect to the x_i-coordinate system, find its components in the x̄_i-system.

Solution: According to the transformation law for vectors, we have

        ā_1 = α_1j a_j = (√2/2)(1) + (√2/2)(2) + 0(−1) = 3√2/2
        ā_2 = α_2j a_j = (√3/3)(1) − (√3/3)(2) + (√3/3)(−1) = −2√3/3
        ā_3 = α_3j a_j = (−√6/6)(1) + (√6/6)(2) + (√6/3)(−1) = −√6/6

Notice that, by definition, the elements in the rows of the matrix (α_ij) are the components of the unit vectors ī_i (i = 1, 2, 3). Thus the components ā_i are just the dot products of the vector A with the base vectors ī_i (i = 1, 2, 3).

Example 4. Let a_i and b_i be the components of two nonzero vectors A and B, respectively, in a rectangular Cartesian coordinate system x_i, and set A × B = c_i i_i, where c_i = a_j b_k − a_k b_j, with i, j, k being cyclic permutations of 1, 2, 3. Let ā_i, b̄_i and c̄_i be the respective components of the vectors A, B and A × B in the new coordinate system x̄_i defined by x̄_i = α_ij x_j, where

                  [  0   0   1 ]
        (α_ij) =  [ −1   0   0 ]
                  [  0   1   0 ]

Show that c̄_i = −(ā_j b̄_k − ā_k b̄_j), where i, j, k are cyclic permutations of 1, 2, 3.

Solution: Under the given coordinate transformation, the new components of the vectors A, B and A × B are given by

        ā_1 = α_1j a_j = a_3 ,   ā_2 = α_2j a_j = −a_1 ,   ā_3 = α_3j a_j = a_2
        b̄_1 = α_1j b_j = b_3 ,   b̄_2 = α_2j b_j = −b_1 ,   b̄_3 = α_3j b_j = b_2
        c̄_1 = α_1j c_j = c_3 = a_1 b_2 − a_2 b_1
        c̄_2 = α_2j c_j = −c_1 = −(a_2 b_3 − a_3 b_2)
        c̄_3 = α_3j c_j = c_2 = a_3 b_1 − a_1 b_3

Thus we see that

        ā_2 b̄_3 − ā_3 b̄_2 = −a_1 b_2 + a_2 b_1 = −c̄_1
        ā_3 b̄_1 − ā_1 b̄_3 = a_2 b_3 − a_3 b_2 = −c̄_2
        ā_1 b̄_2 − ā_2 b̄_1 = −a_3 b_1 + a_1 b_3 = −c̄_3

This shows that the transforms of the components c_i = a_j b_k − a_k b_j differ from ā_j b̄_k − ā_k b̄_j by a factor of −1.

        Notice that in Example 4, det(α_ij) = −1, which means that the new coordinate system x̄_i is left-handed (the coordinate system x_i is assumed to be right-handed). This example illustrates the fact that, in general, the components of the cross product of two vectors do not transform according to (1.18) or (1.19). Therefore, the cross product of two vectors is not a vector in the sense of Definition 1. It is called a pseudovector or an axial vector.
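The behaviour in Example 4 is easy to reproduce numerically. In the sketch below (NumPy, with arbitrary vectors a and b), transforming the components of A × B as if they were vector components and taking the cross product of the transformed vectors give results that differ by a factor of −1, because the transformation reverses orientation.

```python
import numpy as np

alpha = np.array([[0, 0, 1],
                  [-1, 0, 0],
                  [0, 1, 0]], dtype=float)     # transformation of Example 4
print(np.linalg.det(alpha))                    # -1.0: rotation plus reflection

a = np.array([1.0, 2.0, 3.0])                  # arbitrary components a_i
b = np.array([-1.0, 0.5, 2.0])                 # arbitrary components b_i

c_transformed = alpha @ np.cross(a, b)         # transform A x B as if it were a vector
cross_of_transformed = np.cross(alpha @ a, alpha @ b)  # cross product of transformed vectors

print(c_transformed)
print(cross_of_transformed)                    # equals -c_transformed: A x B is a pseudovector
```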

1.4 Cartesian Tensors

        In three-dimensional space, a tensor is of order (or rank) r if it has 3^r components. Thus scalars and vectors are of order 0 and 1 since they have 3⁰ = 1 and 3¹ = 3 components, respectively.
        Let a particle of mass m be located at the point (x_1, x_2, x_3). The so-called moment of inertia of the particle is a set of nine numbers defined by

        I_ij = m(δ_ij x_k x_k − x_i x_j)      (i, j = 1, 2, 3)           (1.20)

When i = j, this yields

        I_11 = m[(x_2)² + (x_3)²]
        I_22 = m[(x_3)² + (x_1)²]
        I_33 = m[(x_1)² + (x_2)²]

These are called the moments of inertia of the particle about the x_1-, x_2-, x_3-axes, respectively, and are the moments of inertia usually studied in a standard calculus course. When i ≠ j, (1.20) gives

        I_12 = −m x_1 x_2 = I_21 ,   I_23 = −m x_2 x_3 = I_32 ,   I_31 = −m x_3 x_1 = I_13

These are called the products of inertia.

        The moment of inertia of the particle in the new coordinate system x̄_i is given by

        Ī_ij = m(δ̄_ij x̄_k x̄_k − x̄_i x̄_j)      (i, j = 1, 2, 3)          (1.21)

To relate these quantities with I_ij (i, j = 1, 2, 3), we first observe that under the coordinate transformation we have

        x̄_k x̄_k = x_r x_r

(invariance of the Euclidean distance), and

        δ̄_ij = α_ip α_jq δ_pq

that is, δ̄_ij = δ_ij (because the Kronecker delta has the same value in any coordinate system). Substituting these identities in (1.21), we obtain

        Ī_ij = m[α_ip α_jq δ_pq x_r x_r − α_ip α_jq x_p x_q]
             = α_ip α_jq m(δ_pq x_r x_r − x_p x_q)                       (1.22)
             = α_ip α_jq I_pq

Equation (1.22) describes the manner by which the moment of inertia of a particle transforms under a change of coordinate system. It is a second order tensor called the moment of inertia tensor, and the quantities I_ij are the components of this tensor.

Definition 2. (Tensor of Order Two) A tensor of order two is a quantity consisting of 3² components a_ij (i, j = 1, 2, 3), which transform under a change of coordinate system x̄_i = α_ij x_j according to the law

        ā_ij = α_ip α_jq a_pq      (i, j = 1, 2, 3)                      (1.23)

where a_ij and ā_ij are the components of the tensor with respect to the coordinate systems x_i and x̄_i, respectively.


        The transformation law (1.23) can be inverted if we multiply both sides of the equation by α_ir α_js and sum with respect to i and j. We obtain

        a_rs = α_ir α_js ā_ij                                           (1.24)

in which α_ij α_ik = δ_jk.

        It is convenient to write a second order tensor in matrix form:

                  [ a_11  a_12  a_13 ]
        (a_ij) =  [ a_21  a_22  a_23 ]
                  [ a_31  a_32  a_33 ]

We set

        A = (a_ij) ,   Ā = (ā_ij) ,   Λ = (α_ij)

Then the product b_iq = α_ip a_pq represents the element in the i-th row and q-th column of the matrix product ΛA = B, while the product b_iq α_jq = α_ip α_jq a_pq represents the element in the i-th row and j-th column of the matrix product BΛ^T = ΛAΛ^T. Therefore, the transformation law (1.23) can be written in the matrix form

        Ā = Λ A Λ^T                                                     (1.25)

Since Λ^{-1} = Λ^T, the inverse transformation law (1.24) has the matrix form

        A = Λ^T Ā Λ                                                     (1.26)

It should be noted that matrix notation fails for tensors of higher order.

Example 1. A trivial example of a second order tensor is the Kronecker delta δ_ij which, by definition, has the same value in every coordinate system, that is, δ̄_ij = δ_ij. Hence we can write

        δ̄_ij = α_ip α_jq δ_pq

since α_ip α_jp = δ_ij. It is clear that the matrix representation of the Kronecker delta is the identity matrix, that is, (δ_ij) = I.

Example 2. Let a_i and b_j be the components of two vectors. The nine quantities c_ij = a_i b_j (i, j = 1, 2, 3) form the components of a second order tensor, known as the outer product of the two vectors.

        To prove that (c_ij) is indeed a second order tensor, we need to show that it transforms according to the law (1.23). Under a change of coordinates, the components a_i, b_j transform according to the laws

        ā_i = α_ip a_p ,   b̄_j = α_jq b_q

Hence

        c̄_ij = ā_i b̄_j = α_ip α_jq a_p b_q = α_ip α_jq c_pq

in accordance with (1.23).

Example 3. Given the second order tensor

                  [  0  −1   3 ]
        (a_ij) =  [  1   0   2 ]
                  [ −3  −2   0 ]

find the components of this tensor in the coordinate system x̄_i defined by x̄_i = α_ij x_j, where

                  [  0   0   1 ]
        (α_ij) =  [ −1   0   0 ]
                  [  0   1   0 ]

Solution: The new components are given by

        ā_ij = α_ip α_jq a_pq      (i, j = 1, 2, 3)

Since each row of (α_ij) has only one nonzero element, each sum reduces to a single term:

        ā_11 = α_13 α_13 a_33 = 0       ā_12 = α_13 α_21 a_31 = 3       ā_13 = α_13 α_32 a_32 = −2
        ā_21 = α_21 α_13 a_13 = −3      ā_22 = α_21 α_21 a_11 = 0       ā_23 = α_21 α_32 a_12 = 1
        ā_31 = α_32 α_13 a_23 = 2       ā_32 = α_32 α_21 a_21 = −1      ā_33 = α_32 α_32 a_22 = 0

Hence

                  [  0   3  −2 ]
        (ā_ij) =  [ −3   0   1 ]
                  [  2  −1   0 ]

The result can be checked by using the formula (1.25).
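A minimal NumPy check of Example 3, using the matrix form (1.25) with Λ = (α_ij):

```python
import numpy as np

A = np.array([[0, -1, 3],
              [1,  0, 2],
              [-3, -2, 0]], dtype=float)     # tensor components a_ij

L = np.array([[0, 0, 1],
              [-1, 0, 0],
              [0, 1, 0]], dtype=float)       # transformation matrix (alpha_ij)

A_bar = L @ A @ L.T                          # Eq. (1.25)
print(A_bar)
# [[ 0.  3. -2.]
#  [-3.  0.  1.]
#  [ 2. -1.  0.]]
```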

Example 4. Let a body of unit mass be located at the point P: (1, 1, 0). By (1.20) the components of the moment of inertia tensor are

        I_11 = 1 ,  I_22 = 1 ,  I_33 = 2
        I_12 = I_21 = −1 ,  I_23 = I_32 = 0 ,  I_13 = I_31 = 0

or, in matrix form,

                  [  1  −1   0 ]
        (I_ij) =  [ −1   1   0 ]
                  [  0   0   2 ]

Now suppose we rotate the coordinate system about the x_3-axis so that the x_1-axis passes through the point P. Let us designate this axis as x̄_1 and the others as x̄_2 and x̄_3 (see Fig. 1.5). Obviously, ī_1 = (i_1 + i_2)/√2, ī_2 = (−i_1 + i_2)/√2, ī_3 = i_3, so that the transformation effecting the rotation is defined by the matrix

                  [  √2/2   √2/2   0 ]
        (α_ij) =  [ −√2/2   √2/2   0 ]
                  [   0      0     1 ]

(Remember α_ij = ī_i · i_j.) Thus, with respect to the x̄_i-coordinate system, the moment of inertia tensor becomes

                  [ 0  0  0 ]
        (Ī_ij) =  [ 0  2  0 ]
                  [ 0  0  2 ]

as is readily verified by using (1.25). Notice that in the x̄_i-coordinate system the point P has the coordinates (√2, 0, 0).

Fig. 1.5: Principal axes of a tensor.

        Therefore, in the x̄_i-coordinate system all the products of inertia are zero, and the moments of inertia about the x̄_1-, x̄_2- and x̄_3-axes are Ī_11 = 0, Ī_22 = 2 and Ī_33 = 2, respectively. The axes x̄_i with respect to which Ī_ij = 0 for all i ≠ j are called the principal axes of the tensor.
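The result of Example 4 can be confirmed either by applying (1.25) directly or by an eigen-decomposition, since the principal moments of inertia are the eigenvalues of (I_ij). A short NumPy sketch:

```python
import numpy as np

I = np.array([[1, -1, 0],
              [-1, 1, 0],
              [0,  0, 2]], dtype=float)     # inertia tensor of the unit mass at P(1, 1, 0)

c = np.sqrt(2.0) / 2
L = np.array([[ c, c, 0],
              [-c, c, 0],
              [ 0, 0, 1]])                  # rotation taking the x1-axis through P

print(L @ I @ L.T)                          # Eq. (1.25): diag(0, 2, 2)

# Equivalently, the eigenvalues of I are the principal moments of inertia
print(np.linalg.eigvalsh(I))                # [0. 2. 2.]
```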

Example 5. Suppose that a certain quantity has components defined by a_11 = x_1 x_2, a_12 = x_2², a_21 = x_1², a_22 = x_1 x_2 with respect to a rectangular Cartesian coordinate system x_1, x_2 in a plane. If the quantity were a (plane) tensor of order two, its components in any other rectangular Cartesian coordinate system x̄_i would be given by

        ā_11 = x̄_1 x̄_2 ,   ā_12 = (x̄_2)² ,   ā_21 = (x̄_1)² ,   ā_22 = x̄_1 x̄_2

However, under a rotation of axes,

        x̄_1 = x_1 cos θ + x_2 sin θ
        x̄_2 = −x_1 sin θ + x_2 cos θ

we see that

        ā_11 = α_1i α_1j a_ij = x_1 x_2 cos²θ + x_2² sin θ cos θ + x_1² sin θ cos θ + x_1 x_2 sin²θ
              = x_1 x_2 + (x_1² + x_2²) sin θ cos θ

while, on the other hand,

        x̄_1 x̄_2 = (x_1 cos θ + x_2 sin θ)(−x_1 sin θ + x_2 cos θ)
                  = x_1 x_2 (cos²θ − sin²θ) + (x_2² − x_1²) sin θ cos θ

Thus ā_11 ≠ x̄_1 x̄_2, and therefore the quantity is not a tensor. In other words, the quantity given here cannot represent any physical entity.
The definition of a Cartesian tensor of order r (r > 2) is a straightforward extension of Definition 2.

Definition 3. A Cartesian tensor of order r (r > 2) is a quantity consisting of 3^r components a_{i1 i2 ... ir} which transform under a change of coordinates according to the law

        ā_{i1 i2 ... ir} = α_{i1 j1} α_{i2 j2} ⋯ α_{ir jr} a_{j1 j2 ... jr}      (1.27)
                           (i_k, j_k = 1, 2, 3 ;  k = 1, 2, ..., r)

where ā_{i1 i2 ... ir} and a_{j1 j2 ... jr} are the components of the tensor with respect to the x̄_i and x_i coordinate systems, respectively.

        A tensor of order 3 has 27 components a_ijk (i, j, k = 1, 2, 3) which transform according to the law

        ā_ijk = α_ip α_jq α_kr a_pqr

under a change of coordinates x̄_i = α_ij x_j (i = 1, 2, 3).

Example 6. Let a_i, b_i and c_i (i = 1, 2, 3) be the components of three nonzero vectors. Then the 27 quantities

        d_ijk = a_i b_j c_k      (i, j, k = 1, 2, 3)

form the components of a third order tensor known as the outer product of the vectors. To see this, we first note that under a change of coordinates we have

        ā_i = α_ip a_p ,   b̄_j = α_jq b_q ,   c̄_k = α_kr c_r

Hence

        d̄_ijk = ā_i b̄_j c̄_k = α_ip α_jq α_kr a_p b_q c_r = α_ip α_jq α_kr d_pqr

which shows that the quantities d_ijk indeed transform as components of a third order tensor.

1.5 Stress Tensor

When a body is under the influence of a system of forces and remains in


equilibrium, there is a tendency for the shape of the body to be distorted or deformed.
A solid body in such a state is said to be under strain. The resulting internal forces in
a body that is in a state of strain give rise to a stress at each point of the body. The
stress at a point is measured as force per unit area and depends not only on the force
but also on the orientation of the surface relative to the force.
        Consider a uniform thin cylindrical rod of cross-sectional area a that is being stretched by a force F (Fig. 1.6). Consider a plane section whose unit normal vector n makes an angle θ with the axis of the rod. By the area cosine principle, the area of this plane section is A = a sec θ. The stress vector across the plane section is

        S = F / A = (F/a) cos θ                                         (1.28)

It is clear from (1.28) that the stress vector depends on F as well as on the orientation of the plane, i.e., on n or θ. In fact, the magnitude of S is a maximum when θ = 0, and zero when θ = π/2.

Fig. 1.6: Stress in a thin rod.

The stress vector S on the plane area can be resolved into two components: one perpendicular to the plane section (along the unit normal vector n), denoted by s_n and called the tensile or normal stress, and one parallel to the plane section (perpendicular to n), denoted by s_t and called the shearing stress.

        Consider a point P in a body under strain in a coordinate system (x_1, x_2, x_3). An arbitrary element of volume of the body, with P as one of its vertices, has edges of lengths (Δx_1, Δx_2, Δx_3) as shown in Fig. 1.7. Let S_i denote the stress vector on the face that is perpendicular to the vector i_i, and let S′_i denote the stress vector on the opposite face. Since the body is in equilibrium, the sum of the forces (stress times area) acting on the element must be zero. Thus we have

        (S_1 + S′_1) Δx_2 Δx_3 + (S_2 + S′_2) Δx_1 Δx_3 + (S_3 + S′_3) Δx_2 Δx_1 = 0      (1.29)

Since the quantities Δx_1, Δx_2, Δx_3 are arbitrary, the equation implies that S′_1 = −S_1, S′_2 = −S_2, S′_3 = −S_3.

Fig. 1.7: Stress tensor.

        The stress vectors S_i (1 ≤ i ≤ 3) are represented by their components s_i1, s_i2, s_i3 such that S_i = s_ij i_j (i = 1, 2, 3). The quantities s_11, s_22, s_33 are the tensile or normal stresses, while the quantities s_ij (i ≠ j) are called the shearing stresses. At each point the stress is represented by the nine quantities s_ij (i, j = 1, 2, 3), which can be written in the matrix form

                  [ s_11  s_12  s_13 ]
        (s_ij) =  [ s_21  s_22  s_23 ]                                  (1.30)
                  [ s_31  s_32  s_33 ]

However, the shearing stresses s_ij (i ≠ j) are not altogether independent of one another. In fact, since the element of volume Δx_1 Δx_2 Δx_3 is also in equilibrium with respect to rotation, the sum of the moments of force (torques) about any axis must be zero. The net torque about the edge through P parallel to the x_3-axis is

        (s_12 Δx_2 Δx_3) Δx_1 − (s_21 Δx_1 Δx_3) Δx_2 = 0

The stresses s_11, s_22, s_31, s_32 are balanced by equal and opposite stresses on the opposite faces of the element, and s_13, s_23, s_33 have zero moment arm as they are directed along the x_3-axis; thus we obtain

        s_12 = s_21

In a similar manner, by considering the net torque about the edges parallel to the x_1- and x_2-axes, we find

        s_23 = s_32 ,   s_31 = s_13

Therefore s_ij = s_ji (i, j = 1, 2, 3), which means that the matrix (1.30) is symmetric.

        The quantities s_ij do transform according to the transformation law

        s̄_ij = α_ip α_jq s_pq                                           (1.31)

where s̄_ij (i, j = 1, 2, 3) represent the stress with respect to a new coordinate system x̄_i.

        The components of the stress vector S_n across a surface with unit normal vector n = α_ni i_i are given by

        s_ni = S_i · n = s_ij α_nj      (i = 1, 2, 3)

so that S_n = s_ni i_i. Consider an infinitesimal tetrahedron with P as a vertex and with the slant face perpendicular to n, as shown in Fig. 1.8.

Fig. 1.8: Stress vector across an arbitrary surface area.

Let ΔA denote the area of the slant face. Then, by the area cosine principle, the areas of the sides with normal vectors i_1, i_2, and i_3 are given by α_n1 ΔA, α_n2 ΔA and α_n3 ΔA, respectively. Since the element is in equilibrium, the force on the slant face must be equal to the sum of the forces on the three faces of the tetrahedron. Thus we have

        S_n ΔA = S_1 α_n1 ΔA + S_2 α_n2 ΔA + S_3 α_n3 ΔA

Equating the respective components, we obtain

        s_nj = s_1j α_n1 + s_2j α_n2 + s_3j α_n3 = α_np s_pj = n · S_j      (j = 1, 2, 3)      (1.32a)

Since s_ij = s_ji, we may write this in matrix form as

        [ s_n1 ]   [ s_11  s_12  s_13 ] [ α_n1 ]
        [ s_n2 ] = [ s_21  s_22  s_23 ] [ α_n2 ]                        (1.32b)
        [ s_n3 ]   [ s_31  s_32  s_33 ] [ α_n3 ]
Let ī_i be the base vectors of a new coordinate system x̄_i and let S̄_i denote the stress vector across a surface that is perpendicular to ī_i, i = 1, 2, 3. Then, by (1.32a), we have

        S̄_i = α_ip s_pq i_q

If s̄_ij denotes the components of S̄_i with respect to the new coordinate system x̄_i, then S̄_i = s̄_ij ī_j. Thus we have

        s̄_ij ī_j = α_ip s_pq i_q                                        (1.33)

We solve for s̄_ij by taking the dot product of both sides of (1.33) with ī_k, noting that ī_j · ī_k = δ_jk. Since α_jq = ī_j · i_q, we finally obtain

        s̄_ij = α_ip α_jq s_pq                                           (1.34)

Thus the quantities s_ij transform as the components of a second order tensor. This tensor is known as the stress tensor.

Example 1. The stress tensor at a point P is given by

                  [  2  −1   1 ]
        (s_ij) =  [ −1   2  −1 ]
                  [  1  −1   2 ]

What is the stress vector on a surface normal to the unit vector

        n = (2/3) i_1 + (1/3) i_2 − (2/3) i_3

at the point? What are the normal stress and the shearing stress on such a surface?

Solution: According to (1.32a), the components of the stress vector S_n on the surface normal to the vector n are given by

        s_n1 = S_1 · n = 2(2/3) + (−1)(1/3) + 1(−2/3) = 1/3
        s_n2 = S_2 · n = (−1)(2/3) + 2(1/3) + (−1)(−2/3) = 2/3
        s_n3 = S_3 · n = 1(2/3) + (−1)(1/3) + 2(−2/3) = −1

The normal stress, that is, the stress along the direction of n, is given by

        s_n = S_n · n = (1/3)(2/3) + (2/3)(1/3) + (−1)(−2/3) = 10/9

Thus the shearing stress is

        s_t = √( |S_n|² − s_n² ) = √( 14/9 − 100/81 ) = √26 / 9
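The arithmetic of Example 1 amounts to a matrix-vector product; a small NumPy sketch:

```python
import numpy as np

S = np.array([[2, -1, 1],
              [-1, 2, -1],
              [1, -1, 2]], dtype=float)      # stress tensor s_ij (symmetric)
n = np.array([2/3, 1/3, -2/3])               # unit normal of the surface

S_n = S @ n                                  # stress vector components (Eq. 1.32b)
s_normal = S_n @ n                           # normal stress  s_n = S_n . n
s_shear = np.sqrt(S_n @ S_n - s_normal**2)   # shearing stress magnitude

print(S_n)         # [ 0.333...  0.666... -1. ]
print(s_normal)    # 1.111... = 10/9
print(s_shear)     # 0.5665... = sqrt(26)/9
```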

Example 2. For plane stress, the components of the stress tensor are given by

        (s_ij) = [ s_11  s_12 ]
                 [ s_21  s_22 ]

where s_12 = s_21. Then, by (1.32a), the components s_n1 and s_n2 of the stress vector across a plane normal to n = cos θ i_1 + sin θ i_2 are given by

        s_n1 = s_11 cos θ + s_21 sin θ
        s_n2 = s_12 cos θ + s_22 sin θ

Thus the normal stress on such a plane is equal to

        s_n = S_n · n = s_n1 cos θ + s_n2 sin θ
            = s_11 cos²θ + 2 s_12 sin θ cos θ + s_22 sin²θ

and the shearing stress is given by

        s_t = √( |S_n|² − s_n² )

For example, if

        (s_ij) = [  2  −1 ]
                 [ −1   2 ]

then the components of the stress vector across a plane normal to the unit vector n = (√3/2) i_1 + (1/2) i_2 are

        s_1 = 2(√3/2) + (−1)(1/2) = √3 − 1/2
        s_2 = (−1)(√3/2) + 2(1/2) = −√3/2 + 1

Thus the normal stress across such a plane is

        s_n = (√3 − 1/2)(√3/2) + (−√3/2 + 1)(1/2) = 2 − √3/2

and the shearing stress is given by

        s_t = √( s_1² + s_2² − s_n² )
            = √( (√3 − 1/2)² + (1 − √3/2)² − (2 − √3/2)² ) = 1/2

Example 3. Consider a stress tensor given by

                  [ 0  1  1 ]
        (s_ij) =  [ 1  2  1 ]
                  [ 1  1  0 ]

If we change the coordinate system through the transformation x̄_i = α_ij x_j, where

                  [  √2/2    0    −√2/2 ]
        (α_ij) =  [  √3/3  −√3/3   √3/3 ]
                  [  √6/6   √6/3   √6/6 ]

then in the x̄_i-coordinate system the stress tensor becomes

                  [ −1  0  0 ]
        (s̄_ij) =  [  0  0  0 ]
                  [  0  0  3 ]

This can, of course, be verified by using the transformation law

        s̄_ij = α_ip α_jq s_pq

For instance, since α_12 = 0 and s_11 = s_33 = 0, we find

        s̄_11 = α_11 α_13 s_13 + α_13 α_11 s_31 = −1/2 − 1/2 = −1

Thus in the x̄_i-coordinate system the stress consists only of normal stresses; all shearing stresses are zero.
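A NumPy check of Example 3. The rows of (α_ij) are in fact normalized eigenvectors of (s_ij), which is why the transformed stress is diagonal; this anticipates the principal-axes discussion of Sec. 1.7:

```python
import numpy as np

S = np.array([[0, 1, 1],
              [1, 2, 1],
              [1, 1, 0]], dtype=float)

s2, s3, s6 = np.sqrt(2.0), np.sqrt(3.0), np.sqrt(6.0)
L = np.array([[1/s2,   0.0, -1/s2],
              [1/s3, -1/s3,  1/s3],
              [1/s6,  2/s6,  1/s6]])         # rows: unit vectors of the new axes

print(np.round(L @ S @ L.T, 10))             # diag(-1, 0, 3): only normal stresses remain
print(np.linalg.eigvalsh(S))                 # [-1. 0. 3.]: the same values as eigenvalues
```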

1.6 Algebra of Cartesian Tensors

Definition 1. The sum of two tensors of order r is a tensor of the same order whose components consist of the sums of the corresponding components of the two tensors.

        For example, if a_ij and b_km are the components of two second order tensors, then their sum is a second order tensor whose components are given by a_ij + b_ij. It is easy to see that c_ij = a_ij + b_ij transform according to the law

        c̄_ij = α_ip α_jq c_pq

where c̄_ij = ā_ij + b̄_ij. In fact, since a_ij and b_ij are the components of tensors, we know that

        ā_ij = α_ip α_jq a_pq ,   b̄_ij = α_ip α_jq b_pq

Hence

        c̄_ij = ā_ij + b̄_ij = α_ip α_jq (a_pq + b_pq) = α_ip α_jq c_pq

Definition 2 (Outer Product). The outer product (or simply product) of two
tensors of order r and s is a tensor of order r + s whose components consist of the
products of the various components of the tensors.

        For example, let a_ij and b_kmn be the components of a second order and a third order tensor, respectively. Then the outer product of these two tensors is a tensor of order five whose components are c_ijkmn = a_ij b_kmn (i, j, k, m, n = 1, 2, 3). The tensorial character of this outer product follows from those of a_ij and b_kmn. In fact, since ā_ij = α_ip α_jq a_pq and b̄_kmn = α_kr α_ms α_nt b_rst, we have

        c̄_ijkmn = ā_ij b̄_kmn = α_ip α_jq a_pq α_kr α_ms α_nt b_rst
                 = α_ip α_jq α_kr α_ms α_nt a_pq b_rst
                 = α_ip α_jq α_kr α_ms α_nt c_pqrst

Definition 3 (Contraction). The operation of setting two of the indices in the


components of a tensor of order r ≥ 2 equal and then summing with respect to the
repeated index is called contraction.

        For example, consider the components a_ij of a second order tensor. The contraction of this tensor yields the scalar

        a_ii = a_11 + a_22 + a_33

which is just the trace of the matrix (a_ij). For a third order tensor with components a_ijk, there are three possible contractions, namely,

        a_iik = a_11k + a_22k + a_33k      (k = 1, 2, 3)
        a_iji = a_1j1 + a_2j2 + a_3j3      (j = 1, 2, 3)
        a_ijj = a_i11 + a_i22 + a_i33      (i = 1, 2, 3)

It can be shown that each of these contractions results in a vector. In fact, the contraction of a tensor reduces the order of the tensor by 2.

Theorem 1. (Reduction of order by contraction) The contraction of a tensor of order r ≥ 2 results in a tensor of order r − 2.

        For example, consider a third order tensor a_ijk, where the contraction is with respect to the indices i and j. First we note that under a coordinate transformation we have

        ā_ijk = α_ip α_jq α_kr a_pqr

Setting i = j (summation implied), we find

        ā_iik = α_ip α_iq α_kr a_pqr = δ_pq α_kr a_pqr = α_kr a_ppr

since α_ip α_iq = δ_pq. Hence, if we set c̄_k = ā_iik (sum on i) and c_r = a_ppr (sum on p), then we have

        c̄_k = α_kr c_r

which shows that the quantities c_r have the tensor character of order one (a vector).

Definition 4 (Inner Product). An inner product of two tensors is a contraction of their outer product with respect to two indices, one belonging to each of the two tensors.

        For example, the components of the outer product of two vectors A = a_i i_i and B = b_i i_i are given by c_ij = a_i b_j. By contraction, we obtain the inner product

        c_ii = a_i b_i = a_1 b_1 + a_2 b_2 + a_3 b_3

For two second order tensors with components a_ij and b_km, there are four possible inner products, namely,

        c_ijim = a_ij b_im ,   c_ijki = a_ij b_ki      (sum on i)
        c_ijjm = a_ij b_jm ,   c_ijkj = a_ij b_kj      (sum on j)

By Theorem 1, each of these inner products gives rise to a second order tensor.

Quotient Rule

Consider a set of 3^r quantities

        X_{i1 i2 ... ir}      (i_1, i_2, ..., i_r = 1, 2, 3)             (1.35)

The quotient rule enables us to determine whether or not the given quantities form a tensor, without actually checking how they transform under a change of coordinates.

Theorem 2. If the outer or inner product of the set of quantities (1.35) with an arbitrary tensor of any order yields a tensor of the appropriate order, then the quantities are the components of a tensor of order r.

        For example, let the given quantities be X_ij and suppose that the outer product X_ij a_k with an arbitrary vector a_k is a tensor of order 3. Set b_ijk = X_ij a_k and let X̄_ij represent the quantities in a new coordinate system. Since the b_ijk are the components of a third order tensor, we have

        b̄_ijk = X̄_ij ā_k = α_ip α_jq α_kr b_pqr
                          = α_ip α_jq α_kr X_pq a_r
                          = α_ip α_jq X_pq (α_kr a_r)
                          = α_ip α_jq X_pq ā_k

Since the vector a_k (and hence ā_k) is arbitrary, this implies

        X̄_ij = α_ip α_jq X_pq

Thus the quantities X_ij are the components of a second order tensor.

Example 1. Let the components of a second order tensor be given by

                  [ 2  −1  0 ]
        (t_ij) =  [ 1   2  2 ]
                  [ 0  −3  1 ]

and let the components a_i of a vector be 1, −2, 2. Find the inner products t_ij a_j and a_i t_ij.

Solution: Set b_i = t_ij a_j. Then we have

        b_1 = t_1j a_j = 2(1) + (−1)(−2) + 0(2) = 4
        b_2 = t_2j a_j = 1(1) + 2(−2) + 2(2) = 1
        b_3 = t_3j a_j = 0(1) + (−3)(−2) + 1(2) = 8

Next, let c_j = a_i t_ij. Then

        c_1 = a_i t_i1 = 1(2) + (−2)(1) + 2(0) = 0
        c_2 = a_i t_i2 = 1(−1) + (−2)(2) + 2(−3) = −11
        c_3 = a_i t_i3 = 1(0) + (−2)(2) + 2(1) = −2
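The two inner products of Example 1 are simply the matrix-vector products in the two possible orders; they differ because (t_ij) is not symmetric. A NumPy sketch:

```python
import numpy as np

T = np.array([[2, -1, 0],
              [1,  2, 2],
              [0, -3, 1]], dtype=float)
a = np.array([1.0, -2.0, 2.0])

print(T @ a)    # t_ij a_j  -> [ 4.  1.  8.]
print(a @ T)    # a_i t_ij  -> [  0. -11.  -2.]
```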

Example 2. Let s_ij denote the components of a stress tensor at a point. Then the stress vector S_n across an area with normal vector n = n_i i_i is the inner product of the stress tensor and the vector n, that is,

        s_nj = s_ij n_i = s_ji n_i ,   since s_ij = s_ji.

Symmetric and Anti-Symmetric Tensors

        A second order tensor whose components satisfy the condition a_ij = a_ji is called a symmetric tensor, while one that satisfies a_ij = −a_ji is called an anti-symmetric tensor. It is clear that for an anti-symmetric tensor a_ii = 0 (no sum on i) for i = 1, 2, 3. Thus a second order anti-symmetric tensor has the matrix representation

        [   0      a_12    a_13 ]
        [ −a_12     0      a_23 ]
        [ −a_13   −a_23     0   ]

Such a matrix is called a skew-symmetric matrix.

        The definition of a symmetric or anti-symmetric tensor extends to tensors of higher order. For example, a third order tensor with components a_ijk is said to be symmetric or anti-symmetric with respect to the first two indices according as a_ijk = a_jik or a_ijk = −a_jik. It is said to be symmetric or anti-symmetric with respect to the last two indices according as a_ijk = a_ikj or a_ijk = −a_ikj.

        If a tensor is symmetric or anti-symmetric in one coordinate system, it remains symmetric or anti-symmetric in all other coordinate systems. For example, suppose a_ij = a_ji in a coordinate system x_i. Under a transformation of coordinates x̄_i = α_ij x_j, we see that

        ā_ij = α_ip α_jq a_pq = α_ip α_jq a_qp = α_jq α_ip a_qp = ā_ji

Thus the tensor is also symmetric in the x̄_i coordinate system.

        Every tensor of order r ≥ 2 can be expressed as the sum of a symmetric and an anti-symmetric tensor of the same order. For example, for a third order tensor with components a_ijk, we set

        s_ijk = (a_ijk + a_jik) / 2 ,   t_ijk = (a_ijk − a_jik) / 2

Then, clearly, a_ijk = s_ijk + t_ijk and s_ijk = s_jik, t_ijk = −t_jik, so that s_ijk and t_ijk are the components of a symmetric and an anti-symmetric tensor of third order, respectively.
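For a second order tensor the symmetric and anti-symmetric parts are (A + Aᵀ)/2 and (A − Aᵀ)/2; a one-line check with an arbitrary matrix in NumPy:

```python
import numpy as np

A = np.array([[1.0, 4.0, -2.0],
              [0.0, 3.0,  5.0],
              [7.0, 1.0,  2.0]])     # arbitrary second order tensor components

S = (A + A.T) / 2                    # symmetric part
K = (A - A.T) / 2                    # anti-symmetric (skew) part

print(np.allclose(A, S + K))         # True
print(np.allclose(K, -K.T))          # True: K is skew-symmetric
```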

1.7 Principal Axes of Second Order Tensors

        Let t_ij (i, j = 1, 2, 3) denote the components of a second order tensor T. The coordinate axes with respect to which t_ij = 0 (i ≠ j) are called the principal axes of the tensor T. Let a_i (i = 1, 2, 3) denote the components of a vector A. The inner product of T and A (with respect to the second index of t_ij) is a vector with components b_i = t_ij a_j (i = 1, 2, 3). It turns out that the principal axes of the tensor T are determined by those vectors A whose inner product with the tensor is parallel to the vector itself, that is,

        t_ij a_j = λ a_i      (i = 1, 2, 3)                             (1.36)

where λ is a scalar. Such nonzero vectors, if they exist, are called the eigenvectors of the tensor, and their directions are called the principal directions. The principal axes of the tensor are simply the axes determined by the principal directions. The values of the scalar λ that satisfy (1.36) are called the eigenvalues of the tensor. The eigenvalues are precisely the components t̄_11, t̄_22, t̄_33 of the tensor with respect to the principal axes.


To determine the principal axes of the tensor T , we must solve the system of
equations (1.36) for the eigenvectors corresponding to each eigenvalue. The system
of equations defined by (1.36) consists of
( t11 − λ ) a1 + t12 a2 + t13a3 =
0
t21a1 + ( t22 − λ ) a2 + t23 a3 =0 (1.37)
t31a1 + t32 a2 + ( t33 − λ ) a3 =0
for the components a1 , a 2 , a3 of an eigenvector. For nonzero vectors, the determinant
of the coefficients of the system (1.37) must vanish, that is
t11 − λ t12 t13
∆(λ ) = t 21 t 22 − λ t 23 = 0 (1.38)
t 31 t 32 t 33 − λ
This determinant leads to a cubic equation in λ , known as the characteristic equation
of the tensor. Thus the eigenvalues of the tensor are simply the zeros or roots of the
characteristic equation. For each eigenvalue, the system (1.37) can be solved for the
components a1 , a 2 , a3 of the corresponding nonzero eigenvector. Clearly, if A is an

eigenvector, so is cA for any constant c . Thus an eigenvector of a tensor is unique

31
up to a constant factor. In linear algebra, such problem is known as an eigenvalue
problem.

Eigenvalue Problems for Symmetric Tensors

        For a symmetric tensor, the eigenvalues are all real. Suppose λ is an eigenvalue, which might be complex, and let a_i (1 ≤ i ≤ 3) denote the components of the corresponding eigenvector A; then

        t_ij a_j = λ a_i      (i = 1, 2, 3)

Multiplying this equation by the complex conjugate a_i* of a_i and summing with respect to i, we obtain

        a_i* t_ij a_j = λ a_i* a_i                                      (1.39)

If we take the complex conjugate of both sides of this equation, noting that the quantities t_ij are real, we find

        a_i t_ij a_j* = λ* a_i a_i*                                     (1.40)

Since the tensor is symmetric, we can show that the left-hand sides of (1.39) and (1.40) are identical. Indeed, since t_ij = t_ji, we have

        a_i* t_ij a_j = a_j* t_ji a_i = a_i t_ji a_j* = a_i t_ij a_j*

Therefore,

        λ a_i* a_i = λ* a_i a_i*   or   (λ − λ*) a_i* a_i = 0

Since a_i* a_i > 0, this implies λ − λ* = 0, which means that λ is a real number.

        The eigenvectors of a symmetric tensor corresponding to distinct eigenvalues are orthogonal. This makes the principal axes of a symmetric tensor a natural choice of axes for a new rectangular Cartesian coordinate system. To prove the orthogonality property, let a_i and b_i denote the components of two eigenvectors A and B corresponding to the two distinct eigenvalues λ and μ, respectively. Then

        t_ij a_j = λ a_i ,   t_ij b_j = μ b_i

If we multiply the first equation by b_i, the second equation by a_i, and then sum with respect to i, we obtain

        b_i t_ij a_j = λ a_i b_i ,   a_i t_ij b_j = μ a_i b_i

Since t_ij = t_ji, by rearranging the factors and relabeling indices, we find

        λ a_i b_i = b_i t_ij a_j = a_j t_ij b_i = a_j t_ji b_i = a_i t_ij b_j = μ a_i b_i

therefore,

        (λ − μ) a_i b_i = 0

Since λ ≠ μ, we conclude that a_i b_i = A · B = 0, which says that the eigenvectors are orthogonal.
For symmetric tensors, even when the eigenvalues are not all distinct (repeated eigenvalues), we can still determine three mutually orthogonal eigenvectors.
        When we introduce a new coordinate system x̄_i with axes along the eigenvectors (ordered so as to be right-handed), the new components t̄_ij of the tensor satisfy t̄_ij = 0 for i ≠ j, with t̄_11, t̄_22, t̄_33 assuming the values of the three eigenvalues λ_1, λ_2, λ_3. Let n_k^(i) (k = 1, 2, 3) denote the components of the normalized eigenvector ī_i = A_i / |A_i| (i = 1, 2, 3) that corresponds to the eigenvalue λ_i (i = 1, 2, 3), that is, ī_i = n_k^(i) i_k. Then the transformation from the x_i coordinate system to the new x̄_i-coordinate system with base vectors ī_i is given by

        x̄_i = α_ij x_j ,   where α_ij = ī_i · i_j = n_j^(i)

Thus the row elements of the transformation matrix Λ = (α_ij) are the components of the normalized eigenvectors. With respect to the new coordinate system x̄_i, the components of the tensor t_ij assume the following values:

        t̄_11 = λ_1 ,   t̄_22 = λ_2 ,   t̄_33 = λ_3 ,   t̄_ij = 0 for i ≠ j      (1.41)

By the transformation law, we have

        t̄_ij = α_ip α_jq t_pq = n_p^(i) n_q^(j) t_pq                    (1.42)

From (1.36),

        t_pq n_q^(j) = λ_j n_p^(j)      (no sum on j)

so that (1.42) becomes

        t̄_ij = n_p^(i) n_q^(j) t_pq = λ_j n_p^(i) n_p^(j) = λ_j δ_ij

This yields the relations given in (1.41).

Example 1. Consider the symmetric tensor

                  [ 1  1  0 ]
        (t_ij) =  [ 1  2  1 ]
                  [ 0  1  1 ]

The characteristic equation is given by

        | 1 − λ    1       0     |
        |   1     2 − λ    1     |  =  λ (1 − λ)(λ − 3) = 0
        |   0       1     1 − λ  |

and so the eigenvalues are λ = 0, 1, 3. For λ = 0, the system of equations (1.37) becomes

        a_1 + a_2 = 0 ,   a_1 + 2a_2 + a_3 = 0 ,   a_2 + a_3 = 0

from which we find a_1 = −a_2, a_3 = −a_2. Choosing a_2 = −1, we obtain the eigenvector A_1 = i_1 − i_2 + i_3.

        Next, when λ = 1, the system of equations (1.37) reduces to

        a_2 = 0 ,   a_1 + a_2 + a_3 = 0

By choosing a_1 = 1, we obtain the corresponding eigenvector A_2 = i_1 − i_3. Finally, when λ = 3, the system of equations becomes

        −2a_1 + a_2 = 0 ,   a_1 − a_2 + a_3 = 0 ,   a_2 − 2a_3 = 0

from which we find a_2 = 2a_1, a_1 = a_3. Setting a_1 = 1, we obtain the corresponding eigenvector A_3 = i_1 + 2i_2 + i_3.

        It is easily verified that the three eigenvectors are mutually orthogonal. Moreover, since the scalar triple product (A_1 A_2 A_3) is positive, the eigenvectors form a right-handed triple. (In case the eigenvectors do not form a right-handed triple, we need only interchange the order of two of them and relabel the vectors accordingly.) Thus the principal axes of the tensor are along the directions of the eigenvectors. The transformation from the rectangular Cartesian coordinates x_i to the new coordinate system x̄_i determined by the principal axes is defined by x̄_i = α_ij x_j, where α_ij = ī_i · i_j. Thus, taking the dot products of the base vectors, the transformation is represented by the matrix

                  [ 1/√3  −1/√3   1/√3 ]
        (α_ij) =  [ 1/√2    0    −1/√2 ]
                  [ 1/√6   2/√6   1/√6 ]

Now, with respect to the principal axes, the components of the given tensor are represented by the matrix

                  [ 0  0  0 ]
        (t̄_ij) =  [ 0  1  0 ]
                  [ 0  0  3 ]

The elements t̄_ij can be verified either from the transformation law t̄_ij = α_ip α_jq t_pq or from the matrix product

        (t̄_ij) = (α_ip)(t_pq)(α_jq)^T
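NumPy's symmetric eigensolver reproduces Example 1 directly. Note that eigh returns normalized eigenvectors as columns, so the transformation matrix (α_ij) of the principal axes is their transpose; a sketch:

```python
import numpy as np

T = np.array([[1, 1, 0],
              [1, 2, 1],
              [0, 1, 1]], dtype=float)

lam, V = np.linalg.eigh(T)         # eigenvalues in ascending order, eigenvectors as columns
print(lam)                         # approximately [0. 1. 3.]

L = V.T                            # rows = normalized eigenvectors = matrix (alpha_ij)
if np.linalg.det(L) < 0:           # enforce a right-handed triple if necessary
    L[0] = -L[0]

print(np.round(L @ T @ L.T, 10))   # diag(0, 1, 3): the tensor referred to its principal axes
```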

Example 2. Consider the symmetric tensor

                  [ 3   0   0  ]
        (t_ij) =  [ 0   4   √3 ]
                  [ 0   √3  6  ]

Its characteristic equation is given by

        | 3 − λ    0       0     |
        |   0     4 − λ    √3    |  =  (3 − λ)(λ − 7)(λ − 3) = 0
        |   0      √3     6 − λ  |

and so the eigenvalues are λ = 3, 3, 7, with λ = 3 a root of multiplicity 2. Corresponding to λ = 3, the system of equations (1.37) gives

        a_2 + √3 a_3 = 0 ,   √3 a_2 + 3 a_3 = 0

which are equivalent. Hence a_2 = −√3 a_3, while a_1 and a_3 are arbitrary. By choosing these constants judiciously, we can obtain two orthogonal eigenvectors corresponding to the eigenvalue λ = 3. In fact, if we choose a_1 = 1, a_2 = 0, a_3 = 0, we obtain A_1 = i_1. By choosing a_1 = 0, a_2 = √3, a_3 = −1, we obtain a second eigenvector A_2 = √3 i_2 − i_3, which is orthogonal to A_1.

        Finally, when λ = 7, we solve the system of equations

        −4a_1 = 0 ,   −3a_2 + √3 a_3 = 0 ,   √3 a_2 − a_3 = 0

for which a solution is a_1 = 0, a_2 = 1, a_3 = √3. Thus A_3 = i_2 + √3 i_3 is an eigenvector corresponding to λ = 7. Clearly this vector is orthogonal to both A_1 and A_2. These three eigenvectors determine the principal axes of the given tensor. With respect to these principal axes, the tensor assumes the values

        t̄_11 = t̄_22 = 3 ,   t̄_33 = 7 ,   t̄_ij = 0 for i ≠ j

These values can readily be verified from the transformation law t̄_ij = α_ip α_jq t_pq, where α_ij = ī_i · i_j and ī_i = A_i / |A_i|. In fact, we find

                  [ 1    0      0   ]
        (α_ij) =  [ 0   √3/2  −1/2  ]
                  [ 0   1/2    √3/2 ]

1.8 Differentiation of Cartesian Tensor Fields

        A tensor field of order two is characterized by a set of tensor components a_ij(x_1, x_2, x_3), or briefly a_ij(x_k), which depend on the coordinates x_i (1 ≤ i ≤ 3). Sometimes the components of a tensor field may also depend on a parameter that is independent of the space coordinates x_i. For example, the components may depend on a parameter t representing time, which leads to what is called an unsteady tensor field.

        A tensor field of order zero is simply a scalar field. If φ(x_k) denotes a scalar field, then under a transformation of coordinates x̄_i = α_ij x_j (or x_j = α_ij x̄_i), we have

        φ(x_j) = φ(α_ij x̄_i) = φ̄(x̄_i)                                  (1.43)

On the other hand, a tensor field of order one is simply a vector field. If a_i(x_p) denotes the components of a vector field, then under the transformation we have

        ā_i(x̄_j) = α_ip a_p(x_q) = α_ip a_p(α_jq x̄_j)                   (1.44)
Transformation laws for tensors of higher order can be written in a similar manner.

Tensor Gradient

        Assume that the components of a tensor field can be differentiated as often as needed and that the derivatives are continuous in the domain of definition. To examine how the partial derivatives ∂φ(x_i)/∂x_j of a scalar field transform under a change of coordinates, we apply the chain rule and find

        ∂φ/∂x̄_i = (∂φ/∂x_j)(∂x_j/∂x̄_i) = α_ij ∂φ/∂x_j                   (1.45)

since ∂x_j/∂x̄_i = α_ij, in view of the inverse transformation x_j = α_ij x̄_i. Hence, if

        a_i = ∂φ(x_j)/∂x_i   and   ā_i = ∂φ̄(x̄_j)/∂x̄_i ,

then (1.45) becomes

        ā_i = α_ij a_j                                                  (1.46)

Since i_j = α_ij ī_i, it follows from (1.45) that

        (∂φ/∂x̄_i) ī_i = α_ij (∂φ/∂x_j) ī_i = (∂φ/∂x_j) i_j              (1.47)

This shows that the representation of grad φ(x_j) is invariant with respect to a transformation of coordinates.
        The components of a vector field transform according to (1.44). By the chain rule, we find

        ∂ā_i/∂x̄_j = α_ip (∂a_p/∂x_q)(∂x_q/∂x̄_j) = α_ip α_jq ∂a_p/∂x_q    (1.48)

Setting t̄_ij = ∂ā_i/∂x̄_j and t_pq = ∂a_p/∂x_q, we conclude from (1.48) that the partial derivatives t_ij form the components of a second order tensor field. Similarly, if a_ij(x_k) denote the components of a second order tensor field, the partial derivatives ∂a_ij/∂x_k = t_ijk become the components of a third order tensor field. In general, it can be shown by induction that partial differentiation of a tensor field of order r results in a tensor field of order r + 1. We call the partial derivatives of a tensor field of arbitrary order the tensor gradient. Thus the tensor gradient of a vector field is a second order tensor.

Theorem 1. The tensor gradient of a Cartesian tensor field of order r is a tensor field of order r + 1.

        Theorem 1 is true only for Cartesian tensor fields, because for such tensors all the admissible coordinate transformations are of the form x̄_i = α_ij x_j, where the coefficient matrix (α_ij) is a constant orthogonal matrix. The transformation equations are linear and represent rotations of the coordinate axes.

Divergence of a Tensor Field

        Consider the components ∂a_i/∂x_j of the tensor gradient of a vector field. By contraction we obtain

        ∂a_i/∂x_i = ∂a_1/∂x_1 + ∂a_2/∂x_2 + ∂a_3/∂x_3

which is precisely the divergence of the vector field. Further, from the transformation formula (1.48), we find

        ∂ā_i/∂x̄_i = α_ip α_iq ∂a_p/∂x_q = δ_pq ∂a_p/∂x_q = ∂a_p/∂x_p     (1.49)

Equation (1.49) shows that the divergence of a vector field is invariant under a transformation of coordinates or, equivalently, that the divergence of a vector field is a scalar.
        Likewise, contraction with respect to the indices j, k in the tensor gradient ∂a_ij/∂x_k of a second order tensor leads to the components b_i = ∂a_ij/∂x_j of a vector field. In fact, from the transformation law

        ∂ā_ij/∂x̄_k = α_ip α_jq α_kr ∂a_pq/∂x_r

we obtain

        b̄_i = ∂ā_ij/∂x̄_j = α_ip δ_qr ∂a_pq/∂x_r = α_ip ∂a_pq/∂x_q = α_ip b_p

which shows that the quantities b_i indeed form the components of a vector field. Similarly, the contraction with respect to the indices i, k yields the components of a vector field b_j = ∂a_ij/∂x_i. The quantities ∂a_ij/∂x_i and ∂a_ij/∂x_j are called the components of the divergence of the tensor field a_ij with respect to the indices i and j, respectively. In the general case, we define the divergence of a tensor field with respect to any one of its indices as the contraction of the corresponding tensor gradient with respect to that index. It follows from Theorem 1 above and Theorem 1 of Sec. 1.6 that the divergence of a tensor field of order r yields a tensor field of order r − 1.

Example 1. Let a_1 = x_2 x_3, a_2 = x_1² x_3, a_3 = x_2 x_3² be the components of a vector field in a domain D. Then the components of the gradient of this vector field are given by

                          [    0        x_3      x_2     ]
        (∂a_i / ∂x_j) =   [ 2x_1 x_3     0       x_1²    ]
                          [    0        x_3²    2x_2 x_3 ]

The divergence is given by

        ∂a_i/∂x_i = 2 x_2 x_3
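Symbolic differentiation reproduces this example; a short sketch assuming SymPy is available:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
a = sp.Matrix([x2*x3, x1**2*x3, x2*x3**2])           # vector field components a_i
x = sp.Matrix([x1, x2, x3])

grad_a = a.jacobian(x)                               # tensor gradient da_i/dx_j
div_a = sum(sp.diff(a[i], x[i]) for i in range(3))   # contraction da_i/dx_i

print(grad_a)   # Matrix([[0, x3, x2], [2*x1*x3, 0, x1**2], [0, x3**2, 2*x2*x3]])
print(div_a)    # 2*x2*x3
```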

Example 2. Let the components of a second order tensor field be given by

                  [   0        x_1 x_2    x_3²    ]
        (a_ij) =  [  x_1²      x_2 x_3     0      ]
                  [ x_3 x_1      0       x_3 x_2  ]

Then the components of the divergence with respect to the index i are given by the formula

        b_j = ∂a_ij/∂x_i = ∂a_1j/∂x_1 + ∂a_2j/∂x_2 + ∂a_3j/∂x_3      (j = 1, 2, 3)

which yields

        b_1 = x_1 ,   b_2 = x_2 + x_3 ,   b_3 = x_2

These are the components of a vector field.

        The divergence with respect to the second index j is given by

        b_i = ∂a_ij/∂x_j = ∂a_i1/∂x_1 + ∂a_i2/∂x_2 + ∂a_i3/∂x_3      (i = 1, 2, 3)

which gives

        b_1 = x_1 + 2x_3 ,   b_2 = 2x_1 + x_3 ,   b_3 = x_2 + x_3

Notice that the two divergences are not the same.

1.9 Strain Tensors
When a solid body is acted on by external forces which are in equilibrium, its configuration exhibits a certain deformation. The deformation is usually described in terms of the change in the relative distance between two infinitely close points before and after deformation. This gives rise to what is called a strain tensor. Suppose P and Q are two points which are very close to each other in a body subjected to external forces in equilibrium. Let the position vectors of these points with respect to a rectangular Cartesian coordinate system be denoted by R and R + ΔR (Fig. 1.9) before deformation.

Fig. 1.9: Strain tensor.

        Suppose that after deformation the points P and Q have moved to the points P′ and Q′, respectively. Let U(R) denote the displacement of P to P′ and U(R + ΔR) the displacement of Q to Q′. The displacement vector U depends on the position of the point, since different points in the body will in general experience different displacements. The position vectors of P′ and Q′ are then given by

        OP′ = R + U(R)   and   OQ′ = (R + ΔR) + U(R + ΔR)               (1.50)

so that the relative position vector of P′ and Q′ is

        ΔR′ = OQ′ − OP′ = (R + ΔR) + U(R + ΔR) − (R + U(R))
            = ΔR + U(R + ΔR) − U(R)                                     (1.51)

Thus the change in the relative position between P, Q and P′, Q′ is represented by

        ΔU = ΔR′ − ΔR = U(R + ΔR) − U(R)                                (1.52)
Let R = x_i i_i, ΔR = Δx_i i_i, ΔR′ = Δx′_i i_i, and U(R) = u_i(x_j) i_i, where the u_i are continuously differentiable. Consider the i-th component of (1.52),

        Δu_i = Δx′_i − Δx_i = u_i(x_j + Δx_j) − u_i(x_j)

Since the points P, Q are sufficiently close, the quantities Δx_i are very small. Hence, by the mean value theorem of differential calculus, we can write

        Δu_i = u_i(x_j + Δx_j) − u_i(x_j) = (∂u_i/∂x_j) Δx_j      (sum on j)      (1.53)

Thus the components of the change in the relative position vectors of the points before and after deformation are linear combinations of the Δx_j (j = 1, 2, 3). In differential form, this can be written as

               [ du_1 ]   [ ∂u_1/∂x_1  ∂u_1/∂x_2  ∂u_1/∂x_3 ] [ dx_1 ]
        dU =   [ du_2 ] = [ ∂u_2/∂x_1  ∂u_2/∂x_2  ∂u_2/∂x_3 ] [ dx_2 ]  = (T_ij) dR
               [ du_3 ]   [ ∂u_3/∂x_1  ∂u_3/∂x_2  ∂u_3/∂x_3 ] [ dx_3 ]

The quantities

        T_ij = ∂u_i/∂x_j      (i, j = 1, 2, 3)                          (1.54)

are the components of the tensor gradient of the displacement vector U. By Theorem 1, this tensor gradient is a second order tensor field, and it is called the displacement tensor. The symmetric part of the displacement tensor corresponds to the strain tensor.
        The strain at the point P in the direction of ΔR is defined as the ratio

        S = ( |ΔR′| − |ΔR| ) / |ΔR|                                     (1.55)

Assume that ΔU is much smaller than either ΔR or ΔR′; then the change in distance |ΔR′| − |ΔR| may be approximated by the projection of ΔU on ΔR (Fig. 1.10), that is,

        |ΔR′| − |ΔR| = ΔU · ΔR / |ΔR|

Fig. 1.10: Projection of ΔU on ΔR.

Hence (1.55) can be written as

        S = ΔU · ΔR / |ΔR|² = (1 / |ΔR|²) Δu_i Δx_i

In view of (1.53), this becomes

        S = (1 / |ΔR|²) Δu_i Δx_i = (1 / |ΔR|²) (∂u_i/∂x_j) Δx_j Δx_i = (∂u_i/∂x_j) n_i n_j      (1.56)

where n_i = Δx_i / |ΔR| (i = 1, 2, 3) are the direction cosines of ΔR. Thus the strain is a quadratic function of the direction cosines of ΔR with coefficients which are the components of the displacement tensor (1.54).
        In matrix form, the strain (1.56) can be written as

                                [ S_11  S_12  S_13 ] [ n_1 ]
        S = [n_1, n_2, n_3]     [ S_21  S_22  S_23 ] [ n_2 ]
                                [ S_31  S_32  S_33 ] [ n_3 ]

where

        S_ij = (1/2)(∂u_i/∂x_j + ∂u_j/∂x_i) = S_ji      (i, j = 1, 2, 3)      (1.57)

(the skew-symmetric part of the displacement tensor contributes nothing to this quadratic form). The quantities S_ij are the components of a second order tensor field called the strain tensor. It is clear that (S_ij) is just the symmetric part of the displacement tensor when (1.54) is written as

        (T_ij) = (S_ij) + (K_ij)

where the components of the skew-symmetric part (K_ij) are given by

        K_ij = (1/2)(∂u_i/∂x_j − ∂u_j/∂x_i)      (i, j = 1, 2, 3)       (1.58)
        The skew-symmetric tensor field (K_ij) contributes to the deformation of the body in a different way. A vector ω = ω_i i_i can be formed from the three independent components K_12, K_23, K_31 of the skew-symmetric tensor by defining

        ω_k = (1/2)(∂u_j/∂x_i − ∂u_i/∂x_j) = K_ji                       (1.59)

where i, j, k are cyclic permutations of 1, 2, 3. Then, from (1.53), we have

        Δu_i = S_ij Δx_j + K_ij Δx_j                                    (1.60)

Now the term K_ij Δx_j = ω_j Δx_k − ω_k Δx_j (i, j, k cyclic) in (1.60) is precisely the i-th component of the cross product ω × ΔR. This product represents the displacement due to an infinitesimal rotation represented by ω (Fig. 1.11); that is, the term K_ij Δx_j in (1.60) corresponds to a deformation due to an infinitesimal rigid-body rotation about an axis passing through P and parallel to ω.

Fig. 1.11: Deformation due to an infinitesimal rotation.

        If the displacement tensor is zero, that is, ∂u_i/∂x_j = 0 for i, j = 1, 2, 3, then (1.52) implies ΔR′ = ΔR, which means that the deformation consists only of a translation. Geometrically, this means that the configuration P P′ Q′ Q is a parallelogram. On the other hand, if the strain tensor (S_ij) is zero, then from (1.60) we obtain

        Δx′_i = Δx_i + K_ij Δx_j

which shows that the deformation consists of a translation and a rotation.

        It is interesting to note that, by the definition (1.59), we have

        ω = ω_i i_i = (1/2) curl U

where U is the displacement vector of the point P. Hence, if curl U = 0, there is no rotation and the deformation is purely due to translation and strain.
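To make the decomposition of the displacement gradient concrete, the sketch below (SymPy, with an arbitrarily chosen displacement field) computes the displacement tensor, its strain and rotation parts, and the rotation vector ω = (1/2) curl U:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
x = sp.Matrix([x1, x2, x3])
u = sp.Matrix([x2*x3, x1**2*x3, x2*x3**2])   # arbitrary displacement field u_i(x_j)

T = u.jacobian(x)        # displacement tensor T_ij = du_i/dx_j   (Eq. 1.54)
S = (T + T.T) / 2        # strain tensor S_ij                      (Eq. 1.57)
K = (T - T.T) / 2        # rotation tensor K_ij                    (Eq. 1.58)

# rotation vector: omega_k = K_ji for cyclic i, j, k, i.e. omega = (1/2) curl U
omega = sp.Matrix([K[2, 1], K[0, 2], K[1, 0]])

print(S)
print(omega)
```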

