
Advanced Structural Analysis

Devdas Menon
Professor, IIT Madras
(dmenon@iitm.ac.in)

National Programme on Technology Enhanced Learning (NPTEL)
www.nptel.ac.in
Lecture 17, Module 3: Basic Matrix Concepts

Advanced Structural Analysis: Modules
1. Review of basic structural analysis 1 (6 lectures)
2. Review of basic structural analysis 2 (10 lectures)
3. Basic matrix concepts
4. Matrix analysis of structures with axial elements
5. Matrix analysis of beams and grids
6. Matrix analysis of plane and space frames
7. Analysis of elastic instability and second-order effects

Module 3: Basic Matrix Concepts
Review of matrix algebra.
Introduction to matrix structural analysis (force and displacement transformations; stiffness and flexibility matrices; basic formulations; equivalent joint loads).

[Course roadmap figure: Review of Basic Concepts in Structural Analysis; Matrix Concepts and Methods; Structures with Axial Elements; Beams and Grids; Plane and Space Frames; Elastic Instability and Second-order Analysis]
Matrix Concepts and Algebra
1 INTRODUCTION
2 MATRIX
3 VECTOR
4 ELEMENTARY MATRIX OPERATIONS
5 MATRIX MULTIPLICATION
6 TRANSPOSE OF A MATRIX
7 RANK OF A MATRIX
8 LINEAR SIMULTANEOUS EQUATIONS
9 MATRIX INVERSION
10 EIGENVALUES AND EIGENVECTORS
Matrices in Structural Analysis

LOADS (input) -> STRUCTURE (system) -> RESPONSE? (output)

$$\begin{Bmatrix} \mathbf{F}_A \\ \mathbf{F}_R \end{Bmatrix} = \begin{bmatrix} \mathbf{k}_{AA} & \mathbf{k}_{AR} \\ \mathbf{k}_{RA} & \mathbf{k}_{RR} \end{bmatrix} \begin{Bmatrix} \mathbf{D}_A \\ \mathbf{D}_R \end{Bmatrix}$$

Input: load vector F_A, support displacements D_R, initial deformations D*.
Output: displacement vector D_A, support reactions F_R, internal forces F*, member deformations D*.
Analysis software packages

Introduction
There is an increasing tendency among modern structural engineers to lean heavily on software packages for everything. This induces a false sense of knowledge, security and power. The computer is indeed a powerful tool and an asset for any structural engineer. It is dangerous, however, to make the tool one's master, and to make it a convenient substitute for human knowledge, experience and creative thinking.
Ref.: Preface to Advanced Structural Analysis
By definition, a matrix is a rectangular array of elements arranged in horizontal rows and vertical columns. The entries of a matrix, called elements, are scalar quantities (commonly numbers, but they may also be functions, operators or even matrices, called sub-matrices, themselves).

$$\mathbf{A} = [\mathbf{A}] = [\mathbf{A}]_{m \times n} = [a_{ij}] = [a_{ij}]_{m \times n} = \begin{bmatrix} a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\ a_{21} & a_{22} & a_{23} & \cdots & a_{2n} \\ a_{31} & a_{32} & a_{33} & \cdots & a_{3n} \\ \vdots & \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & a_{m3} & \cdots & a_{mn} \end{bmatrix}$$

The order of the matrix is $m \times n$.

If m = n, the matrix is a square matrix. If $a_{ij} = 0$ for all i, j, it is the null matrix O. The identity matrix I is the square matrix with $a_{ij} = 0$ for $i \neq j$ and $a_{ij} = 1$ for $i = j$ (unit diagonal elements):

$$\mathbf{I} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$
What type of matrix?
A square matrix satisfying $a_{ij} = a_{ji}$ is a symmetric matrix. A matrix whose nonzero elements are confined to a band about the diagonal is a banded matrix, and a matrix containing relatively few nonzero elements is a sparse matrix. [The slide shows a 6 x 6 symmetric matrix (m = n = 6) that is both banded and sparse, with nonzero entries such as 4, -1, -2, 9, 5, 7 and 1 clustered near the diagonal and zeros elsewhere.]

$$\mathbf{L} = \begin{bmatrix} a & 0 & 0 \\ d & b & 0 \\ e & f & c \end{bmatrix} \text{ (lower triangular matrix)} \qquad \mathbf{U} = \begin{bmatrix} a & d & e \\ 0 & b & f \\ 0 & 0 & c \end{bmatrix} \text{ (upper triangular matrix)}$$

Partitioning:
$$\mathbf{A} = [a_{ij}]_{m \times n} = \begin{bmatrix} [b_{ij}]_{l \times p} & [c_{ij}]_{l \times (n-p)} \\ [d_{ij}]_{(m-l) \times p} & [e_{ij}]_{(m-l) \times (n-p)} \end{bmatrix} = \begin{bmatrix} \mathbf{B} & \mathbf{C} \\ \mathbf{D} & \mathbf{E} \end{bmatrix}$$
The sub-matrices B, C, D and E together partition A.
A vector is a simple array of scalar quantities, typically arranged in a vertical column. Hence, the vector can be visualised as a matrix of order $m \times 1$, where the number m is called the dimension of the vector. The scalar entries of a vector are called components of the vector.

$$\mathbf{V} = \{V\}_m = \{v_i\} = \{V\}_{m \times 1} = \begin{Bmatrix} v_1 \\ v_2 \\ v_3 \\ \vdots \\ v_m \end{Bmatrix}$$

What is a vector? Is it a type of matrix?
A row vector is written as $\begin{bmatrix} a_1 & a_2 & a_3 & \cdots & a_n \end{bmatrix}$.
We can visualize a multidimensional linear vector space, $\mathbb{R}^m$, whose dimension m is given by the minimum number of linearly independent vectors (with real components) required to span the space. Vectors are said to span a vector space if the space consists of all possible linear combinations of those vectors. Any set of vectors that are linearly independent and also span the vector space is called a basis of the vector space.
For example,
$$\mathbf{V} = \begin{Bmatrix} 2 \\ 1 \\ 3 \end{Bmatrix}$$
can be visualised in the $\mathbb{R}^3$ vector space as $\vec{V} = 2\vec{i} + \vec{j} + 3\vec{k}$, having a magnitude or length
$$|\mathbf{V}| = \sqrt{\sum_{i=1}^{m} v_i^2} = \sqrt{14}.$$
The unit vectors
$$\begin{Bmatrix} 1 \\ 0 \\ 0 \end{Bmatrix}, \quad \begin{Bmatrix} 0 \\ 1 \\ 0 \end{Bmatrix} \quad \text{and} \quad \begin{Bmatrix} 0 \\ 0 \\ 1 \end{Bmatrix}$$
provide an orthogonal basis in the $\mathbb{R}^3$ vector space.
A set of vectors, $\{V_1, V_2, \ldots, V_n\}$, having the same dimension m, is said to be linearly independent if no linear combination of them (other than the zero combination) results in a zero vector; i.e.,
$$\sum_{i=1}^{n} c_i \mathbf{V}_i = \mathbf{0} \;\implies\; c_1 = c_2 = \cdots = c_n = 0.$$
ELEMENTARY MATRIX OPERATIONS

Scalar Multiplication:
$$\lambda\mathbf{A} = \lambda[a_{ij}]_{m \times n} = [\lambda a_{ij}]_{m \times n} = \mathbf{A}\lambda$$

Matrix Addition:
$$\mathbf{A} + \mathbf{B} = [a_{ij}]_{m \times n} + [b_{ij}]_{m \times n} = [a_{ij} + b_{ij}]_{m \times n}$$
$$\mathbf{A} - \mathbf{B} = [a_{ij}]_{m \times n} + (-1)[b_{ij}]_{m \times n} = [a_{ij} - b_{ij}]_{m \times n}$$
$$\mathbf{A} + \mathbf{B} = \mathbf{B} + \mathbf{A} \qquad \mathbf{A} + (\mathbf{B} + \mathbf{C}) = (\mathbf{A} + \mathbf{B}) + \mathbf{C}$$
Matrix Multiplication:
$$\mathbf{A}\mathbf{B} = \mathbf{C}; \quad \text{i.e.,} \quad [a_{ij}]_{m \times n}[b_{ij}]_{n \times p} = [c_{ij}]_{m \times p}$$
The element $c_{ij}$ is obtained by multiplying the i-th row of A into the j-th column of B:
$$c_{ij} = a_{i1}b_{1j} + a_{i2}b_{2j} + a_{i3}b_{3j} + \cdots + a_{in}b_{nj}$$

$$\mathbf{A}(\mathbf{B} + \mathbf{C}) = \mathbf{A}\mathbf{B} + \mathbf{A}\mathbf{C} \qquad \mathbf{A}(\mathbf{B}\mathbf{C}) = (\mathbf{A}\mathbf{B})\mathbf{C} \qquad (\mathbf{B} + \mathbf{C})\mathbf{A} = \mathbf{B}\mathbf{A} + \mathbf{C}\mathbf{A}$$
$$\mathbf{A}\mathbf{O} = \mathbf{O} \qquad \mathbf{A}\mathbf{I} = \mathbf{A}$$

$$[\mathbf{A}]_{m \times n}[\mathbf{B}]_{n \times p} = [\mathbf{C}]_{m \times p}: \qquad \begin{bmatrix} 2 & -1 \\ 3 & 4 \\ 1 & 2 \end{bmatrix}_{3 \times 2} \begin{bmatrix} 1 & 4 \\ 2 & 0 \end{bmatrix}_{2 \times 2} = \begin{bmatrix} 0 & 8 \\ 11 & 12 \\ 5 & 4 \end{bmatrix}_{3 \times 2}$$

Every column vector of C is a linear combination of the column vectors of the pre-multiplying matrix A:
$$1\begin{Bmatrix} 2 \\ 3 \\ 1 \end{Bmatrix} + 2\begin{Bmatrix} -1 \\ 4 \\ 2 \end{Bmatrix} = \begin{Bmatrix} 0 \\ 11 \\ 5 \end{Bmatrix}$$

Every row vector of C is a linear combination of the row vectors of the post-multiplying matrix B:
$$3\begin{bmatrix} 1 & 4 \end{bmatrix} + 4\begin{bmatrix} 2 & 0 \end{bmatrix} = \begin{bmatrix} 11 & 12 \end{bmatrix}$$

The matrix multiplication operation does not possess the property of commutativity; i.e., in general, $\mathbf{A}\mathbf{B} \neq \mathbf{B}\mathbf{A}$.
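The worked example and the column-combination view can be checked with a short pure-Python sketch (the `matmul` helper is written here for illustration; it is not part of the lecture):

```python
# C = A B, where every column of C is a combination of the columns of A.

def matmul(A, B):
    """Multiply an m x n matrix by an n x p matrix (lists of rows)."""
    n = len(B)
    assert all(len(row) == n for row in A), "inner dimensions must agree"
    return [[sum(A[i][k] * B[k][j] for k in range(n))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[2, -1], [3, 4], [1, 2]]   # 3 x 2, values from the slide
B = [[1, 4], [2, 0]]            # 2 x 2
C = matmul(A, B)                # [[0, 8], [11, 12], [5, 4]]

# Column view: column j of C = sum over k of B[k][j] * (column k of A)
col0 = [1 * A[i][0] + 2 * A[i][1] for i in range(3)]  # [0, 11, 5]
```

Swapping the factors (`matmul(B, A)`) fails the dimension check here, which is one concrete way the non-commutativity of the product shows up.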
TRANSPOSE OF A MATRIX
Transposition is an operation in which the rectangular array of the matrix is rearranged (transposed) such that the order of the matrix changes from $m \times n$ to $n \times m$, with the rows changed into columns, preserving the order. If the original matrix is $\mathbf{A} = [a_{ij}]_{m \times n}$, then the transpose of A, which is denoted as $\mathbf{A}^T$, is given by $\mathbf{A}^T = [a_{ij}]^T = [a_{ji}]_{n \times m}$.
$$(\mathbf{A}^T)^T = \mathbf{A} \qquad (\lambda\mathbf{A})^T = \lambda\mathbf{A}^T \qquad (\mathbf{A} + \mathbf{B})^T = \mathbf{A}^T + \mathbf{B}^T \qquad (\mathbf{A}\mathbf{B})^T = \mathbf{B}^T\mathbf{A}^T$$

The product of the transpose of a matrix with the matrix itself is always a square matrix:
$$[\mathbf{A}^T]_{n \times m}[\mathbf{A}]_{m \times n} = [\mathbf{S}]_{n \times n}$$
and this product is a symmetric matrix ($s_{ji} = s_{ij}$), since
$$\mathbf{S}^T = (\mathbf{A}^T\mathbf{A})^T = \mathbf{A}^T(\mathbf{A}^T)^T = \mathbf{A}^T\mathbf{A} = \mathbf{S}.$$

Example:
$$\begin{bmatrix} 1 & 2 & 3 \\ 2 & 0 & -1 \end{bmatrix} \begin{bmatrix} 1 & 2 \\ 2 & 0 \\ 3 & -1 \end{bmatrix} = \begin{bmatrix} 14 & -1 \\ -1 & 5 \end{bmatrix} \quad \text{(symmetric matrix)}$$

If $\mathbf{A}^T = -\mathbf{A}$ (i.e., $a_{ji} = -a_{ij}$), the matrix is said to be skew-symmetric.
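A quick pure-Python check of the claim that $\mathbf{A}^T\mathbf{A}$ is always square and symmetric, using the slide's numbers (the helper functions are illustrative, not from the lecture):

```python
def transpose(A):
    # rows become columns
    return [list(col) for col in zip(*A)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[1, 2], [2, 0], [3, -1]]      # 3 x 2, values from the slide
S = matmul(transpose(A), A)        # 2 x 2: [[14, -1], [-1, 5]]
assert S == transpose(S)           # symmetric: s_ji = s_ij
```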
The product $\{F\}^T\{D\}$ results in a matrix of order $1 \times 1$, which is nothing but a scalar. Such a product, which is sometimes denoted as $\langle F, D \rangle$, is called the inner product (the dot product in vector algebra involving two- and three-dimensional vectors). This product essentially reflects the projected length of one vector along the direction of the other vector. The magnitude of any vector V, defined as $|\mathbf{V}| = \sqrt{\textstyle\sum_{i=1}^{m} v_i^2}$, may be viewed as the inner product of V with itself, i.e., $\langle V, V \rangle = \mathbf{V}^T\mathbf{V}$.

Orthonormal vectors satisfy
$$\{X_i\}^T\{X_j\} = \delta_{ij} = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{if } i \neq j \end{cases}$$

Commutative property of the inner product: with $\{F\}_{m \times 1} = [k]_{m \times m}\{D\}_{m \times 1}$,
$$\{F\}^T\{D\} = \{D\}^T\{F\}$$
and
$$\langle D, F \rangle = \{D\}^T[k]\{D\} = \left(\{D\}^T[k]\{D\}\right)^T = \{D\}^T[k]^T\{D\}$$
[k] is symmetric!
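The commutative inner product and the relation $\{F\} = [k]\{D\}$ can be illustrated with a small hypothetical symmetric [k] (the numerical values are assumed for illustration only):

```python
# Sketch of <F, D> = {F}^T {D} = {D}^T {F}, with {F} = [k]{D}
# and an assumed symmetric stiffness-like matrix [k].

def matvec(k, D):
    return [sum(kij * dj for kij, dj in zip(row, D)) for row in k]

def inner(u, v):
    return sum(a * b for a, b in zip(u, v))

k = [[4.0, -2.0], [-2.0, 3.0]]   # symmetric: k_ji = k_ij (assumed values)
D = [1.0, 2.0]
F = matvec(k, D)                 # [0.0, 4.0]
assert inner(F, D) == inner(D, F)             # scalar either way
assert inner(D, F) == inner(D, matvec(k, D))  # {D}^T [k] {D}
```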
RANK OF A MATRIX
The rank of the matrix [A] is equal to the number of linearly independent column vectors of the matrix, and this number is identical to the number of linearly independent row vectors. The maximum value of the rank r of any matrix of order $m \times n$ is given by m or n (whichever is lower), and the minimum value is 1.

Linear simultaneous equations:
$$\sum_{j=1}^{n} a_{ij}X_j = c_i \quad (i = 1, 2, \ldots, m) \qquad \Leftrightarrow \qquad [\mathbf{A}]_{m \times n}\{X\}_{n \times 1} = \{C\}_{m \times 1}$$
where [A] is the coefficient matrix.

The subspace in the vector space $\mathbb{R}^m$ containing all linear combinations of the independent column (or row) vectors is called the column space (or row space) of A, and this subspace has a dimension equal to the rank r.

$$\mathbf{A} = \begin{bmatrix} 1 & 2 & 3 \\ 2 & 1 & 3 \\ 3 & 1 & 4 \\ 4 & 2 & 6 \end{bmatrix}$$
Rank of A = 3 or 2 or 1? The third column is the sum of the first two columns, so Rank = 2.

When C = O (homogeneous equations), the set of all possible solutions X is called the null space of A.
Row Reduced Echelon Form
A relatively easy and certain way of determining the rank of a matrix is by reducing the matrix to a row reduced echelon form R through a process of elimination (transforming A as closely as possible to an identity matrix I in the upper left corner):
$$\mathbf{R}_{m \times n} = \begin{bmatrix} \mathbf{I}_{r \times r} & \mathbf{F}_{r \times (n-r)} \\ \mathbf{O}_{(m-r) \times r} & \mathbf{O}_{(m-r) \times (n-r)} \end{bmatrix}$$
where I corresponds to the pivot columns and F to the free-variable columns of the coefficient matrix.

$$\mathbf{A} = \begin{bmatrix} 2 & 4 & 6 \\ 2 & 1 & 3 \\ 3 & 1 & 4 \\ 4 & 2 & 6 \end{bmatrix} \rightarrow \begin{bmatrix} 1 & 2 & 3 \\ 0 & -3 & -3 \\ 0 & -5 & -5 \\ 0 & -6 & -6 \end{bmatrix} \rightarrow \begin{bmatrix} 1 & 2 & 3 \\ 0 & 1 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix} \rightarrow \begin{bmatrix} 1 & 0 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix} = \mathbf{R} = \begin{bmatrix} \mathbf{I} & \mathbf{F} \\ \mathbf{O} & \mathbf{O} \end{bmatrix}$$
The leading 1 in each nonzero row is a pivot. Rank r = 2.
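The elimination to R can be sketched in a few lines of Python, using exact arithmetic via the standard-library `fractions` module; the function is an illustrative Gauss-Jordan sketch, not the textbook's algorithm verbatim:

```python
from fractions import Fraction

def rref(A):
    """Row reduced echelon form and rank of a matrix (list of rows)."""
    A = [[Fraction(x) for x in row] for row in A]
    m, n = len(A), len(A[0])
    pivot_row = 0
    for col in range(n):
        # find a nonzero pivot in this column, at or below pivot_row
        sel = next((r for r in range(pivot_row, m) if A[r][col] != 0), None)
        if sel is None:
            continue                      # free-variable column
        A[pivot_row], A[sel] = A[sel], A[pivot_row]
        piv = A[pivot_row][col]
        A[pivot_row] = [x / piv for x in A[pivot_row]]
        for r in range(m):                # clear the column elsewhere
            if r != pivot_row and A[r][col] != 0:
                f = A[r][col]
                A[r] = [x - f * y for x, y in zip(A[r], A[pivot_row])]
        pivot_row += 1
    return A, pivot_row                   # pivot_row = rank r

A = [[2, 4, 6], [2, 1, 3], [3, 1, 4], [4, 2, 6]]
R, r = rref(A)    # R = [[1,0,1],[0,1,1],[0,0,0],[0,0,0]], r = 2
```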

LINEAR SIMULTANEOUS EQUATIONS

$$\mathbf{A}\mathbf{X} = \mathbf{C}: \quad \begin{bmatrix} 2 & 4 & 6 \\ 2 & 1 & 3 \\ 3 & 1 & 4 \\ 4 & 2 & 6 \end{bmatrix} \begin{Bmatrix} X_1 \\ X_2 \\ X_3 \end{Bmatrix} = \begin{Bmatrix} c_1 \\ c_2 \\ c_3 \\ c_4 \end{Bmatrix}$$

Case 1: r < m and r < n
Reducing AX = C to RX = D:
$$\begin{bmatrix} \mathbf{I}_{r \times r} & \mathbf{F}_{r \times (n-r)} \\ \mathbf{O}_{(m-r) \times r} & \mathbf{O}_{(m-r) \times (n-r)} \end{bmatrix}_{m \times n} \begin{Bmatrix} \{X_{pivot}\}_{r \times 1} \\ \{X_{free}\}_{(n-r) \times 1} \end{Bmatrix}_{n \times 1} = \begin{Bmatrix} \{D_{pivot}\}_{r \times 1} \\ \{D_{zero}\}_{(m-r) \times 1} \end{Bmatrix}_{m \times 1}$$
(pivot variables pair with pivot-row constants; free variables with zero-row constants)
$$\{X_{pivot}\} = \{D_{pivot}\} - \mathbf{F}\{X_{free}\}, \qquad \mathbf{X} = \mathbf{X}_p + \mathbf{X}_n$$
$$\{D_{zero}\} = \{O\} \text{ must be satisfied by } \mathbf{C} \text{ for a feasible solution set.}$$

$$\mathbf{A}\mathbf{X} = \mathbf{C}: \quad \begin{bmatrix} 2 & 4 & 6 \\ 2 & 1 & 3 \\ 3 & 1 & 4 \\ 4 & 2 & 6 \end{bmatrix} \begin{Bmatrix} X_1 \\ X_2 \\ X_3 \end{Bmatrix} = \begin{Bmatrix} c_1 \\ c_2 \\ c_3 \\ c_4 \end{Bmatrix} \quad\rightarrow\quad \mathbf{R}\mathbf{X} = \mathbf{D}: \quad \begin{bmatrix} 1 & 0 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix} \begin{Bmatrix} X_1 \\ X_2 \\ X_3 \end{Bmatrix} = \begin{Bmatrix} (4c_2 - c_1)/6 \\ (c_1 - c_2)/3 \\ c_3 + (c_1 - 10c_2)/6 \\ c_4 - 2c_2 \end{Bmatrix}$$
The two zero-row constants must vanish for a feasible solution:
$$c_3 = \frac{10c_2 - c_1}{6}, \qquad c_4 = 2c_2.$$
particular + null space solution
When C satisfies the feasibility conditions, the system takes the form
$$\begin{bmatrix} 2 & 4 & 6 \\ 2 & 1 & 3 \\ 3 & 1 & 4 \\ 4 & 2 & 6 \end{bmatrix} \begin{Bmatrix} X_1 \\ X_2 \\ X_3 \end{Bmatrix} = \begin{Bmatrix} a \\ b \\ (10b - a)/6 \\ 2b \end{Bmatrix}$$
$$\mathbf{A}\mathbf{X} = \mathbf{C}: \quad \begin{bmatrix} 2 & 4 & 6 \\ 2 & 1 & 3 \\ 3 & 1 & 4 \\ 4 & 2 & 6 \end{bmatrix} \begin{Bmatrix} X_1 \\ X_2 \\ X_3 \end{Bmatrix} = \begin{Bmatrix} 0 \\ 3 \\ 5 \\ 6 \end{Bmatrix} \rightarrow \begin{bmatrix} 1 & 2 & 3 \\ 0 & -3 & -3 \\ 0 & -5 & -5 \\ 0 & -6 & -6 \end{bmatrix}\{X\} = \begin{Bmatrix} 0 \\ 3 \\ 5 \\ 6 \end{Bmatrix} \rightarrow \begin{bmatrix} 1 & 0 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}\{X\} = \begin{Bmatrix} 2 \\ -1 \\ 0 \\ 0 \end{Bmatrix}$$
The zero-row constants vanish, so a feasible solution space exists:
$$\{X_{pivot}\} = \{D_{pivot}\} - \mathbf{F}\{X_{free}\}, \qquad \{D_{zero}\} = \{O\}.$$
Particular solution: the pivot equations give $X_1 = 2 - X_3$ and $X_2 = -1 - X_3$. Let $X_3 = 0$:
$$\begin{Bmatrix} X_1 \\ X_2 \\ X_3 \end{Bmatrix}_p = \begin{Bmatrix} 2 \\ -1 \\ 0 \end{Bmatrix}$$
Null space solution (C = O): let $X_3 = \alpha$:
$$\begin{Bmatrix} X_1 \\ X_2 \\ X_3 \end{Bmatrix}_n = \begin{Bmatrix} -\alpha \\ -\alpha \\ \alpha \end{Bmatrix}$$
Complete solution: $\mathbf{X} = \mathbf{X}_p + \mathbf{X}_n$:
$$\begin{Bmatrix} X_1 \\ X_2 \\ X_3 \end{Bmatrix} = \begin{Bmatrix} 2 - \alpha \\ -1 - \alpha \\ \alpha \end{Bmatrix}$$
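The complete solution above can be verified numerically: $\mathbf{A}\mathbf{X}_p$ must reproduce C, and $\mathbf{A}\mathbf{X}_n$ must be the zero vector (the choice $\alpha = 1$ below is arbitrary):

```python
# Check X = X_p + X_n against the example A X = C from the slide.

def matvec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

A = [[2, 4, 6], [2, 1, 3], [3, 1, 4], [4, 2, 6]]
C = [0, 3, 5, 6]
Xp = [2, -1, 0]                       # particular solution (X3 = 0)
alpha = 1                             # arbitrary null-space multiplier
Xn = [-alpha, -alpha, alpha]          # null-space solution
X = [p + n for p, n in zip(Xp, Xn)]   # complete solution

assert matvec(A, Xp) == C             # particular part reproduces C
assert matvec(A, Xn) == [0, 0, 0, 0]  # null-space part maps to zero
assert matvec(A, X) == C              # so the sum also solves A X = C
```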
Case 2: r = n < m
Coefficient matrix A has a full column rank (r = n), but there are linearly dependent rows (r < m), which also implies that the number of rows exceeds the number of columns (m > n). We have more equations than unknowns, and we need to ensure that the equations are consistent (linearly dependent) for a solution to be possible; i.e., $\{D_{zero}\} = \{O\}$ has to be satisfied.
$$\mathbf{R} = \begin{bmatrix} \mathbf{I} \\ \mathbf{O} \end{bmatrix}_{m \times n} = \begin{bmatrix} \mathbf{I}_{r \times r} \\ \mathbf{O}_{(m-r) \times r} \end{bmatrix}$$
There are no free variables, so the solution, if feasible, is unique:
$$\{X\} = \{D_{pivot}\}, \qquad \{D_{zero}\} = \{O\} \text{ must be satisfied by } \mathbf{C} \text{ for a feasible solution set.}$$

$$[\mathbf{A}\,|\,\mathbf{C}] = \begin{bmatrix} 2 & 4 & 6 & c_1 \\ 2 & 1 & 3 & c_2 \\ 3 & 1 & 4 & c_3 \\ 4 & 2 & 8 & c_4 \end{bmatrix} \rightarrow \begin{bmatrix} 1 & 2 & 3 & c_1/2 \\ 0 & -3 & -3 & c_2 - c_1 \\ 0 & -5 & -5 & c_3 - 3c_1/2 \\ 0 & -6 & -4 & c_4 - 2c_1 \end{bmatrix} \rightarrow \begin{bmatrix} 1 & 0 & 1 & (4c_2 - c_1)/6 \\ 0 & 1 & 1 & (c_1 - c_2)/3 \\ 0 & 0 & 2 & c_4 - 2c_2 \\ 0 & 0 & 0 & c_3 + (c_1 - 10c_2)/6 \end{bmatrix}$$
(rows 3 and 4 interchanged; the constant in the zero row should be zero)

$$\mathbf{A}\mathbf{X} = \mathbf{C}: \quad \begin{bmatrix} 2 & 4 & 6 \\ 2 & 1 & 3 \\ 3 & 1 & 4 \\ 4 & 2 & 8 \end{bmatrix} \begin{Bmatrix} X_1 \\ X_2 \\ X_3 \end{Bmatrix} = \begin{Bmatrix} a \\ b \\ (10b - a)/6 \\ c \end{Bmatrix}$$

Further reduction gives
$$\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix} \begin{Bmatrix} X_1 \\ X_2 \\ X_3 \end{Bmatrix} = \begin{Bmatrix} 2 \\ -1 \\ 0 \\ 0 \end{Bmatrix} \implies \{X\} = \{D_{pivot}\} = \begin{Bmatrix} 2 \\ -1 \\ 0 \end{Bmatrix} \text{ (unique solution)}$$
For a feasible solution space, $c_3 = \dfrac{10c_2 - c_1}{6}$ must be satisfied, whereupon
$$\mathbf{R}\mathbf{X} = \mathbf{D}: \quad \begin{bmatrix} \mathbf{I} \\ \mathbf{O} \end{bmatrix}\{X\} = \begin{Bmatrix} \mathbf{D} \\ \mathbf{O} \end{Bmatrix}$$

$$\begin{bmatrix} 2 & 4 & 6 \\ 2 & 1 & 3 \\ 3 & 1 & 4 \\ 4 & 2 & 8 \end{bmatrix} \begin{Bmatrix} X_1 \\ X_2 \\ X_3 \end{Bmatrix} = \begin{Bmatrix} 0 \\ 3 \\ 5 \\ 6 \end{Bmatrix}$$

Case 3: r = m < n
$$\mathbf{R} = [\mathbf{I}\;\;\mathbf{F}]_{m \times n} = \begin{bmatrix} \mathbf{I}_{r \times r} & \mathbf{F}_{r \times (n-r)} \end{bmatrix}$$
Coefficient matrix A has a full row rank (r = m), but there are linearly dependent columns (r < n), which also implies that the number of columns exceeds the number of rows (n > m). We have more unknowns than equations, which is a situation we encounter in statically indeterminate structures. Owing to the absence of zero row vectors, there is no constraint on the constant vector C, and a solution is certainly possible. However, as there are free variables present, the null space solution has infinite possibilities (as in Case 1) in the complete solution:
$$\{X_{pivot}\} = \{D_{pivot}\} - \mathbf{F}\{X_{free}\}, \qquad \mathbf{X} = \mathbf{X}_p + \mathbf{X}_n$$

$$\mathbf{A}\mathbf{X} = \mathbf{C}: \quad \begin{bmatrix} 1 & 2 & 3 & 4 \\ 2 & 1 & 1 & 2 \\ 3 & 3 & 4 & 8 \end{bmatrix} \begin{Bmatrix} X_1 \\ X_2 \\ X_3 \\ X_4 \end{Bmatrix} = \begin{Bmatrix} c_1 \\ c_2 \\ c_3 \end{Bmatrix} \rightarrow \begin{bmatrix} 1 & 2 & 3 & 4 \\ 0 & -3 & -5 & -6 \\ 0 & -3 & -5 & -4 \end{bmatrix}\{X\} = \begin{Bmatrix} c_1 \\ c_2 - 2c_1 \\ c_3 - 3c_1 \end{Bmatrix}$$
$$\rightarrow \begin{bmatrix} 1 & 0 & -1/3 & 0 \\ 0 & 1 & 5/3 & 2 \\ 0 & 0 & 0 & 2 \end{bmatrix}\{X\} = \begin{Bmatrix} (2c_2 - c_1)/3 \\ (2c_1 - c_2)/3 \\ c_3 - c_1 - c_2 \end{Bmatrix}$$
Interchange the third and fourth columns to preserve the identity matrix on the left:
$$[\mathbf{I}\;\;\tilde{\mathbf{F}}]\{X\} = \{D\}: \quad \begin{bmatrix} 1 & 0 & 0 & -1/3 \\ 0 & 1 & 0 & 5/3 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{Bmatrix} X_1 \\ X_2 \\ X_4 \\ X_3 \end{Bmatrix} = \begin{Bmatrix} (2c_2 - c_1)/3 \\ (5c_1 + 2c_2 - 3c_3)/3 \\ (c_3 - c_1 - c_2)/2 \end{Bmatrix} = \begin{Bmatrix} d_1 \\ d_2 \\ d_3 \end{Bmatrix}$$

For example,
$$\begin{Bmatrix} c_1 \\ c_2 \\ c_3 \end{Bmatrix} = \begin{Bmatrix} 3 \\ 6 \\ 2 \end{Bmatrix} \implies \begin{Bmatrix} d_1 \\ d_2 \\ d_3 \end{Bmatrix} = \begin{Bmatrix} 3 \\ 7 \\ -3.5 \end{Bmatrix}$$
It is evident that infinite solutions are possible.

$$\mathbf{A}\mathbf{X} = \mathbf{C}: \quad \begin{bmatrix} 1 & 2 & 3 & 4 \\ 2 & 1 & 1 & 2 \\ 3 & 3 & 4 & 8 \end{bmatrix} \begin{Bmatrix} X_1 \\ X_2 \\ X_3 \\ X_4 \end{Bmatrix} = \begin{Bmatrix} c_1 \\ c_2 \\ c_3 \end{Bmatrix}, \qquad \{X_{pivot}\} = \{D_{pivot}\} - \mathbf{F}\{X_{free}\}$$
Particular solution: let $X_3 = 0$:
$$\begin{Bmatrix} X_1 \\ X_2 \\ X_3 \\ X_4 \end{Bmatrix}_p = \begin{Bmatrix} 3 \\ 7 \\ 0 \\ -3.5 \end{Bmatrix}$$
Null space solution (D = O): let $X_3 = \alpha$:
$$\begin{Bmatrix} X_1 \\ X_2 \\ X_3 \\ X_4 \end{Bmatrix}_n = \begin{Bmatrix} \alpha/3 \\ -5\alpha/3 \\ \alpha \\ 0 \end{Bmatrix}$$

$$[\mathbf{I}\;\;\tilde{\mathbf{F}}]\{X\} = \{D\}: \quad \begin{bmatrix} 1 & 0 & 0 & -1/3 \\ 0 & 1 & 0 & 5/3 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{Bmatrix} X_1 \\ X_2 \\ X_4 \\ X_3 \end{Bmatrix} = \begin{Bmatrix} 3 \\ 7 \\ -7/2 \end{Bmatrix}$$
Complete solution:
$$\mathbf{X} = \mathbf{X}_p + \mathbf{X}_n = \begin{Bmatrix} 3 \\ 7 \\ 0 \\ -3.5 \end{Bmatrix} + \begin{Bmatrix} \alpha/3 \\ -5\alpha/3 \\ \alpha \\ 0 \end{Bmatrix} = \begin{Bmatrix} 3 + \alpha/3 \\ 7 - 5\alpha/3 \\ \alpha \\ -3.5 \end{Bmatrix}$$
Case 4: r = m = n
In this case, the coefficient matrix A has a full column rank (r = n) as well as a full row rank (r = m), which also implies that the matrix is a square matrix (m = n). Such a matrix is said to be invertible or non-singular.
$$\mathbf{R} = \mathbf{I} \implies \text{unique solution, } \mathbf{X} = \mathbf{D}$$
Elimination Technique for solving AX = C
In the traditional Gauss elimination procedure, it is sufficient to reduce the coefficient matrix A to an upper triangular form U for this purpose, while carrying out the elementary matrix operations on the augmented matrix [A | C]. If A is a square matrix and the equations are consistent, the unique solution can be obtained by back-substitution. However, by going a few steps further, the A matrix can be reduced to the row reduced echelon form R, and the complete solution, if any, can be directly obtained:
$$\mathbf{R} = \begin{bmatrix} \mathbf{I} & \mathbf{F} \\ \mathbf{O} & \mathbf{O} \end{bmatrix}$$
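The Gauss elimination with back-substitution described above can be sketched as follows; the 3 x 3 system at the end is a hypothetical illustration, not an example from the lecture:

```python
from fractions import Fraction

def gauss_solve(A, C):
    """Solve a square, non-singular system A X = C by Gauss elimination."""
    n = len(A)
    M = [[Fraction(x) for x in row] + [Fraction(c)]
         for row, c in zip(A, C)]               # augmented matrix [A | C]
    for k in range(n):                          # forward elimination to U
        piv = next(r for r in range(k, n) if M[r][k] != 0)
        M[k], M[piv] = M[piv], M[k]             # partial row exchange
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            M[r] = [x - f * y for x, y in zip(M[r], M[k])]
    X = [Fraction(0)] * n                       # back-substitution
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * X[j] for j in range(i + 1, n))
        X[i] = (M[i][n] - s) / M[i][i]
    return X

# Hypothetical system: built so the exact solution is (1, 2, 3).
X = gauss_solve([[2, 1, 1], [1, 3, 2], [1, 0, 2]], [7, 13, 7])
```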
MATRIX INVERSION
When the matrix A is square and of full rank, an alternative approach to solving the equations is by the operation of inversion:
$$\mathbf{A}\mathbf{X} = \mathbf{C} \implies \{X\} = [\mathbf{A}]^{-1}\{C\}, \qquad \mathbf{A}\mathbf{A}^{-1} = \mathbf{A}^{-1}\mathbf{A} = \mathbf{I}$$
Basic Properties of the Inverse of a Matrix:
$$(\mathbf{A}^{-1})^{-1} = \mathbf{A} \qquad (\mathbf{A}^T)^{-1} = (\mathbf{A}^{-1})^T \qquad (\lambda\mathbf{A})^{-1} = \frac{1}{\lambda}\mathbf{A}^{-1} \qquad (\mathbf{A}\mathbf{B})^{-1} = \mathbf{B}^{-1}\mathbf{A}^{-1}$$
Determinant of a Matrix
In general, it can be proved that the inverse exists if a scalar property called the determinant of the square matrix A, denoted as det A or |A|, is not equal to zero. For a diagonal matrix, the determinant is given by the product of all the diagonal elements.
Perhaps the simplest way of finding the determinant of any square matrix is by applying the process of elimination, reducing the matrix to an upper triangular form U. The determinant is directly obtainable as the product of the pivot elements in the diagonal (with a negative sign if an odd number of row exchanges is involved).
For a matrix A of order 2 x 2, the determinant is given by:
$$\det \mathbf{A} = \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} = a_{11}a_{22} - a_{21}a_{12}$$
Clearly, a nonzero determinant is possible for a given square matrix A only if there are pivot elements in all the rows of the matrix; i.e., the matrix has to have a full rank for it to be non-singular and invertible.

$$\det \mathbf{A} = \sum_{j=1}^{n} a_{ij}\alpha_{ij} \quad \text{for } i = 1, 2, 3, \ldots, n$$
where $\alpha_{ij}$ is the cofactor of the element $a_{ij}$, whose value is given by the determinant $\Delta_{ij}$ of the $(n-1) \times (n-1)$ matrix obtained by deleting the i-th row and j-th column of the matrix A, with the appropriate sign (positive or negative):
$$\alpha_{ij} = (-1)^{i+j}\Delta_{ij}$$
$$\det \mathbf{A} = a_{11}\alpha_{11} + a_{12}\alpha_{12} + \cdots + a_{1n}\alpha_{1n}$$
For a 3 x 3 matrix:
$$\det \mathbf{A} = a_{11}\begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix} - a_{12}\begin{vmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{vmatrix} + a_{13}\begin{vmatrix} a_{21} & a_{22} \\ a_{31} & a_{32} \end{vmatrix}$$
$$\det(\mathbf{A}^m) = (\det \mathbf{A})^m \qquad \det(\lambda\mathbf{A}) = \lambda^n \det \mathbf{A} \;\text{ for } \mathbf{A}_{n \times n} \qquad \det \mathbf{A}^T = \det \mathbf{A}$$
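The elimination route to the determinant (product of the pivots, with a sign flip for every row exchange) can be sketched as follows; `det` here is an illustrative helper in exact arithmetic:

```python
from fractions import Fraction

def det(A):
    """Determinant via reduction to upper triangular form U."""
    A = [[Fraction(x) for x in row] for row in A]
    n, sign = len(A), 1
    for k in range(n):
        piv = next((r for r in range(k, n) if A[r][k] != 0), None)
        if piv is None:
            return Fraction(0)          # no pivot in this column: singular
        if piv != k:
            A[k], A[piv] = A[piv], A[k]
            sign = -sign                # each row exchange flips the sign
        for r in range(k + 1, n):
            f = A[r][k] / A[k][k]
            A[r] = [x - f * y for x, y in zip(A[r], A[k])]
    prod = Fraction(sign)
    for k in range(n):
        prod *= A[k][k]                 # product of the diagonal pivots
    return prod

assert det([[1, 2], [3, 4]]) == 1 * 4 - 3 * 2       # matches 2 x 2 formula
assert det([[1, 2, 3], [2, 1, 3], [3, 1, 4]]) == 0  # dependent columns
```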
Adjoint Method of Finding the Inverse
In general, for any square matrix of order n x n, provided det A is not zero, it can be shown that
$$\mathbf{A}^{-1} = \frac{1}{\det \mathbf{A}}\tilde{\mathbf{A}}$$
where $\tilde{\mathbf{A}}$ is called the adjoint matrix of A, which is the transpose of a matrix whose elements comprise the cofactors $\alpha_{ij}$ of A. This technique, however, becomes cumbersome when the order of the matrix exceeds three or four.

$$\begin{bmatrix} a & b \\ c & d \end{bmatrix}^{-1} = \frac{1}{ad - bc}\begin{bmatrix} d & -b \\ -c & a \end{bmatrix}$$

$$\begin{bmatrix} a & & \text{sym} \\ b & c & \\ d & e & f \end{bmatrix}^{-1} = \frac{1}{a(cf - e^2) - b(bf - de) + d(be - cd)} \begin{bmatrix} (cf - e^2) & & \text{sym} \\ (de - bf) & (fa - d^2) & \\ (be - cd) & (bd - ae) & (ac - b^2) \end{bmatrix}$$

$$\begin{bmatrix} 7/3 & 2/3 & 3/8 \\ 2/3 & 7/3 & 3/8 \\ 3/8 & 3/8 & 3/8 \end{bmatrix}^{-1} = \frac{1}{1.40625} \begin{bmatrix} 0.734375 & -0.109375 & -0.625 \\ -0.109375 & 0.734375 & -0.625 \\ -0.625 & -0.625 & 5.0 \end{bmatrix}$$
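The 3 x 3 numerical inverse above can be checked with a small adjoint-method sketch in exact arithmetic (the matrix entries 7/3, 2/3 and 3/8 are as read from the slide; the helper functions are illustrative):

```python
from fractions import Fraction as Fr

def minor(A, i, j):
    # submatrix with row i and column j deleted
    return [[x for l, x in enumerate(r) if l != j]
            for k, r in enumerate(A) if k != i]

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def det3(A):
    # cofactor expansion along the first row
    return sum((-1) ** j * A[0][j] * det2(minor(A, 0, j)) for j in range(3))

def inverse3(A):
    d = det3(A)
    # adjoint = transpose of the cofactor matrix, divided by det A
    return [[(-1) ** (i + j) * det2(minor(A, j, i)) / d
             for j in range(3)] for i in range(3)]

A = [[Fr(7, 3), Fr(2, 3), Fr(3, 8)],
     [Fr(2, 3), Fr(7, 3), Fr(3, 8)],
     [Fr(3, 8), Fr(3, 8), Fr(3, 8)]]
Ainv = inverse3(A)
assert det3(A) == Fr(45, 32)          # = 1.40625, as on the slide
I = [[sum(A[i][k] * Ainv[k][j] for k in range(3)) for j in range(3)]
     for i in range(3)]
assert I == [[1, 0, 0], [0, 1, 0], [0, 0, 1]]   # A A^-1 = I exactly
```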
Algorithm-based iterative methods of finding the inverse:
Gauss-Jordan Elimination Method
LDL^T Decomposition Method
Cholesky Decomposition Method

Cramer's rule is suitable for solving a small number of simultaneous equations. It requires the generation of n + 1 determinants, which is cumbersome by algebraic formulation. Elimination-based algorithmic methods are much better suited for computer application.
Cramer's Rule
The solution to a set of consistent equations $[\mathbf{A}]\{X\} = \{C\}$ can be shown to be given by:
$$X_j = \frac{\begin{vmatrix} a_{11} & \cdots & a_{1,j-1} & c_1 & a_{1,j+1} & \cdots & a_{1n} \\ \vdots & & \vdots & \vdots & \vdots & & \vdots \\ a_{n1} & \cdots & a_{n,j-1} & c_n & a_{n,j+1} & \cdots & a_{nn} \end{vmatrix}}{\det \mathbf{A}} \qquad (j = 1, 2, \ldots, n)$$
i.e., the numerator is the determinant of A with its j-th column replaced by the constant vector C.
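Cramer's rule, sketched directly from the formula (the 2 x 2 system at the end is a hypothetical illustration):

```python
from fractions import Fraction

def det(A):
    """Determinant by cofactor expansion along the first row."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j]
               * det([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(len(A)))

def cramer(A, C):
    d = det(A)
    assert d != 0, "Cramer's rule needs a non-singular A"
    X = []
    for j in range(len(A)):
        # A with its j-th column replaced by the constant vector C
        Aj = [row[:j] + [c] + row[j + 1:] for row, c in zip(A, C)]
        X.append(Fraction(det(Aj), d))
    return X

# Hypothetical system: 2x + y = 5, x + 3y = 10  ->  x = 1, y = 3
X = cramer([[2, 1], [1, 3]], [5, 10])
```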
Condition of a Matrix
The stability of iterative solution procedures (for matrix inversion) and the accuracy of the end result depend on the condition of the matrix, which is a measure of its non-singularity. A matrix is said to be ill-conditioned (or near-singular) if its determinant is very small in comparison with the value of its average element, and such a matrix is vulnerable to erroneous estimation of its inverse by iterative solvers.
Stiffness matrices are relatively well-conditioned and have the property of positive definiteness, with diagonal dominance, whereby their inverses can be stably and accurately generated.
The square matrix A of order n is said to be positive definite if, for any arbitrary choice of an n-dimensional vector X, the product $\mathbf{X}^T\mathbf{A}\mathbf{X}$ yields a scalar quantity that is invariably positive:
$$\mathbf{X}^T\mathbf{A}\mathbf{X} > 0$$
All the eigenvalues of such a matrix are real and positive.
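The definition can be illustrated for a small assumed matrix (the [A] values below are hypothetical, not from the lecture; for a symmetric 2 x 2 matrix, $a_{11} > 0$ together with a positive determinant guarantees positive definiteness, so the numerical spot checks must all pass):

```python
# Sketch of the X^T A X > 0 test for a symmetric, diagonally dominant
# 2 x 2 matrix with assumed values.

def quad_form(A, X):
    """Compute the scalar X^T A X."""
    return sum(X[i] * A[i][j] * X[j]
               for i in range(len(X)) for j in range(len(X)))

A = [[4.0, -1.0], [-1.0, 3.0]]   # symmetric, diagonally dominant (assumed)
assert A[0][0] > 0 and A[0][0] * A[1][1] - A[0][1] * A[1][0] > 0

# Spot checks with a few arbitrary nonzero vectors:
for X in ([1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [-2.0, 5.0], [3.0, -4.0]):
    assert quad_form(A, X) > 0
```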