Course notes
September 28, 2015
Cristian Neculaescu
Bucharest University of Economic Studies, Faculty of Cybernetics, Department of
Mathematics, Cybernetics Building (Calea Dorobanți 15-17, sector 1), room 2625
Office hours: to be announced
E-mail address: algecon2011@gmail.com
URL: roedu4you.ro, math4you section
Dedicated to the memory of S. Bach.
2000 Mathematics Subject Classification. Primary 05C38, 15A15; Secondary 05A15, 15A18
Contents

Preface  vi

Part 1. Prerequisites  1

Chapter 1. Vector Spaces and Vector Subspaces  5
1.1. Introductory definitions  5
Terminological Differences  18
Changing the scalar multiplication may change the properties of the vector space  20
1.4. Exercises  25

2.6.3. Examples  107

3.1. Orthogonality  147

Chapter 4. Affine Spaces  183
4.1. Definitions  183
4.1.1. Properties  185

Part 2.  191
Chapter 5. GeoGebra  193
Chapter 6. CaRMetal  195

Part 3. Appendices  197
Chapter 7. Reviews  199
Binary Logic  199
7.2. Sets  203
7.2.2. Relations  205
7.2.3. Functions  207
7.6. Determinants  222

Bibliography  249
Preface
Part 1
Prerequisites
These notes are a continuation of the topics learned at the high school level. In particular, the following
topics are supposed to be known:
Elements of Set Theory
Important Sets of Numbers: N, Z, Q, R, C (and their basic properties)
Elements of Binary Logic
Polynomials
Exponents
Rational and Radical Expressions
Functions
definition, injectivity (one-to-one functions), surjectivity (onto functions), bijective (one-to-one
and onto) and invertible functions.
Elementary functions:
Linear functions
Quadratic functions
Polynomial functions
Rational functions
Exponential functions
Logarithmic functions
Trigonometric and Inverse Trigonometric functions
Graphs of Elementary functions
Complex numbers:
modulus and argument,
trigonometric form,
De Moivre's theorem
nth Roots of Complex Numbers
The Principle of Mathematical Induction
Basic Discrete Mathematics:
Permutations and Combinations
The Binomial Theorem
2D Analytic Geometry
Equation of a Line, slope,
The Circle
The Ellipse
The Hyperbola
The Parabola
Linear and Nonlinear Equations and Inequalities
Basic Linear Algebra:
Matrix Algebra, rank, Determinants (of arbitrary order), the inverse of a matrix
Systems of Linear Equations
Cramer's Rule
Rouché-Capelli [Kronecker-Capelli] Theorem
Basic Abstract Algebra:
operation (law),
closure,
monoids,
groups,
rings,
fields,
morphisms.
0.0.1. Exercise. Show that in a monoid with neutral element the neutral element is unique.
0.0.2. Solution. Consider a monoid (M, ∗) and suppose there are two neutral elements with respect to
"∗", denoted by e and f. Then (by the definition of the neutral element):
(1) ∀x ∈ M, x ∗ e = e ∗ x = x, and
(2) ∀x ∈ M, x ∗ f = f ∗ x = x.
Taking x = f in (1) and x = e in (2), we get f = f ∗ e = e, so the neutral element is unique.
Similarly, the symmetric element, when it exists, is unique: suppose x′ ∈ M has two symmetric elements x″ and x‴, so that x′ ∗ x″ = x″ ∗ x′ = e and x′ ∗ x‴ = x‴ ∗ x′ = e.
Then x″ = x″ ∗ e = x″ ∗ (x′ ∗ x‴) = (x″ ∗ x′) ∗ x‴ = e ∗ x‴ = x‴. So the symmetric element is unique.
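The uniqueness argument can be spot-checked on a concrete monoid; the sketch below (mine, not part of the notes) uses ({0, …, 5}, · mod 6) and verifies that it has exactly one neutral element, and that any two neutrals would coincide via e = e ∗ f = f.

```python
# Brute-force check of neutral-element uniqueness in a concrete monoid:
# ({0,...,5}, multiplication mod 6).
M = range(6)
op = lambda a, b: (a * b) % 6

# A neutral element e satisfies e*x = x = x*e for every x in M.
neutrals = [e for e in M if all(op(e, x) == x == op(x, e) for x in M)]
print(neutrals)  # [1] -- exactly one neutral element

# The proof's argument: for any two neutrals e, f we get e = e*f = f.
for e in neutrals:
    for f in neutrals:
        assert op(e, f) == f and op(e, f) == e  # hence e == f
```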
CHAPTER 1

Vector Spaces and Vector Subspaces

1.1. Introductory definitions

1.1.1. Definition. Consider a set of scalars K, together with scalar addition +_K(·, ·) : K × K → K and scalar multiplication ·_K(·, ·) : K × K → K, and a set of vectors V (1), together with vector addition +_V(·, ·) : V × V → V. The function ·(·, ·) : K × V → V is called the multiplication of vectors with scalars and will be denoted by "·". For each fixed α ∈ K, the partial operation V ∋ x ↦ α · x ∈ V is the multiplication of vectors by the scalar α [changing this operation may lead to changing the specifics of the vector space; see the second subsection].
The pair (V, K) (together with the above operations) is called a vector space if the following conditions are met:
(1) (K, +_K, ·_K) is a commutative field;
(2) (V, +_V) is an Abelian group;
(3) for all α, β ∈ K and all x, y ∈ V:
(a) (α +_K β) · x = α · x +_V β · x (distributivity over scalar addition);
(b) (α ·_K β) · x = α · (β · x) (mixed associativity);
(c) α · (x +_V y) = α · x +_V α · y (distributivity over vector addition);
(d) 1_K · x = x.
(1) In the strict sense, the elements of V are position vectors. The names "vector", "position vector", "point" should be used with
care. At the end of this section a subsection on this topic will be included.
1.1.2. Remark. The distinction between different operations will be made from context:
The element 0_V is the neutral element with respect to vector addition and it will be denoted by 0;
The element 0_K is the neutral element with respect to scalar addition and it will be denoted by 0;
The element 1_K is the neutral element with respect to scalar multiplication and it will be denoted by 1.
Notation conventions:
vectors will be denoted by small Latin letters (a, b, c, u, v, w, x, y, z, etc.);
scalars will be denoted by small Greek letters (α, β, γ, δ, etc.);
sets will be denoted by "doubled" (blackboard bold font) capital letters (A, B, C, D, K, L, M, N, R, X, V, etc.);
all the above may have subscripts (x_0, α_1, etc.).
1.1.3. Exercise. On the markets for wheat and rye the demand and supply functions are (p_w and p_r denote the prices of wheat and rye):
D_w = 4 − 10p_w + 7p_r, S_w = 7 + p_w − p_r (for wheat),
D_r = 3 + 7p_w − 5p_r, S_r = −27 − p_w + 2p_r (for rye).
1. Find the equilibrium prices (the prices which equate demand and supply for both goods).
A tax t_w per bushel is imposed on wheat producers and a tax t_r per bushel is imposed on rye producers.
2. Find the new prices as functions of the taxes.
3. Find the increase of prices due to the taxes.
4. Show that a tax on wheat alone reduces both prices.
5. Show that a tax on rye alone increases both prices, with the increase in the rye price being greater
than the tax on rye.
1.1.4. Solution. Equilibrium takes place when demand meets supply:
D_w = S_w and D_r = S_r ⟺ 4 − 10p_w + 7p_r = 7 + p_w − p_r and 3 + 7p_w − 5p_r = −27 − p_w + 2p_r,
with solution p_w^0 = 219/13 ≈ 16.84615385 and p_r^0 = 306/13 ≈ 23.53846154.
So 1. the equilibrium prices are p_w^0 = 219/13 (for wheat) and p_r^0 = 306/13 (for rye).
When taxes are imposed on producers, the supply of both goods will take place at prices lowered by the taxes:
the demand remains unchanged: D_w = 4 − 10p_w + 7p_r and D_r = 3 + 7p_w − 5p_r,
while the supply takes place at prices lowered by the taxes: S_w = 7 + (p_w − t_w) − (p_r − t_r) and S_r = −27 − (p_w − t_w) + 2·(p_r − t_r).
Equilibrium again happens when demand meets supply:
D_w = S_w and D_r = S_r ⟺ 4 − 10p_w + 7p_r = 7 + (p_w − t_w) − (p_r − t_r) and 3 + 7p_w − 5p_r = −27 − (p_w − t_w) + 2·(p_r − t_r),
with solution:
p_w = (9/13)·t_r − (1/13)·t_w + 219/13,
p_r = (14/13)·t_r − (3/13)·t_w + 306/13.
2. The new prices as functions of the taxes, in vector form:
(p_w, p_r) = (219/13, 306/13) + t_r·(9/13, 14/13) − t_w·(1/13, 3/13).
3. The increase of prices due to the taxes is:
(p_w − p_w^0, p_r − p_r^0) = t_r·(9/13, 14/13) − t_w·(1/13, 3/13).
4. A tax on wheat alone means that t_r = 0; the prices in this situation are
(p_w, p_r) = (219/13, 306/13) − t_w·(1/13, 3/13),
so both prices are lower than the initial equilibrium prices.
5. A tax on rye alone means that t_w = 0; the prices in this situation are
(p_w, p_r) = (219/13, 306/13) + t_r·(9/13, 14/13),
so both prices increase, and the increase in the rye price, (14/13)·t_r, is greater than the tax t_r.
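The computation above can be redone numerically; the following sketch (mine, not part of the notes) solves the 2×2 equilibrium system by Cramer's rule in exact rational arithmetic and spot-checks the comparative statics of points 4 and 5.

```python
from fractions import Fraction as F

# Equilibrium of the two markets, in exact rational arithmetic.
# Demand/supply with producer taxes tw, tr reduce to the linear system:
#   wheat: 4 - 10 pw + 7 pr = 7 + (pw - tw) - (pr - tr)   =>  11 pw - 8 pr = -3 + tw - tr
#   rye:   3 + 7 pw - 5 pr = -27 - (pw - tw) + 2 (pr - tr) =>   8 pw - 7 pr = -30 + tw - 2 tr
def prices(tw, tr):
    b1, b2 = -3 + tw - tr, -30 + tw - 2 * tr
    det = 11 * (-7) - (-8) * 8          # = -13
    pw = F(b1 * (-7) - b2 * (-8), det)  # Cramer's rule
    pr = F(11 * b2 - 8 * b1, det)
    return pw, pr

pw0, pr0 = prices(0, 0)
print(pw0, pr0)  # 219/13 306/13 -- the equilibrium prices

# 4. A tax on wheat alone reduces both prices:
pw, pr = prices(1, 0)
assert pw < pw0 and pr < pr0
# 5. A tax on rye alone increases both prices, the rye price by more than the tax:
pw, pr = prices(0, 1)
assert pw > pw0 and pr > pr0 and pr - pr0 > 1
```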
1.1.5. Example. A farmer has 45 ha to grow wheat and rye. She may sell a maximum amount of 140
tons of wheat and 120 tons of rye. She produces 5 tons of wheat and 4 tons of rye for each hectare, and
she may sell the production for 30 €/ton (wheat) and 50 €/ton (rye). She needs 6 labor hours (wheat)
and 10 labor hours (rye) for each hectare to harvest the crop, for which she pays 10 €/lh and of which no
more than 350 lh are available. Find the maximum profit she may obtain and the necessary strategy for obtaining it.
Denote by:
x_w = hectares used for wheat,
x_r = hectares used for rye.
The profit is the revenue minus the labor cost:
30·5·x_w + 50·4·x_r − 10·(6·x_w + 10·x_r) = 150x_w + 200x_r − 60x_w − 100x_r = 90x_w + 100x_r,
and the constraints are:
x_w + x_r ≤ 45 (available land),
5x_w ≤ 140 (wheat sales limit),
4x_r ≤ 120 (rye sales limit),
6x_w + 10x_r ≤ 350 (available labor),
x_w, x_r ≥ 0,
⇒ the problem:
maximize: 90x_w + 100x_r
subject to:
x_w + x_r ≤ 45
x_w ≤ 28
x_r ≤ 30
3x_w + 5x_r ≤ 175
x_w, x_r ≥ 0
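Since the feasible set of this linear program is a polygon, the maximum is attained at a vertex. The sketch below (illustrative, not from the notes) enumerates the vertices by intersecting pairs of constraint boundaries; it finds the maximum profit 4250 € for the strategy x_w = 25 ha of wheat and x_r = 20 ha of rye.

```python
from fractions import Fraction as F
from itertools import combinations

# Constraints in the form a*xw + b*xr <= c (xw >= 0, xr >= 0 rewritten
# as -xw <= 0, -xr <= 0).
cons = [
    (F(1), F(1), F(45)),    # land: xw + xr <= 45
    (F(1), F(0), F(28)),    # wheat sales: xw <= 28
    (F(0), F(1), F(30)),    # rye sales: xr <= 30
    (F(3), F(5), F(175)),   # labor: 3*xw + 5*xr <= 175
    (F(-1), F(0), F(0)),    # xw >= 0
    (F(0), F(-1), F(0)),    # xr >= 0
]

def profit(xw, xr):
    return 90 * xw + 100 * xr

best = None
# The maximum of a linear program sits at a vertex of the feasible polygon,
# i.e. at the intersection of two constraint boundaries.
for (a1, b1, c1), (a2, b2, c2) in combinations(cons, 2):
    det = a1 * b2 - a2 * b1
    if det == 0:
        continue  # parallel boundaries: no intersection point
    xw = (c1 * b2 - c2 * b1) / det
    xr = (a1 * c2 - a2 * c1) / det
    if all(a * xw + b * xr <= c for a, b, c in cons):  # feasible vertex?
        if best is None or profit(xw, xr) > profit(*best):
            best = (xw, xr)

print(best, profit(*best))  # the optimal strategy and the maximal profit
```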
1.1.6. Example. Consider the vector space (R^2, R) (the 2D Euclidean plane); within this environment, a vector is identified by the position vector of the corresponding point; for example, the vector (1, 2) is represented graphically by the position vector with the origin at the point O(0, 0) and with the edge (or terminal point) at the point P_1(1, 2). Some operations:
the vectors: v_1 = (3, 1), v_2 = (1, 2), v_1, v_2 ∈ R^2;
the sum: v_1 + v_2 = (3, 1) + (1, 2) = (4, 3) = v_3;
the opposite vector for v_2 is −v_2 = v_4 = (−1, −2);
the multiplication of a vector with a scalar: v_6 = 2.6·v_1 = (7.8, 2.6);
the subtraction of two vectors is the addition with the opposite of the second vector: v_1 − v_2 = v_1 + (−v_2) = (3, 1) − (1, 2) = (2, −1) = v_5.
1.1.7. Example. Consider the vector space (R^3, R) (the 3D Euclidean space); within this environment, a vector is identified by the position vector of the corresponding point; for example, the vector (1, 2, 1) is represented by the position vector starting at O(0, 0, 0) and ending at P_1(1, 2, 1). Some operations:
the vectors: v_1 = (1, 2, 1), v_2 = (3, 1, 2), v_1, v_2 ∈ R^3;
the sum: v_1 + v_2 = (1, 2, 1) + (3, 1, 2) = (4, 3, 3) = v_3;
the opposite vector for v_2 is −v_2 = v_4 = (−3, −1, −2);
the multiplication of a vector with a scalar: v_6 = 2.6·v_1;
the subtraction: v_1 − v_2 = v_1 + (−v_2) = (1, 2, 1) − (3, 1, 2) = (−2, 1, −1) = v_5.
1.1.8. Remark. There are various similarities and differences between 2D and 3D:
The points and the position vectors are represented by ordered lists of numbers.
There is a certain ambiguity regarding the notions "point", "vector", "position vector". In order
to clear these ambiguities we would have to cover another chapter, named "Affine Geometry" [see, for
example, [26]]. For now it is enough to observe that in an environment called "vector space",
a "vector" which geometrically would have as origin some point other than the origin of the
coordinate system simply doesn't exist.
Ambiguities may also be found not only in common language, but also in scientific language: for example, the expression "x = 0" in R^1 means the point with coordinate (0), in R^2 it
means a line (the vertical axis Oy), in R^3 it means a plane (the yOz plane), and so on.
A 2D line may be represented as "the set of all the solutions (x_0, y_0) of the equation ax + by + c = 0"
[with a, b, c not all zero]. This representation may be considered convenient for 2D, but in a
3D environment the similar representation "the set of all the solutions (x_0, y_0, z_0) of the equation
ax + by + cz + d = 0" [with a, b, c, d not all zero] represents a plane and not a line; a line
could be viewed as an intersection of two planes (algebraically, as the set of all solutions of
a system of two equations with three variables) and this could be generalized to n dimensions
(a line as the set of all solutions of a suitable system of n − 1 equations with n variables), so this representation is not
convenient anymore.
An alternative convenient representation is the parametric representation:
Consider the points P_1 and P_3 with the corresponding position vectors v_1 and v_3. The line
between P_1 and P_3 is the set of all points corresponding to the position vectors
v(α) = (1 − α)·v_1 + α·v_3 = v_1 + α·(v_3 − v_1), α ∈ R.
Consider for example the points P_1(1, 2) and P_3(4, 3); the line between them has the equation:
(x − 1)/(4 − 1) = (y − 2)/(3 − 2) ⇒ x − 1 = 3·(y − 2) ⇒ x − 3y + 5 = 0;
observe that "x − 3y + 5 = 0" is a linear system with one equation and two variables, with
the solution x = 3t − 5, y = t, t ∈ R.
The position vectors for the points are v_1 = (1, 2) and v_3 = (4, 3);
the points of the line P_1P_3 are exactly those points with position vectors given by
v(α) = v_1 + α·(v_3 − v_1) [= (1 − α)·v_1 + α·v_3] = (1, 2) + α·((4, 3) − (1, 2)) = (3α + 1, α + 2), α ∈ R,
which is just another way to describe the general solution x = 3t − 5, y = t, t ∈ R:
t = α + 2 ⇒ y = t = α + 2, x = 3t − 5 = 3·(α + 2) − 5 = 3α + 1.
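The equivalence of the two representations can be checked mechanically: every parametric point must satisfy the implicit equation. A minimal sketch (mine, not from the notes):

```python
def v(alpha):
    # Parametric point on the line P1P3: v(alpha) = v1 + alpha*(v3 - v1).
    v1, v3 = (1, 2), (4, 3)
    return (v1[0] + alpha * (v3[0] - v1[0]), v1[1] + alpha * (v3[1] - v1[1]))

# Every parametric point satisfies the implicit equation x - 3y + 5 = 0.
for alpha in [-2, -1, 0, 0.5, 1, 3]:
    x, y = v(alpha)
    assert x - 3 * y + 5 == 0

print(v(0), v(1))  # (1, 2) (4, 3) -- the endpoints P1 and P3
```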
[Figure: the line P_1P_3 and several of its points, e.g. v(1) = v_1 + (v_3 − v_1), v(−1/3) = (P_1P_3) ∩ Oy and v(−2) = (P_1P_3) ∩ Ox.]
1.1.9. Example. Consider the set R^n = {(x_1, x_2, …, x_n); x_i ∈ R, i = 1, n}.
The set R^n is a real vector space with respect to the "componentwise operations":
Addition: ∀(x_1, x_2, …, x_n), (y_1, y_2, …, y_n) ∈ R^n,
(x_1, x_2, …, x_n) + (y_1, y_2, …, y_n) := (x_1 + y_1, x_2 + y_2, …, x_n + y_n)
(the "+" sign from the left-hand side refers to R^n addition, while the "+" signs from the right-hand side
refer to R addition; we use the same graphical sign for different operations and the distinction
should be made from the context).
Multiplication of a vector by a scalar: ∀α ∈ R,
α·(x_1, x_2, …, x_n) := (α·x_1, α·x_2, …, α·x_n).
(R, +, ·) is a commutative field; (R^n, +) is an Abelian group (componentwise addition is associative, commutative, has a neutral element and each element has a symmetric, because of the similar properties of
R addition); the neutral element is 0_{R^n} = (0, 0, …, 0) and the symmetric of (x_1, x_2, …, x_n) is
(−x_1, −x_2, …, −x_n) (the opposite vector). Moreover, when the operations meet, we have the distributivity
properties:
(α + β)·(x_1, x_2, …, x_n) = ((α + β)·x_1, (α + β)·x_2, …, (α + β)·x_n)
= (α·x_1 + β·x_1, α·x_2 + β·x_2, …, α·x_n + β·x_n)
= (α·x_1, α·x_2, …, α·x_n) + (β·x_1, β·x_2, …, β·x_n)
= α·(x_1, x_2, …, x_n) + β·(x_1, x_2, …, x_n);
α·((x_1, x_2, …, x_n) + (y_1, y_2, …, y_n)) = α·(x_1 + y_1, x_2 + y_2, …, x_n + y_n)
= (α·(x_1 + y_1), α·(x_2 + y_2), …, α·(x_n + y_n))
= (α·x_1 + α·y_1, α·x_2 + α·y_2, …, α·x_n + α·y_n)
= α·(x_1, x_2, …, x_n) + α·(y_1, y_2, …, y_n);
and the mixed associativity:
(α·β)·(x_1, x_2, …, x_n) = ((α·β)·x_1, (α·β)·x_2, …, (α·β)·x_n) = (α·(β·x_1), α·(β·x_2), …, α·(β·x_n)) = α·(β·(x_1, x_2, …, x_n)).
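The componentwise operations and the axioms above can be modelled directly with tuples; the sketch below (mine, not part of the notes) spot-checks the axioms on sample vectors.

```python
# Componentwise operations on R^n, modelled with Python tuples.
def add(x, y):
    return tuple(xi + yi for xi, yi in zip(x, y))

def smul(a, x):
    return tuple(a * xi for xi in x)

x, y = (1, 2, -1, 3), (4, 0, 2, -2)
a, b = 3, -5

# Spot-check the distributivity and mixed-associativity axioms:
assert smul(a + b, x) == add(smul(a, x), smul(b, x))      # (a+b)x = ax + bx
assert smul(a, add(x, y)) == add(smul(a, x), smul(a, y))  # a(x+y) = ax + ay
assert smul(a * b, x) == smul(a, smul(b, x))              # (ab)x = a(bx)
assert add(x, smul(-1, x)) == (0, 0, 0, 0)                # x + (-x) = 0
print("all axioms hold on the sample vectors")
```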
1.1.10. Example (Vector spaces over finite fields). The last example mentions a special topic which is
the starting point for Coding Theory and uses Chapters 3 and 4 from [27]. Theorem T. 3.3.3, page 26 of [27]
proves that for each prime p and for each positive integer n there is a unique finite field with p^n elements
(and characteristic p), denoted by F_{p^n}. If ∅ ≠ V is such that (V, F_{p^n}) is a vector space, then it is a vector
space over a finite field, which is a framework used in Coding Theory and Cryptography. The finite
field may be, for example, (Z_2, +_2, ·_2) (Z_2 = {0, 1} and the laws are addition and multiplication mod 2)
and the vector space may be (Z_2^k, Z_2), which is the ambient vector space for the linear codes of length k over Z_2.
Definition. (Vector subspace) Consider a vector space (V, K) and S ⊆ V a subset of vectors. S is called a vector subspace of V when S is closed with respect to the two operations:
(1) ∀x, y ∈ S, x + y ∈ S;
(2) ∀α ∈ K, ∀x ∈ S, α·x ∈ S.
1.1.14. Example. The set S = {(x_1, x_2, x_3) ∈ R^3; x_2 = 0} is a vector subspace of (R^3, R).
Consider x, y ∈ S ⇒ x = (x_1, 0, x_3) and y = (y_1, 0, y_3);
we have x + y = (x_1, 0, x_3) + (y_1, 0, y_3) = (x_1 + y_1, 0, x_3 + y_3) ∈ S,
and for α ∈ R, α·x = α·(x_1, 0, x_3) = (α·x_1, 0, α·x_3) ∈ S.
From the definition of subspace it follows that S is a subspace of (R^3, R). The set may be described in
the following way:
(x_1, 0, x_3) = (x_1, 0, 0) + (0, 0, x_3) = x_1·(1, 0, 0) + x_3·(0, 0, 1), so S = {α·(1, 0, 0) + β·(0, 0, 1); α, β ∈ R}.
1.1.15. Definition. (Linear combination) For p ∈ N, i = 1, p, x_i ∈ V and α_i ∈ K, the vector x = Σ_{i=1}^{p} α_i·x_i
is called the linear combination of the vectors x_i with the scalars α_i.
If, moreover, the scalars field is K = R, with α_i ≥ 0, ∀i = 1, p, and Σ_{i=1}^{p} α_i = 1, the vector x is called a convex combination of the vectors x_i.
1.1.16. Example. 2·(1, 0, 0) + 3·(0, 1, 0) is a linear combination in R^3; its value is (2, 3, 0).
Remark: in matrix form, 2·(1, 0, 0)^T + 3·(0, 1, 0)^T = (2, 3, 0)^T.
Definition. (Linear covering) For a set of vectors A ⊆ V, the set span_K(A) of all the possible linear combinations with vectors from A is called the linear covering of A (or the set spanned
by A) (or the set generated by A).
1.1.20. Example. In (R_2[X], R), for A = {1, X},
span_R A = {a·1 + b·X; a, b ∈ R}
is the set of all linear combinations with the polynomials 1 and X, and it is the set R_1[X].
1.1.21. Remark. The set span_K(A) depends on the field K of scalars: R^3 is a vector space over each of
the fields R and Q, but the vector spaces (R^3, R) and (R^3, Q) behave differently; for example, the same set of
vectors may have different linear coverings over the two fields.
Consider a finite set of vectors x_i, i = 1, p, and the vector equation Σ_{i=1}^{p} α_i·x_i = 0 (with the unknown scalars α_i).
The equation always has the solution α_i = 0, i = 1, p (the null solution). Depending on the set of vectors, the null solution may be unique or
not. The following notions will distinguish between these two cases.
1.1.23. Definition. (Linear dependence and independence) A finite set of vectors x_i, i = 1, p is called
linear dependent if at least one vector is a linear combination of the other vectors. The situation where
the set is not linear dependent is called linear independent.
x_i, i = 1, p linear dependent: ∃i_0 ∈ {1, …, p}, ∃α_i ∈ K, i ∈ {1, …, p} ∖ {i_0}, such that x_{i_0} = Σ_{i≠i_0} α_i·x_i.
Equivalent: ∃i_0 ∈ {1, …, p} such that x_{i_0} ∈ span_K({x_i; i = 1, p} ∖ {x_{i_0}}).
Equivalent: the vector equation Σ_{i=1}^{p} α_i·x_i = 0 also has nonnull solutions.
1.1.24. Example. The set of R^4 vectors {(1, 2, 2, 1), (1, 3, 4, 0), (2, 5, 6, 1)} is linear dependent because
(1, 2, 2, 1) + (1, 3, 4, 0) = (2, 5, 6, 1).
1.1.25. Example. The set of R^4 vectors {(1, 2, 2, 1), (1, 3, 4, 0), (3, 5, 2, 1)} is linear independent, because the vector equation
α_1·(1, 2, 2, 1) + α_2·(1, 3, 4, 0) + α_3·(3, 5, 2, 1) = (0, 0, 0, 0)
leads to the linear homogeneous system
α_1 + α_2 + 3α_3 = 0
2α_1 + 3α_2 + 5α_3 = 0
2α_1 + 4α_2 + 2α_3 = 0
α_1 + 0·α_2 + α_3 = 0
which has only the null solution.
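Both kinds of examples can be checked mechanically: a set of vectors is linear independent exactly when the matrix having them as rows has rank equal to the number of vectors. A small Gaussian-elimination sketch (mine, not from the notes); the dependent triple used below is (1, 2, 2, 1), (1, 3, 4, 0) and their sum (2, 5, 6, 1).

```python
from fractions import Fraction

# Rank by Gaussian elimination over the rationals: the vectors (rows) are
# linear independent iff the rank equals the number of vectors.
def rank(rows):
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for col in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue  # no pivot in this column
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

dep = [(1, 2, 2, 1), (1, 3, 4, 0), (2, 5, 6, 1)]    # third row = sum of first two
indep = [(1, 2, 2, 1), (1, 3, 4, 0), (3, 5, 2, 1)]  # the independent triple
print(rank(dep), rank(indep))  # 2 3
```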
1.1.26. Remark. (Equivalent definitions for linear independence) A finite set of vectors x_i, i = 1, p is
linear independent if and only if one of the following statements is true:
(1) The vector equation Σ_{i=1}^{p} α_i·x_i = 0 has only the null solution (the null vector is a linear combination
of the vectors x_i only with all the scalars null);
(2) ∀α_i ∈ K, i = 1, p: ((∃i_0 ∈ {1, …, p}, α_{i_0} ≠ 0) ⇒ Σ_{i=1}^{p} α_i·x_i ≠ 0).
Proof. The two statements are connected by the result from Logic which states that (p → q) ⟺ (¬q → ¬p). □
Example. Study, depending on the parameter m ∈ R, the linear (in)dependence of the vectors v_1(m) = (m, 1, 1), v_2(m) = (1, m, 1), v_3(m) = (1, 1, m) in (R^3, R).
Step 1: Consider the vector equation α_1·v_1(m) + α_2·v_2(m) + α_3·v_3(m) = 0, that is,
α_1·(m, 1, 1) + α_2·(1, m, 1) + α_3·(1, 1, m) = (0, 0, 0),
which becomes (m·α_1 + α_2 + α_3, α_1 + m·α_2 + α_3, α_1 + α_2 + m·α_3) = (0, 0, 0).
We get the linear homogeneous system:
m·α_1 + α_2 + α_3 = 0
α_1 + m·α_2 + α_3 = 0
α_1 + α_2 + m·α_3 = 0
Step 2: Solve the system and obtain the complete solution (dependent on the parameter m):
S(m) = {(0, 0, 0)} for m ∈ R ∖ {−2, 1};
S(m) = {(−a − b, a, b); a, b ∈ R} for m = 1;
S(m) = {(a, a, a); a ∈ R} for m = −2.
Step 3: Conclusion:
for m ∈ R ∖ {−2, 1} the vector set is linear independent;
for m = −2 the vector set is linear dependent and a linear dependency is v_1(−2) + v_2(−2) + v_3(−2) = 0;
for m = 1 the vector set is linear dependent and a linear dependency is −2·v_1(1) + v_2(1) + v_3(1) = 0.
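The two critical values of m can also be read off the determinant of the matrix having the three vectors as rows: det = m^3 − 3m + 2 = (m − 1)^2·(m + 2), which vanishes exactly at m = 1 and m = −2. A quick check (mine, not from the notes):

```python
# The vectors (m,1,1), (1,m,1), (1,1,m) are linear independent exactly when
# det [[m,1,1],[1,m,1],[1,1,m]] is nonzero.
def det3(m):
    # Expansion along the first row.
    return m * (m * m - 1) - (m - 1) + (1 - m)

# det3(m) = m^3 - 3m + 2 = (m - 1)^2 * (m + 2):
for m in [-3, -2, -1, 0, 1, 2, 3]:
    assert det3(m) == m**3 - 3*m + 2 == (m - 1)**2 * (m + 2)

print([m for m in range(-5, 6) if det3(m) == 0])  # the dependent cases: [-2, 1]
```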
1.1.29. Definition. (Set of generators) Consider (V, K) and the vector sets X ⊆ V and {x_i; i = 1, p} ⊆ V. The set {x_i; i = 1, p} generates X when X ⊆ span{x_i; i = 1, p}.
If the set X is not specified, we consider that X = V (the set generates the whole vector space).
1.1.30. Example. In the vector space (R_2[X], R), for A = {1, X},
span_R A = {a·1 + b·X; a, b ∈ R}
is the set of all linear combinations with the polynomials 1 and X, so A generates the subspace R_1[X] and not the whole set R_2[X].
Terminological Differences. The terms "vector", "position vector" or "point", although usually
used interchangeably to designate an element of a vector space, in the strict sense have different meanings.
The term "point" refers to an element of the set V and designates a place in space. The set V is
regarded as an affine space and the only accepted operation with points is subtraction, which gives a
vector.
The term "vector" refers to an object defined by two points, the first one understood as the origin of
the vector and the other one understood as the edge of the vector.
The term "position vector" refers to those vectors for which the origin is the origin (the null point) of
the space, and this is the term most suited for vector spaces.
1.1.31. Example. The points P = (2, 5) and Q = (6, 2) are identified with the position vectors OP and
OQ. The operation P − Q (an operation with points) gives not the vector QP (an object which in vector spaces does not exist) but the position vector OR, with R = P − Q = (−4, 3).
Although the vectors QP and OR may be seen (in other settings) as congruent, they have different
origins and only one of them is a position vector (namely OR); the other one, QP, is not an element of
the vector space.
Changing the scalar multiplication may change the properties of the vector space. Consider
the set R(X) of all fractions of polynomials p(X)/q(X) with real coefficients.
The set R(X) is a field together with the usual addition and multiplication of fractions of polynomials.
The structure (R(X), R(X)) is a vector space over itself (and in this case the scalar multiplication is
the usual multiplication of fractions of polynomials), with dimension 1 (see Section 1.5 for dimension).
Consider an alternate "scalar multiplication" ⊙ : R(X) × R(X) → R(X), defined by
(α(X), v(X)) ↦ α(X) ⊙ v(X) = α(X²)·v(X).
Because (α + β)(X²) = α(X²) + β(X²) and (α·β)(X²) = α(X²)·β(X²), the structure (R(X), R(X), +, ⊙) is again a vector space.
Each polynomial splits into its even and odd parts:
p(X) = Σ_{k=0}^{n} a_k·X^k = Σ_{i=0}^{[n/2]} a_{2i}·X^{2i} + Σ_{j=0}^{[(n−1)/2]} a_{2j+1}·X^{2j+1} = p_1(X²) + p_2(X²)·X,
where:
p_1(X) = Σ_{i=0}^{[n/2]} a_{2i}·X^i, p_2(X) = Σ_{j=0}^{[(n−1)/2]} a_{2j+1}·X^j.
Then, amplifying with the "conjugate" q_1(X²) − q_2(X²)·X:
p(X)/q(X) = (p_1(X²) + p_2(X²)·X) / (q_1(X²) + q_2(X²)·X)
= ((p_1(X²) + p_2(X²)·X)·(q_1(X²) − q_2(X²)·X)) / (q_1²(X²) − q_2²(X²)·X²)
= (p_1(X²)·q_1(X²) − p_2(X²)·q_2(X²)·X²) / (q_1²(X²) − q_2²(X²)·X²)
+ ((p_2(X²)·q_1(X²) − p_1(X²)·q_2(X²)) / (q_1²(X²) − q_2²(X²)·X²))·X,
so each fraction v = p(X)/q(X) may be written as a linear combination (with respect to ⊙) of the polynomials 1 and X, with the scalars
α(X) = (p_1(X)·q_1(X) − p_2(X)·q_2(X)·X) / (q_1²(X) − q_2²(X)·X) and
β(X) = (p_2(X)·q_1(X) − p_1(X)·q_2(X)) / (q_1²(X) − q_2²(X)·X),
meaning that
v = α ⊙ 1 + β ⊙ X,
which shows that {1, X} is a generating set. Since the set {1, X} is also linear independent with respect to ⊙ (exercise), it
follows that the structures (R(X), R(X), +, ·) and (R(X), R(X), +, ⊙) are different, because of the different
scalar multiplications: the first has dimension 1, while the second has dimension 2.
1.1.1. Lengths and Angles. We briefly mention the notions(3) connecting the vector space structure with
Euclidean Geometry; later there is an entire dedicated chapter (Chapter 3, page 141).
For two vectors x, y ∈ R^n, the expression ⟨x, y⟩ = x·y = Σ_{i=1}^{n} x_i·y_i is called the scalar (dot) product of the two vectors, and ‖v‖ = √⟨v, v⟩ is called the norm (length) of the vector v.
A vector is called a versor (unit vector) when ‖v‖ = 1. The versor of a nonzero vector v is v/‖v‖.
When two vectors are perpendicular, Pythagoras' Theorem takes place: v ⊥ w ⇒ ‖v + w‖² = ‖v‖² + ‖w‖².
Proof. v ⊥ w means ⟨v, w⟩ = Σ_{i=1}^{n} v_i·w_i = 0, so
‖v + w‖² = Σ_{i=1}^{n} (v_i + w_i)² = Σ_{i=1}^{n} (v_i² + w_i² + 2·v_i·w_i) = Σ_{i=1}^{n} v_i² + Σ_{i=1}^{n} w_i² + 2·Σ_{i=1}^{n} v_i·w_i = Σ_{i=1}^{n} v_i² + Σ_{i=1}^{n} w_i² = ‖v‖² + ‖w‖². □
1.1.32. Remark. Under the same hypothesis,
‖v − w‖² = Σ_{i=1}^{n} (v_i − w_i)² = Σ_{i=1}^{n} (v_i² + w_i² − 2·v_i·w_i) = Σ_{i=1}^{n} v_i² + Σ_{i=1}^{n} w_i² = ‖v‖² + ‖w‖² = ‖v + w‖².
[The parallelogram generated by the two vectors is a rectangle, so the two diagonals have the same
length.]
For two nonzero vectors, the cosine of the angle between their directions is cos(v, w) = ⟨v, w⟩ / (‖v‖·‖w‖).
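The notions of this subsection translate into a few lines of code; the sketch below (mine, not from the notes) checks the Pythagoras identity and the versor property on a concrete pair of perpendicular vectors.

```python
import math

def dot(v, w):
    return sum(vi * wi for vi, wi in zip(v, w))

def norm(v):
    return math.sqrt(dot(v, v))

v, w = (3, 4, 0), (-4, 3, 5)   # perpendicular: dot(v, w) = -12 + 12 + 0 = 0
assert dot(v, w) == 0

# Pythagoras: ||v + w||^2 = ||v||^2 + ||w||^2 for perpendicular vectors.
s = tuple(vi + wi for vi, wi in zip(v, w))
assert math.isclose(norm(s) ** 2, norm(v) ** 2 + norm(w) ** 2)

# The versor (unit vector) of v, and the cosine of the angle between v and w:
versor = tuple(vi / norm(v) for vi in v)
assert math.isclose(norm(versor), 1.0)
print(dot(v, w) / (norm(v) * norm(w)))  # cos of the angle: 0.0
```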
(3) These are the default notions, in the sense that they are considered valid when no other specification was made.

Proposition. In a vector space (V, K), ∀α, β ∈ K, ∀x, y ∈ V, we have:
(1) α·(x − y) = α·x − α·y and (α − β)·x = α·x − β·x;
(2) ∀α ∈ K, α·0_V = 0_V;
(3) ∀x ∈ V, 0_K·x = 0_V;
(4) ∀x ∈ V, (−1_K)·x = −x;
(5) α·x = 0_V ⇒ α = 0_K or x = 0_V.
Proof. Consider α, β ∈ K and x, y ∈ V arbitrarily chosen.
(1) α·x = α·((x − y) + y) = α·(x − y) + α·y ⇒ α·(x − y) = α·x − α·y; similarly, α·x = ((α − β) + β)·x = (α − β)·x + β·x ⇒ (α − β)·x = α·x − β·x.
(2) α·0_V = α·(x − x) = α·x − α·x = 0_V [alternative: α·0_V = α·(0_V + 0_V) = α·0_V + α·0_V; add −α·0_V in both terms to get α·0_V = 0_V].
(3) 0_K·x = (α − α)·x = α·x − α·x = 0_V.
(4) 0_V = 0_K·x = (1_K + (−1_K))·x = 1_K·x + (−1_K)·x ⇒ (−1_K)·x = −x.
(5) Suppose α·x = 0_V and α ≠ 0_K ⇒ ∃α⁻¹ ∈ K and x = 1_K·x = (α⁻¹·α)·x = α⁻¹·(α·x) = α⁻¹·0_V = 0_V. □
For A ⊆ V, span_K(A) = {Σ α_i·a_i; a_i ∈ A, α_i ∈ K} and:
A ⊆ V ⇒ A ⊆ span(A);
A ⊆ V ⇒ 0 ∈ span(A);
A_1 ⊆ A_2 ⊆ V ⇒ span(A_1) ⊆ span(A_2).
1.2.3. Remark. span(x_i)_{i=1,n} is a vector subspace. [In fact, the linear covering of any set, not necessarily finite, is a vector subspace, with a similar argument; with the convention span(∅) = {0}, all the
possibilities are covered.]
Proof. Consider v_1, v_2 ∈ span(x_i)_{i=1,n} and α ∈ K ⇒ ∃α_i^1, α_i^2 ∈ K, i = 1, n, such that v_j = Σ_{i=1}^{n} α_i^j·x_i, j = 1, 2; then v_1 + v_2 = Σ_{i=1}^{n} (α_i^1 + α_i^2)·x_i ∈ span(x_i)_{i=1,n} and
α·v_1 = Σ_{i=1}^{n} (α·α_i^1)·x_i ∈ span(x_i)_{i=1,n}. □
1.2.4. Remark. If V_0 is a subspace of (V, K) and x_i ∈ V_0, i = 1, n, then ∀α_i ∈ K, i = 1, n, Σ_{i=1}^{n} α_i·x_i ∈ V_0
(a slightly stronger statement actually takes place: if V_0 is a subspace, the linear covering of any subset
of the subspace is included in the subspace: ∀V_1 ⊆ V_0, span(V_1) ⊆ V_0).
Proof. By induction over n ∈ N: for n = 1, from axiom (2) of the definition of a subspace it follows that
for any x_1 ∈ V_0 and for any scalar α_1 ∈ K we have α_1·x_1 ∈ V_0.
Assume the statement for n and consider x_i ∈ V_0 and α_i ∈ K, i = 1, n + 1. Then:
Σ_{i=1}^{n+1} α_i·x_i = Σ_{i=1}^{n} α_i·x_i + α_{n+1}·x_{n+1};
Σ_{i=1}^{n} α_i·x_i ∈ V_0 (by the induction hypothesis), α_{n+1}·x_{n+1} ∈ V_0 and V_0 is a subspace, so
Σ_{i=1}^{n+1} α_i·x_i ∈ V_0. □
1.2.5. Remark. span(x_i)_{i=1,n} = ∩ {V_0; V_0 subspace of V, (x_i)_{i=1,n} ⊆ V_0} (the linear covering of a set is the intersection of all
vector subspaces containing the set) (the linear covering of a set is the smallest, in the sense of inclusion,
subspace which contains the set; the word "smallest" is used with respect to the inclusion relation:
"smallest" in this context means that any subspace which includes the set also includes the linear covering
of the set).
Proof. For each subspace V_0 which includes the set (x_i)_{i=1,n}, the subspace also includes
the linear covering of the set (Remark 1.2.4), so span(x_i)_{i=1,n} is included in the intersection; the other inclusion follows from Proposition 1.2.3:
span(x_i)_{i=1,n} is itself a subspace including the set, and so it is within the family of subspaces which
include the set, from where it follows that the intersection is included in span(x_i)_{i=1,n}. □
(Exchange property) If v ∈ span(A ∪ {w}) and v ∉ span(A), then w ∈ span(A ∪ {v}): write v = Σ_{i=1}^{n} α_i·v_i + α·w, with v_i ∈ A; if α = 0, then v ∈ span(A), a contradiction.
By dividing with α, we get w = (1/α)·v − Σ_{i=1}^{n} (α_i/α)·v_i ∈ span(A ∪ {v}).
1.3.2. Example. The set of all functions f(·) : X → R, together with the usual algebraic operations (with functions), is a vector space. [Interesting particular cases:
X = {1, …, n}; X = N.]
1.3.3. Example. The set of matrices with elements from a field K and with m lines and n columns.
The set is denoted by M_{m×n}(K); the vectors are matrices, vector addition is the matrix addition and the
multiplication by a scalar is the usual multiplication of a matrix by a scalar.
Example. The set of all real sequences (a_n)_{n∈N} for which lim_{n→∞} Σ_{k=0}^{n} a_k² exists and is finite [the series Σ_{n=0}^{∞} a_n² is convergent], together with the usual sequence addition and multiplication by a scalar.
1.3.6. Example. The set of all Cauchy rational sequences (a sequence (a_n)_{n∈N} ⊆ Q is called a Cauchy
sequence when ∀ε > 0, ∃n_ε ∈ N, ∀n, m ≥ n_ε, |a_n − a_m| < ε), together with the usual sequence operations, is a
vector space.
1.3.7. Example. The set of all real polynomials in the unknown t, denoted by R[t]. When p(t) =
a_0 + a_1·t + ⋯ + a_n·t^n and q(t) = b_0 + b_1·t + ⋯ + b_m·t^m, the sum p(t) + q(t) is obtained by adding the coefficients of the corresponding powers of t, and α·p(t) = (α·a_0) + (α·a_1)·t + ⋯ + (α·a_n)·t^n.
1.3.8. Example. The set of all real polynomials in the unknown t and with degree (in t) at most n,
denoted by R_n[t], with the usual polynomial operations.
1.3.9. Example. The set of all functions f(·) : R → R of class C^1 which are solutions of the
differential equation f′(t) + a·f(t) = 0, ∀t ∈ R, together with the usual function operations.
1.3.10. Example. The set D^∞(R, R) of the real functions which are indefinitely differentiable.
1.3.11. Example. The set of all real functions with domain [a, b] and codomain R, denoted by F([a, b], R).
1.3.12. Example. The set of all Lipschitzian functions from F([a, b], R) (functions f(·) : [a, b] → R such
that ∃k_f > 0 with |f(x) − f(y)| ≤ k_f·|x − y|, ∀x, y ∈ [a, b]).
1.4. Exercises

1.4.1. Example. The set of vectors {(1, 1, 1), (1, 2, 3), (3, 2, 1)} does not generate the vector space (R^3, R): since (1, 2, 3) + (3, 2, 1) = 4·(1, 1, 1), the set is linear dependent and its linear covering is a proper subspace of R^3.
1.4.2. Exercise. Consider a real vector space (V, R) and the operations:
+ : (V × V) × (V × V) → V × V, defined by
(x_1, y_1) + (x_2, y_2) = (x_1 + x_2, y_1 + y_2),
and
· : C × (V × V) → (V × V), defined by
(α + iβ)·(x, y) = (α·x − β·y, β·x + α·y).
Show that (V × V, C) with the above operations is a complex vector space (this vector space is called the
complexification of V).
1.4.3. Exercise. Show that the set of the nondifferentiable functions over [a, b] is not a vector space.
1.4.4. Exercise. Show that the union of two vector subspaces is generally not a subspace.
1.4.5. Exercise. Show that the set V_0 = {x ∈ R^n; Ax = 0} is a vector subspace, where A ∈ M_{m×n}(R).
1.4.6. Exercise. Consider the subspaces A = {(0, a, b, 0); a, b ∈ R}, B = {(0, 0, a, b); a, b ∈ R}. Determine A + B.
1.4.7. Exercise. Show that if the linear operator U(·) : R^n → R^n satisfies U²(·) + U(·) + I(·) = O(·),
then U(·) is bijective.
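A concrete sanity check for the last exercise (mine, not part of the notes): from U² + U + I = O we get U·(−(U + I)) = I, so −(U + I) is an explicit inverse and U is bijective. The matrix below, the companion matrix of X² + X + 1, satisfies the identity; the helper names are illustrative.

```python
# Minimal 2x2 matrix helpers (matrices as lists of rows).
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def madd(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def mneg(A):
    return [[-A[i][j] for j in range(2)] for i in range(2)]

I = [[1, 0], [0, 1]]
U = [[0, -1], [1, -1]]  # companion matrix of X^2 + X + 1

# U satisfies U^2 + U + I = O ...
assert madd(madd(matmul(U, U), U), I) == [[0, 0], [0, 0]]
# ... hence -(U + I) is a two-sided inverse of U, so U is bijective:
V = mneg(madd(U, I))
assert matmul(U, V) == I and matmul(V, U) == I
```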
1.5. Representation in Vector Spaces
1.5.1. Proposition. (Equivalent definitions for a basis) Consider a finite set of vectors denoted by B
from the vector space (V, K). The following statements are equivalent:
(1) B is linear independent and it is maximal with this property (in the sense that ∀v ∈ V ∖ B, the
set B ∪ {v} is not linear independent anymore);
(2) B is generating V and it is minimal with this property (in the sense that ∀v ∈ B, the set
B ∖ {v} doesn't generate V anymore);
(3) B is generating V and it is linear independent.
Proof. The proof will follow the steps: (1)⇒(2)⇒(3)⇒(1).
(1)⇒(2)
Assume B is linear independent and maximal (with this property); we prove by contradiction that the
set B generates V:
Assume that B is not generating V; then ∃v_0 ∈ V ∖ span(B) and the new set B_1 = B ∪ {v_0} is linear
independent (because otherwise v_0 would be a linear combination with B elements) and strictly includes
B, which is a contradiction with the maximality of B (with respect to the linear independence property).
The contradiction originates from the hypothesis that B doesn't generate V, so, by contradiction, we get
that B generates V.
We also prove by contradiction that B is minimal (as a generating set for V):
Assume by contradiction that B, as a generating set of V, is not minimal. Then there is at least one
element v_1 ∈ B such that B ∖ {v_1} still generates V (at least one element may be removed from B without
affecting the generating property); since v_1 is a linear combination of B ∖ {v_1}, it follows that the initial
set B is not linear independent, which is a contradiction with the hypothesis. Since the contradiction
originates from the hypothesis that B as a generating set is not minimal, it follows that B is minimal (as
a generating set of V).
(2)⇒(3)
Since B generates V, suppose by contradiction that the set B is not linear independent: ∃v_2 ∈ B such
that v_2 ∈ span(B ∖ {v_2}); then the set B ∖ {v_2} still generates V (each vector of the space is a linear combination with vectors from B; if v_2 participates in this linear
combination, by replacing it with its linear combination from B ∖ {v_2} we get a new linear combination
only with vectors from B ∖ {v_2}), which is a contradiction with the minimality of B as a generating set.
(3)⇒(1)
Since B is linear independent, if B is not maximal then there is a vector v′ ∈ V ∖ B such that B ∪ {v′}
is still linear independent. This means that v′ ∉ span(B), which contradicts that B generates V. □
1.5.2. Definition. In a vector space (V, K), a set B ⊆ V is called a basis when it satisfies the
equivalent statements from Proposition 1.5.1, page 26. A basis for which the order of the vectors also
matters is called an ordered basis.
1.5.3. Definition. A vector space with at least one basis which is a finite set is called a vector space of
finite type.
A vector space which has no finite basis is called a vector space of infinite type.
1.5.4. Theorem. (The sufficiency of maximality in a generating set) If the finite set S generates
V and the set B_0 ⊆ S is linear independent, then there is a set B such that B_0 ⊆ B ⊆ S and B is a basis
for V.
(the proof will show that when S is finite and generates V, the maximality of B in S as a linear
independent set is enough for the maximality of B in V as a linear independent set)
Proof. We inductively obtain a set B with the properties:
a) the set B is linear independent;
b) B_0 ⊆ B ⊆ S;
c) B is maximal in S with respect to properties a) and b).
Start with B = B_0 (the set B_0 is linear independent and it may or may not be maximal in S with
respect to this property); while there is an element of S outside B whose addition to B leaves the set linear independent, add it to B. Because of the finitude of the set S, the procedure stops in a finite number of steps.
From the procedure we get a set B satisfying the properties a), b), c), which is not necessarily
unique, but satisfies the following maximality property in S: if B_1 is linear independent and B ⊆ B_1 ⊆ S,
then B = B_1.
From the procedure for obtaining B it also follows that S ⊆ span(B): otherwise there is at least one element among y_i ∈ S, i = 1, k, which is not in span(B); denote it by v_0; then B ∪ {v_0} ⊆ S is linear independent and strictly includes B, a contradiction with the maximality of B in S. Since S generates V and each element of S is in span(B), the set B generates V; being also linear independent, B is a basis for V. □
1.5.5. Corollary. If the finite set S generates V ≠ {0}, then S contains a basis of V: choose a nonzero element v_0 ∈ S (the set {v_0} is linear independent)
and we apply the previous result to obtain a set B which is a basis for V such that {v_0} ⊆ B ⊆ S.
1.5.6. Theorem. (The Steinitz exchange lemma) Consider a linear independent set B = {v_1, …, v_r}
and a generating set S = {u_1, …, u_n}, both in V. Then r ≤ n and, after a possible renumbering of S, the set
{v_1, …, v_r, u_{r+1}, …, u_n} generates V. (Any set of linear independent vectors has at most as many vectors as any
generating set, and the linear independent set may replace the same number of vectors from the generating
set while preserving the generating property.)
Proof. Induction over the number of linear independent vectors, j = 1, r:
For j = 1: ∃α_i ∈ K, i = 1, n, such that
v_1 = Σ_{i=1}^{n} α_i·u_i;
if all the scalars α_i were zero, then v_1 would be zero (a contradiction with the linear independence property
of {v_1, …, v_r}); so at least one scalar is nonzero and, maybe with a reordering of the vectors from
u_1, …, u_n, we may assume α_1 ≠ 0. Then:
(1.5.1) u_1 = (1/α_1)·v_1 − Σ_{i=2}^{n} (α_i/α_1)·u_i,
and the set {v_1, u_2, …, u_n} generates V: consider v ∈ V arbitrary; ∃β_i ∈ K, i = 1, n, such that v = Σ_{i=1}^{n} β_i·u_i; by replacing u_1 from (1.5.1),
v = β_1·u_1 + Σ_{i=2}^{n} β_i·u_i = (β_1/α_1)·v_1 + Σ_{i=2}^{n} (β_i − β_1·α_i/α_1)·u_i,
which is a linear combination of {v_1, u_2, …, u_n}.
If r > n: after n steps, n vectors from the linear independent set B have replaced all the n
vectors from the generating set S, so the set {v_1, …, v_n} generates V; then it is a generating
set, which means it is also maximal with respect to the linear independence property, a contradiction with
the existence of another vector v_{n+1} such that the set {v_1, …, v_{n+1}} is linear independent.
So r ≤ n.
Induction step: assume that (after a renumbering of S) the set {v_1, …, v_{r−1}, u_r, …, u_n} generates V ⇒ ∃β_i, γ_i ∈ K such that
v_r = Σ_{i=1}^{r−1} β_i·v_i + Σ_{i=r}^{n} γ_i·u_i.
If all the scalars γ_i were zero, then v_r would be a linear combination of v_1, …, v_{r−1} (a contradiction with the linear independence of B); so ∃i_0 ∈ {r, …, n} such that γ_{i_0} ≠ 0 and, maybe with a reordering of the vectors u_r, …, u_n, we may assume γ_r ≠ 0. Then:
(1.5.2) u_r = (1/γ_r)·v_r − Σ_{i=1}^{r−1} (β_i/γ_r)·v_i − Σ_{i=r+1}^{n} (γ_i/γ_r)·u_i,
and, because {v_1, …, v_{r−1}, u_r, …, u_n} generates V, the set {v_1, …, v_r, u_{r+1}, …, u_n} also generates V: ∀v ∈ V, ∃δ_i ∈ K, i = 1, n, such that
v = Σ_{i=1}^{r−1} δ_i·v_i + Σ_{i=r}^{n} δ_i·u_i;
by replacing u_r from (1.5.2),
v = Σ_{i=1}^{r−1} (δ_i − δ_r·β_i/γ_r)·v_i + (δ_r/γ_r)·v_r + Σ_{i=r+1}^{n} (δ_i − δ_r·γ_i/γ_r)·u_i,
which means that each vector from V is a linear combination of the set {v_1, …, v_r, u_{r+1}, …, u_n} (the set
generates V). □
1.5.7. Corollary. In a nitetype vector space, any linear independent set may be completed up to a
basis.
Proof. Because the space is of finite type, we have a finite generating set; by using the Steinitz exchange Lemma, we may replace some vectors from the generating set with the vectors of the linear independent set, while preserving the generating property.
Now we have a linear independent set included in a finite generating set, so we are in position to apply Theorem 1.5.4 to obtain a basis which contains the initial linear independent set (and which may be considered as "completing" the initial independent set up to a basis). □
1.5.8. Corollary. In a finite-type vector space, the number of vectors of any linear independent set is at most the number of vectors of any generating set.
Proof. From the Steinitz Exchange Lemma.
1.5.9. Corollary. In a finite-type vector space, any two bases have the same number of elements.
Proof. Both bases may be viewed as linear independent sets and generating sets.
1.5.10. Definition. The number of vectors of any basis of a finite-type vector space is called the dimension of the vector space (V, K) and is denoted by dim_K V.
1.5.11. Proposition. Given a basis in a finite-type vector space, any vector is a linear combination of the basis. Moreover, the scalars participating in the linear combination are uniquely determined by the basis.
Proof. Consider a basis B = {u₁, …, u_n} and v ∈ V.
Because B generates V, the vector v is a linear combination of B (which proves the existence of the scalars).
Moreover, the scalars are unique: assume two linear combinations, ∃ α_i, β_i ∈ K, i = 1, …, n, with

v = Σ_{i=1}^n α_i u_i = Σ_{i=1}^n β_i u_i.

Then Σ_{i=1}^n (α_i − β_i) u_i = 0 and, because B is linear independent, α_i = β_i, i = 1, …, n. □
1.5.12. Definition. Given the vector space (V, K), the basis (ordered basis) B = {u₁, …, u_n} and the vector v ∈ V, the scalars from the previous result are called the coordinates (the representation) of the vector v in the basis B.
This will be denoted by [v]_B and the coordinates will be considered (by convention) as columns:

[v]_B = (α₁, α₂, …, α_n)ᵀ  [matrix-form representation of v in B],
v = Σ_{i=1}^n α_i u_i  [vector-form representation of v in B].
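Finding the coordinates amounts to solving the linear system whose matrix has the basis vectors as columns. A minimal sketch (not part of the original notes; the function name and the basis used in the example are illustrative):

```python
from fractions import Fraction

def coordinates(basis, v):
    """Solve sum_i x_i * basis[i] = v by Gauss-Jordan elimination (exact arithmetic)."""
    n = len(v)
    # augmented matrix: basis vectors as columns, v as the last column
    a = [[Fraction(basis[j][i]) for j in range(n)] + [Fraction(v[i])] for i in range(n)]
    for col in range(n):
        piv = next(r for r in range(col, n) if a[r][col] != 0)
        a[col], a[piv] = a[piv], a[col]
        a[col] = [x / a[col][col] for x in a[col]]       # pivot line divided by the pivot
        for r in range(n):
            if r != col and a[r][col] != 0:
                a[r] = [x - a[r][col] * y for x, y in zip(a[r], a[col])]
    return [a[i][n] for i in range(n)]

# v = (3, 1) in the basis B = ((1, 1), (1, -1)) of R^2: v = 2*(1,1) + 1*(1,-1)
print(coordinates([(1, 1), (1, -1)], (3, 1)))  # [Fraction(2, 1), Fraction(1, 1)]
```

The returned column is exactly [v]_B in the sense of Definition 1.5.12.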
1.5.13. Remark (Canonical bases). Each vector space has an infinity of possible bases. As a convention for commonly used vector spaces, if there is no mention about the basis in which a representation takes place, then it is assumed that the representation is done in some (conventionally) special basis usually called the standard basis or the canonical basis.
[The standard basis for Rⁿ] The set E = {e₁, …, e_n}, with e₁ = (1, 0, …, 0), e₂ = (0, 1, 0, …, 0), …, e_n = (0, …, 0, 1); in general, e_j = (δ_{1j}, …, δ_{jj}, …, δ_{nj}), j = 1, …, n (δ_{ij} is the Kronecker symbol: δ_{ij} = 1 for i = j and δ_{ij} = 0 for i ≠ j).
[The standard basis for R²] The set E = {e₁, e₂} = {(1, 0), (0, 1)}.
[The standard basis for R³] The set E = {e₁, e₂, e₃} = {(1, 0, 0), (0, 1, 0), (0, 0, 1)}.
[The standard basis for R_n[t]] [the set of all polynomials with real coefficients and degree at most n in the unknown t] The set E = {1, t¹, …, tⁿ}.
[The standard basis for R[t]] [the set of all real polynomials in the unknown t] The set E = {1, t¹, …, tⁿ, …}.
[The standard basis for M_{m×n}(R)] [the set of all matrices with m lines and n columns, with real entries] The set E = {E₁₁, …, E_{mn}}, where E_{i₀j₀} ∈ M_{m×n}(R) is the matrix with the general term a_{ij} = 1 for (i, j) = (i₀, j₀) and a_{ij} = 0 for (i, j) ≠ (i₀, j₀).
[The standard basis for M_{3×2}(R)] The set E = {E₁₁, E₁₂, E₂₁, E₂₂, E₃₁, E₃₂}, where, for example,

E₁₁ = [1 0; 0 0; 0 0], E₁₂ = [0 1; 0 0; 0 0], E₂₂ = [0 0; 0 1; 0 0],

and so on (the rows of each matrix are separated by semicolons).
1.5.14. Remark. Consider the vector space (V, K), a fixed ordered basis B = (e₁, …, e_n), the vectors u = Σ_{i=1}^n γ_i e_i, u₁ = Σ_{i=1}^n α_i e_i, u₂ = Σ_{i=1}^n β_i e_i ∈ V and the scalars α, β ∈ K. Then the vector relation

u = α u₁ + β u₂

takes place if and only if the corresponding matrix relation between the vector representations is taking place:

[u]_B = α [u₁]_B + β [u₂]_B.

Indeed,

u = α u₁ + β u₂ ⟺ Σ_{i=1}^n γ_i e_i = α Σ_{i=1}^n α_i e_i + β Σ_{i=1}^n β_i e_i ⟺ Σ_{i=1}^n (γ_i − αα_i − ββ_i) e_i = 0 ⟺ γ_i = αα_i + ββ_i, i = 1, …, n

(because B is linear independent), which is exactly the relation between the columns (γ₁, …, γ_n)ᵀ, (α₁, …, α_n)ᵀ and (β₁, …, β_n)ᵀ.
In particular, in the standard basis E the basis vectors themselves are represented by the unit columns: [e₁]_E = (1, 0, …, 0)ᵀ, …, [e_n]_E = (0, …, 0, 1)ᵀ.
Consider now, in the vector space (V, K) with the fixed ordered basis E = (e₁, …, e_n), a finite set of vectors B₁ = {v₁, …, v_m} ⊆ V. The vectors are represented in the basis E in vector form

v_j = Σ_{i=1}^n α_{ij} e_i, ∀ j = 1, …, m,

and in matrix form [v_j]_E = (α_{1j}, α_{2j}, …, α_{nj})ᵀ. Denote by [M(B₁)]_E the n×m matrix having the columns [v₁]_E, …, [v_m]_E:

[M(B₁)]_E = (α_{ij}), i = 1, …, n, j = 1, …, m.

The following properties of the set B₁ may be studied with this matrix:
The set B₁ is linear independent if and only if rank [M(B₁)]_E = m [the rank is strictly smaller than m if and only if (at least) a column is a linear combination of the others, which means that the set B₁ is linear dependent];
The set B₁ generates the environment V if and only if rank [M(B₁)]_E = n (a consequence is: the set B₁ cannot generate V when the number of vectors is smaller than the environment dimension); [the set B₁ generates V if and only if the nonhomogeneous linear system [M(B₁)]_E x = [v]_E is a compatible system (has at least a solution) for each possible vector v; when rank [M(B₁)]_E < n, the set B₁ may be completed with at least a certain additional vector v such that the rank of the matrix attached to the new set is strictly bigger than the rank of the old matrix, and so for such a vector v the initial linear system is incompatible];
The set B₁ is a basis if and only if n = m and rank [M(B₁)]_E = n [from the previous remarks];
When the set B₁ is an ordered basis, any vector from the environment may be represented in both perspectives: the initial perspective E and the new perspective B₁.
When B₁ = (v₁, …, v_n) is an ordered basis (so m = n), the matrix

[M(B₁)]_E = (α_{ij}), i, j = 1, …, n,

is square. This matrix has as columns the representations in the old basis of the new basis vectors.
In the new basis, the vectors of the new basis B₁ will be represented as the unit columns:

[v₁]_{B₁} = (1, 0, …, 0)ᵀ, [v₂]_{B₁} = (0, 1, 0, …, 0)ᵀ, …, [v_n]_{B₁} = (0, …, 0, 1)ᵀ.

The vectors of the old basis E may also be represented in the new basis B₁: [e₁]_{B₁} = (α′₁₁, α′₂₁, …, α′_{n1})ᵀ, …, [e_n]_{B₁} = (α′_{1n}, α′_{2n}, …, α′_{nn})ᵀ, that is,

e_i = Σ_{j=1}^n α′_{ji} v_j, i = 1, …, n.

Denote by [M(E)]_{B₁} the matrix with columns given by the coordinates of the old basis E in the new basis B₁:

[M(E)]_{B₁} = (α′_{ij}), i, j = 1, …, n.

Consider now a vector x with the new representation [x]_{B₁} = (x′₁, x′₂, …, x′_n)ᵀ. Then

x = Σ_{j=1}^n x′_j v_j = Σ_{j=1}^n x′_j (Σ_{i=1}^n α_{ij} e_i) = Σ_{i=1}^n (Σ_{j=1}^n α_{ij} x′_j) e_i;

but x = Σ_{i=1}^n x_i e_i, so that x_i = Σ_{j=1}^n α_{ij} x′_j, i = 1, …, n, which written in matrix form is

(x₁, x₂, …, x_n)ᵀ = (α_{ij}) (x′₁, x′₂, …, x′_n)ᵀ,

and so we obtain:

[x]_E = [M(B₁)]_E [x]_{B₁}

(the matrix [M(B₁)]_E is the change-of-basis matrix from B₁ to E). The change-of-basis matrix from the new basis to the old basis has as columns the coordinates in the new basis of the vectors from the old basis. In a similar way we have the connection

[x]_{B₁} = [M(E)]_{B₁} [x]_E

(the matrix [M(E)]_{B₁} is the change-of-basis matrix from E to B₁), which means that for any vector x we have the equality

[M(E)]_{B₁} [x]_E = ([M(B₁)]_E)⁻¹ [x]_E,

which means that the matrices are equal: [M(E)]_{B₁} = [M(B₁)]_E⁻¹.
A diagram of the matrix connections between two bases:

B₁ → E by [M(B₁)]_E,  E → B₁ by [M(B₁)]_E⁻¹.

Given two ordered bases B₁ and B₂, both represented in the initial ordered basis E, passing from B₁ to B₂ is accomplished by pivoting on E: by using the previous formulas, we get

[x]_{B₁} = [M(B₁)]_E⁻¹ [x]_E, [x]_{B₂} = [M(B₂)]_E⁻¹ [x]_E, [x]_E = [M(B₂)]_E [x]_{B₂},

so that [x]_{B₁} = [M(B₁)]_E⁻¹ [M(B₂)]_E [x]_{B₂}.
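The two change-of-basis formulas can be checked numerically; a small sketch (not from the notes; the basis B₁ chosen here is illustrative):

```python
from fractions import Fraction

# new basis B1 = (v1, v2) given by its columns in the old basis E (example values)
M = [[Fraction(1), Fraction(1)],
     [Fraction(1), Fraction(2)]]          # [M(B1)]_E

def mat_vec(m, x):
    return [sum(row[j] * x[j] for j in range(len(x))) for row in m]

def inv2(m):
    # inverse of a 2x2 matrix: (1/det) [d -b; -c a]
    a, b, c, d = m[0][0], m[0][1], m[1][0], m[1][1]
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

x_B1 = [Fraction(2), Fraction(3)]
x_E = mat_vec(M, x_B1)                    # [x]_E = [M(B1)]_E [x]_B1
print(x_E)                                # [Fraction(5, 1), Fraction(8, 1)]
print(mat_vec(inv2(M), x_E))              # back to [x]_B1 via [M(E)]_B1 = [M(B1)]_E^-1
```

Going forward with [M(B₁)]_E and back with its inverse recovers the original coordinates, which is exactly the equality [M(E)]_{B₁} = [M(B₁)]_E⁻¹.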
Consider a linear system of n equations with m unknowns:

a₁₁x₁ + a₁₂x₂ + ⋯ + a_{1j}x_j + ⋯ + a_{1m}x_m = b₁
a₂₁x₁ + a₂₂x₂ + ⋯ + a_{2j}x_j + ⋯ + a_{2m}x_m = b₂
⋮
a_{i1}x₁ + a_{i2}x₂ + ⋯ + a_{ij}x_j + ⋯ + a_{im}x_m = b_i
⋮
a_{n1}x₁ + a_{n2}x₂ + ⋯ + a_{nj}x_j + ⋯ + a_{nm}x_m = b_n

in which the coefficient a_{ij} is nonzero. We want to eliminate the unknown x_j from all the equations except the i-th equation (in other words, the coefficients of x_j will become zero in all equations except the equation i) and for equation i we want the coefficient to become 1. In order to obtain these goals, we will perform the following operations:
divide the equation i by a_{ij} and replace the equation i with the result;
for each equation k = 1, …, n, k ≠ i, add to the equation k the (new) equation i multiplied by (−a_{kj}) and replace the equation k with the result.
After these row operations, the new system will have the unknown x_j only within the i-th equation.
The operations are systematized with the following rules: the element a_{ij} is called PIVOT and the new coefficients will be obtained by using THE PIVOT RULE:
(1) The places (l, j), l = 1, …, n, l ≠ i (the pivot column) are becoming zero, while the place (i, j) (the pivot place) becomes 1.
(2) The places (i, k), k = 1, …, m, k ≠ j (the pivot line) are becoming a_{ik}/a_{ij} (the pivot line is divided by the pivot).
(3) The other places (neither on the line nor on the column of the pivot), (k, l) with k = 1, …, n, k ≠ i and l = 1, …, m, l ≠ j, are calculated by using
THE RECTANGLE RULE:

            column j   column l
  line i     a_{ij}     a_{il}
  line k     a_{kj}     a_{kl}

For each place (k, l), consider the rectangle with edges (i, j), (i, l), (k, j), (k, l) (specific for each place (k, l)); the new value for a_{kl} is obtained by the formula: the product on the pivot diagonal minus the product on the other diagonal, and everything divided by the pivot:

a′_{kl} = (a_{ij} a_{kl} − a_{kj} a_{il}) / a_{ij},

where a′_{kl} stands for the new value of the place (k, l).
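The pivot rule and the rectangle rule can be stated directly as code; a minimal sketch (not part of the original notes), applied to the first pivot of the system 4x₁+3x₂+3x₃=14, 3x₁+2x₂+5x₃=13, 2x₁+x₂+8x₃=13:

```python
from fractions import Fraction

def pivot_step(t, i, j):
    """Apply the pivot rule at place (i, j): the pivot line is divided by the
    pivot, the pivot column becomes a unit column, and every other place is
    recomputed with the rectangle rule."""
    piv = t[i][j]
    new = [row[:] for row in t]
    for k in range(len(t)):
        for l in range(len(t[0])):
            if k == i:
                new[k][l] = t[i][l] / piv                       # pivot line
            elif l == j:
                new[k][l] = Fraction(0)                          # pivot column
            else:                                                # rectangle rule
                new[k][l] = (piv * t[k][l] - t[k][j] * t[i][l]) / piv
    return new

t = [[Fraction(v) for v in row] for row in
     [[4, 3, 3, 14], [3, 2, 5, 13], [2, 1, 8, 13]]]
t = pivot_step(t, 0, 0)      # pivot on a11 = 4
print(t[0])                  # [1, 3/4, 3/4, 7/2]
```

Each table of the pivot method is obtained from the previous one by a single call of this kind.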
All the calculations are placed in a table and, from a vector space perspective, they have the following meaning:
The first column from the left is holding, for each step, the names of the current basis (representation system) (including the order of the basis vectors, from up, the first one, to down, the last one)⁴.
The first line represents the names of the included vectors⁵.
The (numerical) columns are the coordinates of the vectors (with names from the first line) in the current basis (given by the first column).
The first table represents the initial representation, usually done in the standard basis E.
The following tables are giving the coordinates in intermediary bases.
The last table gives the representations in the final basis.
The choice of the pivot a_{ij} means that the new basis is obtained by replacing in the old basis the vector from the line i with the vector from the column j.
⁴ This column is a metadata column: it gives information about the names of the current basis vectors and their order.
⁵ This line is a metadata line which gives information about the names of the vectors.

The initial table (pivoting on a_{ij}, marked with #):

       | v₁     ⋯ v_j     ⋯ v_m     | e₁ ⋯ e_i ⋯ e_n | b
  e₁   | a₁₁    ⋯ a_{1j}  ⋯ a_{1m}  | 1  ⋯  0  ⋯  0  | b₁
  e₂   | a₂₁    ⋯ a_{2j}  ⋯ a_{2m}  | 0  ⋯  0  ⋯  0  | b₂
  ⋮    |  ⋮        ⋮         ⋮      | ⋮     ⋮     ⋮  | ⋮
  e_i  | a_{i1} ⋯ a_{ij}# ⋯ a_{im}  | 0  ⋯  1  ⋯  0  | b_i
  ⋮    |  ⋮        ⋮         ⋮      | ⋮     ⋮     ⋮  | ⋮
  e_n  | a_{n1} ⋯ a_{nj}  ⋯ a_{nm}  | 0  ⋯  0  ⋯  1  | b_n

After the pivot, the vector v_j replaces e_i in the current basis and the new table is:

       | v₁      ⋯ v_j ⋯ v_m      | e₁ ⋯ e_i            ⋯ e_n | b
  e₁   | a′₁₁    ⋯  0  ⋯ a′_{1m}  | 1  ⋯ −a_{1j}/a_{ij} ⋯  0  | b′₁
  ⋮    |  ⋮         ⋮     ⋮       | ⋮        ⋮             ⋮  | ⋮
  v_j  | a_{i1}/a_{ij} ⋯ 1 ⋯ a_{im}/a_{ij} | 0 ⋯ 1/a_{ij} ⋯ 0 | b_i/a_{ij}
  ⋮    |  ⋮         ⋮     ⋮       | ⋮        ⋮             ⋮  | ⋮
  e_n  | a′_{n1} ⋯  0  ⋯ a′_{nm}  | 0  ⋯ −a_{nj}/a_{ij} ⋯  1  | b′_n

where the entries a′_{kl} and b′_k are computed with the rectangle rule.
The procedure may be used to find the complete solution of a nonhomogeneous linear system

Ax = b

in the following way:
(1) write the table attached to the system (the matrix A together with the column b);
(2) choose a pivot (a nonzero element, on a line and a column not used before) and apply the pivot rule;
(3) repeat the previous step as long as a new pivot may be chosen;
(4) after a suitable reordering of lines and columns, the last table may be written in a system form like this:

[ I  A′₁₂ ] [x_P]   [b′₁]
[ 0   0   ] [x_S] = [b′₂]

which means

I · x_P + A′₁₂ · x_S = b′₁ [main equations]
0 · x_P + 0 · x_S = b′₂ [secondary equations]

(5) The secondary equations are used to decide the compatibility of the system:

b′₂ = 0 ⇒ compatible system
b′₂ ≠ 0 ⇒ incompatible system

after which the secondary equations are becoming redundant and they may be discarded;
(6) When the system is compatible, the secondary unknowns are considered parameters while the new system, viewed as

x_P = b′₁ − A′₁₂ x_S,

becomes a Cramer system: for each possible value of the parameters, the system has a unique solution, which is already present in the way the system is written.
1.6.1. Exercise. Solve the following linear systems by using the pivot method:
(1) 4x₁ + 3x₂ + 3x₃ = 14, 3x₁ + 2x₂ + 5x₃ = 13, 2x₁ + x₂ + 8x₃ = 13 (the solution is (x₁, x₂, x₃) = (2, 1, 1));
(2) 4x₁ + 3x₂ + 3x₃ = 6, 3x₁ + 2x₂ + x₃ = 8, 11x₁ + 8x₂ + 7x₃ = 20 (the solution is (x₁, x₂, x₃) = (3α + 12, −5α − 14, α), α ∈ R);
(3) x₁ + x₂ + x₃ = 10, 2x₁ + x₂ + x₃ = 16, 3x₁ + 2x₂ + 2x₃ = 24 (no solution).
1.6.2. Solution.
(1) The pivot method for this system has the following form:

      | x₁    x₂     x₃   |  b
  e₁  |  4#    3      3   | 14
  e₂  |  3     2      5   | 13
  e₃  |  2     1      8   | 13

  x₁  |  1    3/4    3/4  | 7/2
  e₂  |  0   −1/4#  11/4  | 5/2
  e₃  |  0   −1/2   13/2  |  6

  x₁  |  1     0      9   | 11
  x₂  |  0     1    −11   | −10
  e₃  |  0     0      1#  |  1

  x₁  |  1     0      0   |  2
  x₂  |  0     1      0   |  1
  x₃  |  0     0      1   |  1

The solution may be read from the column b of the last table and it is (x₁, x₂, x₃) = (2, 1, 1).
In matrix form, the operations for each pivot are the following (by using elementary matrices; rows are separated by semicolons):

[1/4 0 0; −3/4 1 0; −1/2 0 1] · [4 3 3 14; 3 2 5 13; 2 1 8 13] = [1 3/4 3/4 7/2; 0 −1/4 11/4 5/2; 0 −1/2 13/2 6],
[1 3 0; 0 −4 0; 0 −2 1] · [1 3/4 3/4 7/2; 0 −1/4 11/4 5/2; 0 −1/2 13/2 6] = [1 0 9 11; 0 1 −11 −10; 0 0 1 1],
[1 0 −9; 0 1 11; 0 0 1] · [1 0 9 11; 0 1 −11 −10; 0 0 1 1] = [1 0 0 2; 0 1 0 1; 0 0 1 1].

The matrix identity from the initial matrix [4 3 3 14; 3 2 5 13; 2 1 8 13] up to the final matrix [1 0 0 2; 0 1 0 1; 0 0 1 1] (by using elementary matrices) is:

[1 0 −9; 0 1 11; 0 0 1] · [1 3 0; 0 −4 0; 0 −2 1] · [1/4 0 0; −3/4 1 0; −1/2 0 1] · [4 3 3 14; 3 2 5 13; 2 1 8 13] = [1 0 0 2; 0 1 0 1; 0 0 1 1],

or, with

[1 0 −9; 0 1 11; 0 0 1] · [1 3 0; 0 −4 0; 0 −2 1] · [1/4 0 0; −3/4 1 0; −1/2 0 1] = [−11 21 −9; 14 −26 11; 1 −2 1] and [−11 21 −9; 14 −26 11; 1 −2 1]⁻¹ = [4 3 3; 3 2 5; 2 1 8],

in the form:

[4 3 3; 3 2 5; 2 1 8] · [1 0 0 2; 0 1 0 1; 0 0 1 1] = [4 3 3 14; 3 2 5 13; 2 1 8 13]

[this is a factorization also known as LU decomposition, see [37], Section 2.6, page 83]
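The whole computation above can be reproduced mechanically; a small sketch (not part of the original notes) that pivots on the main diagonal, as in this example, where every diagonal pivot happens to be nonzero:

```python
from fractions import Fraction

def gauss_jordan(aug):
    """Repeated diagonal pivoting on an augmented matrix (exact arithmetic).
    Assumes every diagonal pivot is nonzero, as in the worked example."""
    a = [[Fraction(x) for x in row] for row in aug]
    for i in range(len(a)):
        p = a[i][i]
        a[i] = [x / p for x in a[i]]                 # pivot line / pivot
        for k in range(len(a)):
            if k != i:
                a[k] = [x - a[k][i] * y for x, y in zip(a[k], a[i])]
    return a

final = gauss_jordan([[4, 3, 3, 14], [3, 2, 5, 13], [2, 1, 8, 13]])
print([row[-1] for row in final])   # the column b of the last table: [2, 1, 1]
```

Reading the last column of the final table gives exactly the solution (2, 1, 1) found above.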
The same computation may be done while keeping track of the inverse matrix, by adding the columns of the identity to the table:

      | x₁    x₂     x₃   | e₁    e₂    e₃  |  b
  e₁  |  4#    3      3   |  1     0     0  | 14
  e₂  |  3     2      5   |  0     1     0  | 13
  e₃  |  2     1      8   |  0     0     1  | 13

  x₁  |  1    3/4    3/4  | 1/4    0     0  | 7/2
  e₂  |  0   −1/4#  11/4  |−3/4    1     0  | 5/2
  e₃  |  0   −1/2   13/2  |−1/2    0     1  |  6

  x₁  |  1     0      9   | −2     3     0  | 11
  x₂  |  0     1    −11   |  3    −4     0  | −10
  e₃  |  0     0      1#  |  1    −2     1  |  1

  x₁  |  1     0      0   |−11    21    −9  |  2
  x₂  |  0     1      0   | 14   −26    11  |  1
  x₃  |  0     0      1   |  1    −2     1  |  1
In matrix form, the operations for each pivot are (by using elementary matrices):

[1/4 0 0; −3/4 1 0; −1/2 0 1] · [4 3 3 14 1 0 0; 3 2 5 13 0 1 0; 2 1 8 13 0 0 1] = [1 3/4 3/4 7/2 1/4 0 0; 0 −1/4 11/4 5/2 −3/4 1 0; 0 −1/2 13/2 6 −1/2 0 1],
[1 3 0; 0 −4 0; 0 −2 1] · [1 3/4 3/4 7/2 1/4 0 0; 0 −1/4 11/4 5/2 −3/4 1 0; 0 −1/2 13/2 6 −1/2 0 1] = [1 0 9 11 −2 3 0; 0 1 −11 −10 3 −4 0; 0 0 1 1 1 −2 1],
[1 0 −9; 0 1 11; 0 0 1] · [1 0 9 11 −2 3 0; 0 1 −11 −10 3 −4 0; 0 0 1 1 1 −2 1] = [1 0 0 2 −11 21 −9; 0 1 0 1 14 −26 11; 0 0 1 1 1 −2 1].

The matrix identity starting from the initial matrix [4 3 3 14 1 0 0; 3 2 5 13 0 1 0; 2 1 8 13 0 0 1] and ending with the final matrix [1 0 0 2 −11 21 −9; 0 1 0 1 14 −26 11; 0 0 1 1 1 −2 1] (by using elementary matrices) is:

[1 0 −9; 0 1 11; 0 0 1] · [1 3 0; 0 −4 0; 0 −2 1] · [1/4 0 0; −3/4 1 0; −1/2 0 1] · [4 3 3 14 1 0 0; 3 2 5 13 0 1 0; 2 1 8 13 0 0 1] = [1 0 0 2 −11 21 −9; 0 1 0 1 14 −26 11; 0 0 1 1 1 −2 1].

By using

[1 0 −9; 0 1 11; 0 0 1] · [1 3 0; 0 −4 0; 0 −2 1] · [1/4 0 0; −3/4 1 0; −1/2 0 1] = [−11 21 −9; 14 −26 11; 1 −2 1] and [−11 21 −9; 14 −26 11; 1 −2 1]⁻¹ = [4 3 3; 3 2 5; 2 1 8],

the corresponding LU decomposition is:

[4 3 3; 3 2 5; 2 1 8] · [1 0 0 2 −11 21 −9; 0 1 0 1 14 −26 11; 0 0 1 1 1 −2 1] = [4 3 3 14 1 0 0; 3 2 5 13 0 1 0; 2 1 8 13 0 0 1].
Various verifications from the table:

[4 3 3; 3 2 5; 2 1 8] · [−11 21 −9; 14 −26 11; 1 −2 1] = [1 0 0; 0 1 0; 0 0 1];
[4 3 3; 3 2 5; 2 1 8] · [2; 1; 1] = [14; 13; 13];
[4 3 3; 3 2 5; 2 1 8] · [−11; 14; 1] = [1; 0; 0];
[4 3 3; 3 2 5; 2 1 8] · [21; −26; −2] = [0; 1; 0];
[4 3 3; 3 2 5; 2 1 8] · [−9; 11; 1] = [0; 0; 1].
(2) The pivot method for this system has the following form:

      | x₁    x₂     x₃   |  b
  e₁  |  4#    3      3   |  6
  e₂  |  3     2      1   |  8
  e₃  | 11     8      7   | 20

  x₁  |  1    3/4    3/4  | 3/2
  e₂  |  0   −1/4#  −5/4  | 7/2
  e₃  |  0   −1/4   −5/4  | 7/2

  x₁  |  1     0     −3   | 12
  x₂  |  0     1      5   | −14
  e₃  |  0     0      0   |  0

The procedure cannot be continued anymore since the last line from the last table (corresponding with the unknowns) is zero. Because the element corresponding with the column b is also zero, the system is compatible and 1-undetermined (it has 2 main unknowns and 1 secondary unknown).
The complete solution of the system is

(x₁, x₂, x₃) ∈ {(12 + 3α, −14 − 5α, α) : α ∈ R}.
In matrix form, the operations for each pivot are the following (by using elementary matrices):

[1/4 0 0; −3/4 1 0; −11/4 0 1] · [4 3 3 6; 3 2 1 8; 11 8 7 20] = [1 3/4 3/4 3/2; 0 −1/4 −5/4 7/2; 0 −1/4 −5/4 7/2],
[1 3 0; 0 −4 0; 0 −1 1] · [1 3/4 3/4 3/2; 0 −1/4 −5/4 7/2; 0 −1/4 −5/4 7/2] = [1 0 −3 12; 0 1 5 −14; 0 0 0 0].

The matrix identity from the initial matrix up to the final matrix (by using elementary matrices) is:

[1 3 0; 0 −4 0; 0 −1 1] · [1/4 0 0; −3/4 1 0; −11/4 0 1] · [4 3 3 6; 3 2 1 8; 11 8 7 20] = [1 0 −3 12; 0 1 5 −14; 0 0 0 0].

By using

[1 3 0; 0 −4 0; 0 −1 1] · [1/4 0 0; −3/4 1 0; −11/4 0 1] = [−2 3 0; 3 −4 0; −2 −1 1] and [−2 3 0; 3 −4 0; −2 −1 1]⁻¹ = [4 3 0; 3 2 0; 11 8 1],

the attached LU decomposition is:

[4 3 0; 3 2 0; 11 8 1] · [1 0 −3 12; 0 1 5 −14; 0 0 0 0] = [4 3 3 6; 3 2 1 8; 11 8 7 20].

(3) The pivot method applied to this system is:
      | x₁    x₂     x₃   |  b
  e₁  |  1#    1      1   | 10
  e₂  |  2     1      1   | 16
  e₃  |  3     2      2   | 24

  x₁  |  1     1      1   | 10
  e₂  |  0    −1#    −1   | −4
  e₃  |  0    −1     −1   | −6

  x₁  |  1     0      0   |  6
  x₂  |  0     1      1   |  4
  e₃  |  0     0      0   | −2

The last line reads 0 = −2, so the system is incompatible (no solution).
In matrix form:

[1 0 0; −2 1 0; −3 0 1] · [1 1 1 10; 2 1 1 16; 3 2 2 24] = [1 1 1 10; 0 −1 −1 −4; 0 −1 −1 −6],
[1 1 0; 0 −1 0; 0 −1 1] · [1 1 1 10; 0 −1 −1 −4; 0 −1 −1 −6] = [1 0 0 6; 0 1 1 4; 0 0 0 −2].

The matrix identity from the initial matrix up to the final matrix is:

[1 1 0; 0 −1 0; 0 −1 1] · [1 0 0; −2 1 0; −3 0 1] · [1 1 1 10; 2 1 1 16; 3 2 2 24] = [1 0 0 6; 0 1 1 4; 0 0 0 −2].

By using

[1 1 0; 0 −1 0; 0 −1 1] · [1 0 0; −2 1 0; −3 0 1] = [−1 1 0; 2 −1 0; −1 −1 1] and [−1 1 0; 2 −1 0; −1 −1 1]⁻¹ = [1 1 0; 2 1 0; 3 2 1],

the LU decomposition is:

[1 1 0; 2 1 0; 3 2 1] · [1 0 0 6; 0 1 1 4; 0 0 0 −2] = [1 1 1 10; 2 1 1 16; 3 2 2 24].
The matrix inverse may be obtained by using the same procedure, with the corresponding tables in the form (A | I).

1.6.3. Example. Find the inverse of the matrix A = [2 3 1; 1 2 1; 1 1 2].

      | a₁    a₂     a₃   | e₁    e₂    e₃
  e₁  |  2#    3      1   |  1     0     0
  e₂  |  1     2      1   |  0     1     0
  e₃  |  1     1      2   |  0     0     1

  a₁  |  1    3/2    1/2  | 1/2    0     0
  e₂  |  0    1/2#   1/2  |−1/2    1     0
  e₃  |  0   −1/2    3/2  |−1/2    0     1

  a₁  |  1     0     −1   |  2    −3     0
  a₂  |  0     1      1   | −1     2     0
  e₃  |  0     0      2#  | −1     1     1

  a₁  |  1     0      0   | 3/2  −5/2   1/2
  a₂  |  0     1      0   |−1/2   3/2  −1/2
  a₃  |  0     0      1   |−1/2   1/2   1/2

From the last three columns and the last three lines we read the inverse matrix:

[2 3 1; 1 2 1; 1 1 2]⁻¹ = [3/2 −5/2 1/2; −1/2 3/2 −1/2; −1/2 1/2 1/2].
In matrix form, the pivot operations are (with elementary matrices):

[1/2 0 0; −1/2 1 0; −1/2 0 1] · [2 3 1 1 0 0; 1 2 1 0 1 0; 1 1 2 0 0 1] = [1 3/2 1/2 1/2 0 0; 0 1/2 1/2 −1/2 1 0; 0 −1/2 3/2 −1/2 0 1],
[1 −3 0; 0 2 0; 0 1 1] · [1 3/2 1/2 1/2 0 0; 0 1/2 1/2 −1/2 1 0; 0 −1/2 3/2 −1/2 0 1] = [1 0 −1 2 −3 0; 0 1 1 −1 2 0; 0 0 2 −1 1 1],
[1 0 1/2; 0 1 −1/2; 0 0 1/2] · [1 0 −1 2 −3 0; 0 1 1 −1 2 0; 0 0 2 −1 1 1] = [1 0 0 3/2 −5/2 1/2; 0 1 0 −1/2 3/2 −1/2; 0 0 1 −1/2 1/2 1/2].

The LU decomposition in this situation is:

[2 3 1; 1 2 1; 1 1 2] · [1 0 0 3/2 −5/2 1/2; 0 1 0 −1/2 3/2 −1/2; 0 0 1 −1/2 1/2 1/2] = [2 3 1 1 0 0; 1 2 1 0 1 0; 1 1 2 0 0 1].

Verification:

[2 3 1; 1 2 1; 1 1 2] · [3/2 −5/2 1/2; −1/2 3/2 −1/2; −1/2 1/2 1/2] = [1 0 0; 0 1 0; 0 0 1].
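The (A | I) procedure above is easy to automate; a minimal sketch (not from the notes), again assuming nonzero diagonal pivots, as happens for this example:

```python
from fractions import Fraction

def inverse(a):
    """Gauss-Jordan on (A | I); assumes every diagonal pivot is nonzero."""
    n = len(a)
    m = [[Fraction(a[i][j]) for j in range(n)] +
         [Fraction(1 if j == i else 0) for j in range(n)] for i in range(n)]
    for i in range(n):
        p = m[i][i]
        m[i] = [x / p for x in m[i]]
        for k in range(n):
            if k != i:
                m[k] = [x - m[k][i] * y for x, y in zip(m[k], m[i])]
    return [row[n:] for row in m]       # the last n columns hold A^-1

A_inv = inverse([[2, 3, 1], [1, 2, 1], [1, 1, 2]])
print(A_inv[0])     # [3/2, -5/2, 1/2], the first line of the inverse read above
```

The three successive pivots of the function are exactly the pivots 2, 1/2 and 2 marked in the tables of Example 1.6.3.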
By using the pivot method and similar tables for calculations, most of the required Linear Algebra calculations may be obtained (as well as obtained with a computer), at the same time keeping track of the interpretations in terms of bases. The theoretical framework is given by:

1.6.5. Theorem (The substitution Lemma). Consider a finite-type vector space (V, K), V₁ the subspace generated by the ordered linear independent set B = (e₁, …, e_m), and v = Σ_{i=1}^m α_i e_i ∈ V₁. If α_j ≠ 0 then B′ = (e₁, …, e_{j−1}, v, e_{j+1}, …, e_m) is also an ordered linear independent set which generates V₁, and the connection between the old coordinates [x]_B = (β₁, …, β_m)ᵀ and the new coordinates [x]_{B′} = (β′₁, …, β′_m)ᵀ of an arbitrary vector x ∈ V₁ is given by:

β′_j = β_j/α_j,  β′_i = β_i − α_i β_j/α_j for i ≠ j.

Proof. As α_j ≠ 0, from v = Σ_{i=1}^m α_i e_i we get

e_j = (1/α_j) v − Σ_{i=1, i≠j}^m (α_i/α_j) e_i.

B′ is linear independent: if Σ_{i=1, i≠j}^m γ_i e_i + γ_j v = 0, then, replacing v, we obtain Σ_{i=1, i≠j}^m (γ_i + γ_j α_i) e_i + γ_j α_j e_j = 0; since B is linear independent, γ_j α_j = 0 (so γ_j = 0, because α_j ≠ 0) and then γ_i + γ_j α_i = 0, i ≠ j, so all the scalars are zero ⇒ B′ is a basis in V₁.
Moreover,

x = Σ_{i=1}^m β_i e_i = Σ_{i=1, i≠j}^m β_i e_i + β_j e_j = Σ_{i=1, i≠j}^m β_i e_i + β_j ((1/α_j) v − Σ_{i=1, i≠j}^m (α_i/α_j) e_i) = (β_j/α_j) v + Σ_{i=1, i≠j}^m (β_i − α_i β_j/α_j) e_i,

which gives the announced coordinates. □
1.6.6. Example. Study the nature of the vector set {v₁, v₂, v₃, v₄}, with vectors:

v₁ = (1, 0, −1)ᵀ, v₂ = (2, 1, 3)ᵀ, v₃ = (1, 1, 1)ᵀ, v₄ = (−1, 2, 0)ᵀ.

Consider the vector equation α₁v₁ + α₂v₂ + α₃v₃ + α₄v₄ = 0, with unknown scalars α₁, α₂, α₃, α₄.
By replacing in the vector equation the coordinates of the vectors (in the standard basis) we get:

α₁[v₁]_E + α₂[v₂]_E + α₃[v₃]_E + α₄[v₄]_E = [0]_E,

which is the same with the linear homogeneous system (with unknowns α₁, α₂, α₃, α₄):

α₁ + 2α₂ + α₃ − α₄ = 0
α₂ + α₃ + 2α₄ = 0
−α₁ + 3α₂ + α₃ = 0

Use the pivot method to solve the system and to keep the bases interpretations:

      | v₁    v₂     v₃     v₄   | e₁    e₂    e₃
  e₁  |  1#    2      1     −1   |  1     0     0
  e₂  |  0     1      1      2   |  0     1     0
  e₃  | −1     3      1      0   |  0     0     1

  v₁  |  1     2      1     −1   |  1     0     0
  e₂  |  0     1#     1      2   |  0     1     0
  e₃  |  0     5      2     −1   |  1     0     1

  v₁  |  1     0     −1     −5   |  1    −2     0
  v₂  |  0     1      1      2   |  0     1     0
  e₃  |  0     0     −3#   −11   |  1    −5     1

  v₁  |  1     0      0    −4/3  | 2/3  −1/3  −1/3
  v₂  |  0     1      0    −5/3  | 1/3  −2/3   1/3
  v₃  |  0     0      1    11/3  |−1/3   5/3  −1/3

In matrix form, the pivot operations are the following (with elementary matrices):

[1 0 0; 0 1 0; 1 0 1] · [1 2 1 −1 1 0 0; 0 1 1 2 0 1 0; −1 3 1 0 0 0 1] = [1 2 1 −1 1 0 0; 0 1 1 2 0 1 0; 0 5 2 −1 1 0 1],
[1 −2 0; 0 1 0; 0 −5 1] · [1 2 1 −1 1 0 0; 0 1 1 2 0 1 0; 0 5 2 −1 1 0 1] = [1 0 −1 −5 1 −2 0; 0 1 1 2 0 1 0; 0 0 −3 −11 1 −5 1],
[1 0 −1/3; 0 1 1/3; 0 0 −1/3] · [1 0 −1 −5 1 −2 0; 0 1 1 2 0 1 0; 0 0 −3 −11 1 −5 1] = [1 0 0 −4/3 2/3 −1/3 −1/3; 0 1 0 −5/3 1/3 −2/3 1/3; 0 0 1 11/3 −1/3 5/3 −1/3].

Verification:

[1 2 1; 0 1 1; −1 3 1] · [2/3 −1/3 −1/3; 1/3 −2/3 1/3; −1/3 5/3 −1/3] = [1 0 0; 0 1 0; 0 0 1].

From the last table, v₄ = −(4/3)v₁ − (5/3)v₂ + (11/3)v₃, that is, the dependence relation (4/3)v₁ + (5/3)v₂ − (11/3)v₃ + v₄ = 0 takes place.
Conclusion: the set {v₁, v₂, v₃, v₄} is linear dependent, with the subset {v₁, v₂, v₃} linear independent.
1.6.10. Remark. The current presentation of the pivot method is far from complete:
the situation when the pivot cannot be taken from the main diagonal has not been covered;
numerical aspects (and approximate results) of the procedure are not covered;
obtaining the results by using software products and big/huge examples are not covered;
other applications (in other disciplines) are not covered.
We hope that other texts will be able to cover these aspects.
1.6.11. Example (Economic Theory Application, adapted from [8], page 108, Example 1). An American firm has a gross profit in amount of 100000 USD. The firm accepts to donate 10% from the net profit to The Red Cross. The firm has to pay a state tax of 5% from the profit (after donation) and a federal tax of 40% from the profit (after the donation and after the state tax).
Which are the taxes and how much does the firm donate?
What is the real value of the donation?
1.6.12. Solution. Denote by D the donation, by S the state tax and by F the federal tax.
The net profit is 100000 − S − F;
D = (1/10)(100000 − S − F) ⇒ 10D + S + F = 100000;
S = (5/100)(100000 − D) ⇒ D + 20S = 100000;
F = (40/100)(100000 − D − S) ⇒ 2D + 2S + 5F = 200000.
Solve the linear system:

10D + S + F = 100000
D + 20S = 100000
2D + 2S + 5F = 200000

the solution is:

F = 11400000/319 ≈ 35736.68;
S = 1500000/319 ≈ 4702.19;
D = 1900000/319 ≈ 5956.11.

When there is no donation, the taxes would be bigger:
D = 0 ⇒ S = (5/100) · 100000 = 5000, and F = (40/100)(100000 − 5000) = 38000.
The difference between the taxes without donation and the taxes with donation is:

5000 + 38000 − 1500000/319 − 11400000/319 = 817000/319 ≈ 2561.13.

The real value of the donation is the difference between the donation made and the tax excess when the donation is absent:

1900000/319 − 817000/319 = 1083000/319 ≈ 3394.98.
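The tax system can be solved exactly with the same diagonal pivoting used throughout this section (the code is illustrative, not part of the original notes):

```python
from fractions import Fraction

# 10D + S + F = 100000, D + 20S = 100000, 2D + 2S + 5F = 200000
aug = [[10, 1, 1, 100000], [1, 20, 0, 100000], [2, 2, 5, 200000]]
a = [[Fraction(x) for x in row] for row in aug]
for i in range(3):                       # diagonal pivots (all nonzero here)
    a[i] = [x / a[i][i] for x in a[i]]
    for k in range(3):
        if k != i:
            a[k] = [x - a[k][i] * y for x, y in zip(a[k], a[i])]
D, S, F = (row[-1] for row in a)
print(D, S, F)   # 1900000/319 1500000/319 11400000/319
```

Exact fractions avoid the rounding that the decimal approximations above would otherwise introduce.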
55
V is a vector
subspace, then:
(1) (V0 ; K) is also of nitetype;
(2) 8B0 basis for V0 , 9B basis for V such that B0
dim V;
i2I
Vi we
8i 2 I, 0 2 Vi ) 0 2 V0 () V0 6= ;);
1. x, y 2 V0 ) 8i 2 I x, y 2 Vi ) 8i 2 I, x + y 2 Vi ) x + y 2 V0
2. x 2 V0 ,
2 K ) 8i 2 I, x 2 Vi and
2 K ) 8i 2 I, x 2 Vi ) x 2 V0 .
1.7.3. Example. Consider the vectors: v₁ = (2, 2, 2), v₂ = (2, 2, 1), v₃ = (−3, 3, 2), v₄ = (2, 5, 1) and the subspaces V₁ = span({v₁, v₂}), V₂ = span({v₃, v₄}).
Describe the intersection V₁ ∩ V₂.
1.7.4. Solution. v ∈ V₁ ∩ V₂ ⇒ v is simultaneously a linear combination of v₁, v₂ and of v₃, v₄, so we get the vector relation:

v = α₁v₁ + α₂v₂ = α₃v₃ + α₄v₄, so that [v]_E = α₁[v₁]_E + α₂[v₂]_E = α₃[v₃]_E + α₄[v₄]_E.

By replacing [v₁]_E = (2, 2, 2)ᵀ, [v₂]_E = (2, 2, 1)ᵀ, [v₃]_E = (−3, 3, 2)ᵀ, [v₄]_E = (2, 5, 1)ᵀ, we obtain the system:

2α₁ + 2α₂ = −3α₃ + 2α₄
2α₁ + 2α₂ = 3α₃ + 5α₄
2α₁ + α₂ = 2α₃ + α₄

with the solution (α₁, α₂, α₃, α₄) = (7λ/2, −7λ, λ, −2λ), λ ∈ R.
We get [v]_E = (7λ/2)[v₁]_E − 7λ[v₂]_E = λ[v₃]_E − 2λ[v₄]_E = (−7λ, −7λ, 0)ᵀ; for example, for λ = −1/2,

v = −(1/2)(−3, 3, 2) + (2, 5, 1) = (7/2, 7/2, 0).

So V₁ ∩ V₂ = {λ(1, 1, 0) : λ ∈ R}.
Remarks:
The system {2α₁ + 2α₂ = −3, 2α₁ + 2α₂ = 3, 2α₁ + α₂ = 2} has no solution, so that v₃ ∉ V₁.
The system {2α₁ + 2α₂ = 2, 2α₁ + 2α₂ = 5, 2α₁ + α₂ = 1} has no solution, so that v₄ ∉ V₁.
The system {−3α₃ + 2α₄ = 2, 3α₃ + 5α₄ = 2, 2α₃ + α₄ = 2} has no solution, so that v₁ ∉ V₂.
The system {−3α₃ + 2α₄ = 2, 3α₃ + 5α₄ = 2, 2α₃ + α₄ = 1} has no solution, so that v₂ ∉ V₂.
Because there is no basis mentioned, we assume that the initial representation is done by using the standard ordered basis E = (e₁, e₂, e₃), where e₁ = (1, 0, 0), e₂ = (0, 1, 0), e₃ = (0, 0, 1).
In the ordered basis E the objects are represented as following:

[e₁]_E = (1, 0, 0)ᵀ, [e₂]_E = (0, 1, 0)ᵀ, [e₃]_E = (0, 0, 1)ᵀ, [v₁]_E = (2, 2, 2)ᵀ, [v₂]_E = (2, 2, 1)ᵀ, [v₃]_E = (−3, 3, 2)ᵀ, [v₄]_E = (2, 5, 1)ᵀ, [v]_E = (7/2, 7/2, 0)ᵀ.

The set B₁ = (v₁, v₂) is linear independent (because the matrix [2 2; 2 2; 2 1] has rank 2) and generates V₁, so that B₁ is a basis for V₁, while the dimension of V₁ is 2, and the objects within V₁ are represented like this:

[v₁]_{B₁} = (1, 0)ᵀ, [v₂]_{B₁} = (0, 1)ᵀ, [v]_{B₁} = (−7/4, 7/2)ᵀ (v may be represented in B₁ because it is from the intersection).

The set B₂ = (v₃, v₄) is a basis in V₂, the dimension of V₂ is 2, and the objects from V₂ are represented as:

[v₃]_{B₂} = (1, 0)ᵀ, [v₄]_{B₂} = (0, 1)ᵀ, [v]_{B₂} = (−1/2, 1)ᵀ (v may be represented in B₂ because it is from the intersection).

In the picture there are the following objects (in terms of both Linear Algebra and Analytic Geometry):
The points: O(0, 0, 0), P1(2, 2, 2), P2(2, 2, 1), P3(−3, 3, 2), P4(2, 5, 1);
The position vectors: OP1, OP2, OP3, OP4 (the vectors v₁, v₂, v₃, v₄);
The planes: (PL1): −x + y = 0 (the subspace V₁), (PL2): x − y + 3z = 0 (the subspace V₂);
the intersecting line of the planes (PL1) and (PL2), (PL12): x = (0, 0, 0) + s(1, 1, 0) (the subspace V₁ ∩ V₂ = {λ(1, 1, 0) : λ ∈ R}, with dimension 1). The picture was obtained with the software product
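The membership of the intersection vector in both subspaces can be checked directly (an illustrative sketch, not part of the notes):

```python
from fractions import Fraction as F

v1, v2 = (2, 2, 2), (2, 2, 1)
v3, v4 = (-3, 3, 2), (2, 5, 1)

def comb(c1, u1, c2, u2):
    """Linear combination c1*u1 + c2*u2, componentwise."""
    return tuple(c1 * a + c2 * b for a, b in zip(u1, u2))

v = comb(F(-7, 4), v1, F(7, 2), v2)     # v written in V1
print(v)                                 # (7/2, 7/2, 0)
print(comb(F(-1, 2), v3, F(1), v4))      # the same vector, written in V2
```

Both combinations produce (7/2, 7/2, 0), a nonzero multiple of (1, 1, 0), confirming the computed intersection.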
1.7.5. Definition. The sum of the subspaces (V_i)_{i=1,…,k} of (V, K) is

Σ_{i=1}^k V_i := { Σ_{i=1}^k v_i : v_i ∈ V_i, ∀i = 1, …, k }.

1.7.6. Proposition. The sum of subspaces is a vector subspace.
Proof. x₁, x₂ ∈ Σ_{i=1}^k V_i ⇒ x_j = Σ_{i=1}^k v_i^j, with v_i^j ∈ V_i (j = 1, 2) ⇒ x₁ + x₂ = Σ_{i=1}^k (v_i¹ + v_i²) ∈ Σ_{i=1}^k V_i. Similarly, for α ∈ K, αx₁ = Σ_{i=1}^k (αv_i¹) ∈ Σ_{i=1}^k V_i. □

1.7.7. Proposition. Σ_{i=1}^k V_i = span(∪_{i=1}^k V_i) and it is the smallest subspace which contains the union (in terms of inclusion).
Proof. Consider x ∈ Σ_{i=1}^k V_i ⇒ ∀i, ∃v_i ∈ V_i such that x = Σ_{i=1}^k v_i ⇒ x ∈ span(∪_{i=1}^k V_i).
Conversely, x ∈ span(∪_{i=1}^k V_i) ⇒ ∃m ∈ N*, ∃λ_j ∈ K, j = 1, …, m, ∃u_{i_j} ∈ V_{i_j} ⊆ ∪_{i=1}^k V_i, j = 1, …, m, such that x = Σ_{j=1}^m λ_j u_{i_j}; grouping the terms by the subspace they belong to, x ∈ Σ_{i=1}^k V_i [for two different indices j₁, j₂ (or more) it may happen that the indices i_{j₁} and i_{j₂} are the same and in this situation λ_{j₁} u_{i_{j₁}} + λ_{j₂} u_{i_{j₂}} ∈ V_{i_{j₁}}]. □
1.7.8. Proposition. ∀i = 1, …, k: dim V_i ≤ dim Σ_{i=1}^k V_i ≤ Σ_{i=1}^k dim(V_i).
Proof. V_i ⊆ Σ_{i=1}^k V_i ⇒ ∀i = 1, …, k, dim V_i ≤ dim Σ_{i=1}^k V_i.
For the second inequality, consider a basis for each space and their union; the union generates the set Σ_{i=1}^k V_i and the number of its vectors is at most Σ_{i=1}^k dim(V_i); because any basis has at most as many vectors as a generating set, we get that dim Σ_{i=1}^k V_i ≤ Σ_{i=1}^k dim(V_i). □
1.7.9. Proposition (Equivalent definitions for the direct sum of two subspaces). Consider two subspaces V₁ and V₂, and their sum V₁ + V₂, in the vector space (V, K). The following statements are equivalent:
1. Each vector v ∈ V₁ + V₂ has a unique decomposition v = v₁ + v₂, with v₁ ∈ V₁ and v₂ ∈ V₂;
2. V₁ ∩ V₂ = {0_V};
3. For any basis B₁ of V₁ and for any basis B₂ of V₂, the set B₁ ∪ B₂ is a basis for V₁ + V₂.
Proof. (non 2. ⇒ non 1.) Assume V₁ ∩ V₂ ≠ {0_V} (∃x ∈ V₁ ∩ V₂, x ≠ 0_V).
Consider v ∈ V₁ + V₂ and x ∈ V₁ ∩ V₂, x ≠ 0_V.
v ∈ V₁ + V₂ ⇒ ∃v₁ ∈ V₁, v₂ ∈ V₂, such that v = v₁ + v₂.
But since x ∈ V₁ ∩ V₂, v = (v₁ − x) + (v₂ + x) is another decomposition, so the decomposition is not unique.
So (non 2. ⇒ non 1.), which means (1. ⇒ 2.).
(2. ⇒ 1.) By contradiction:
If the decomposition is not unique, then ∃u₁, v₁ ∈ V₁, ∃u₂, v₂ ∈ V₂, such that u₁ ≠ v₁ or u₂ ≠ v₂ and v = v₁ + v₂ = u₁ + u₂.
Then 0_V ≠ v₁ − u₁ = u₂ − v₂ ∈ V₁ ∩ V₂ ⇒ (∃x (= v₁ − u₁ = u₂ − v₂) ∈ V₁ ∩ V₂, x ≠ 0_V), a contradiction.
(2. ⇒ 3.) Consider bases (e_i)_{i=1,…,k₁} of V₁ and (f_j)_{j=1,…,k₂} of V₂ in each subspace, and assume that V₁ ∩ V₂ = {0_V}.
The union set (e_i)_{i=1,…,k₁} ∪ (f_j)_{j=1,…,k₂} is a basis for V₁ + V₂:
The set (e_i)_{i=1,…,k₁} ∪ (f_j)_{j=1,…,k₂} generates V₁ + V₂, because span((e_i)_{i=1,…,k₁}) = V₁ and span((f_j)_{j=1,…,k₂}) = V₂ ⇒ span((e_i)_{i=1,…,k₁} ∪ (f_j)_{j=1,…,k₂}) = V₁ + V₂.
The set is also linear independent: if Σ_{i=1}^{k₁} α_i e_i + Σ_{j=1}^{k₂} β_j f_j = 0_V, then Σ_{i=1}^{k₁} α_i e_i = −Σ_{j=1}^{k₂} β_j f_j ∈ V₁ ∩ V₂ = {0_V} ⇒ Σ_{i=1}^{k₁} α_i e_i = 0_V ⇒ ∀i = 1, …, k₁, α_i = 0, and Σ_{j=1}^{k₂} β_j f_j = 0_V ⇒ ∀j = 1, …, k₂, β_j = 0.
(3. ⇒ 2.) If B₁ ∩ B₂ ≠ ∅, then |B₁ ∪ B₂| < |B₁| + |B₂| ⇒ dim(V₁ + V₂) < dim V₁ + dim V₂, a contradiction, so B₁ ∩ B₂ = ∅.
If B₁ ∩ B₂ = ∅ but the set B₁ ∪ B₂ is not linear independent, then dim(V₁ + V₂) < |B₁ ∪ B₂| = |B₁| + |B₂|, which is again a contradiction.
So for any basis B₁ of V₁, and for any basis B₂ of V₂, the set B₁ ∪ B₂ is a basis for V₁ + V₂. □

1.7.10. Definition. The sum V₁ + V₂ of two subspaces V₁ and V₂ is called direct sum when any condition from the proposition 1.7.9 takes place. The direct sum of two subspaces is denoted by V₁ ⊕ V₂.
[the direct sum concept may be seen as a generalization of linear independence; the direct sum may be seen as the linear independence of a set of subspaces]
1.7.11. Denition. Consider the subspace V2 in (V; K). If V1 is another subspace of (V; K) such that
V = V1
6 The
terminology used is dierent for various existing schools. The AngloSaxon school uses the term "complementary
subspaces", while the French school uses the term "sousespaces supplmentaires"; moreover, the dierences remain over the
61
V2 .
1.7.12. Remark. Any element $x \in V_1 \cap V_2$ can be written as $x = \sum_{i=1}^{k_1} \alpha_i e_i = \sum_{j=1}^{k_2} \beta_j f_j$, so that
$$\sum_{i=1}^{k_1} \alpha_i e_i - \sum_{j=1}^{k_2} \beta_j f_j = 0,$$
and since the above expression is a null linear combination of the set $(e_i)_{i=\overline{1,k_1}} \cup (f_j)_{j=\overline{1,k_2}}$, which is a basis, all the scalars are zero, which means that any element of the intersection is null, which in turn means that the sum of the subspaces is direct.
1.7.13. Remark. It may be observed from the proof that, since a linearly independent set may be completed in many ways up to a basis, the complement (of a proper subspace) is not unique.^7
1.7.14. Example. If $V_1 = \{\alpha(1,0);\ \alpha \in \mathbb{R}\}$, we may complete the basis $\{(1,0)\}$ of $V_1$ in $\mathbb{R}^2$ with each of the vectors $(0,1)$, $(1,1)$, $(1,-1)$, so that each of the subspaces $V_2 = \{\alpha(0,1);\ \alpha \in \mathbb{R}\}$, $V_3 = \{\alpha(1,1);\ \alpha \in \mathbb{R}\}$, $V_4 = \{\alpha(1,-1);\ \alpha \in \mathbb{R}\}$ is a complement of $V_1$ in $\mathbb{R}^2$: $\mathbb{R}^2 = V_1 \oplus V_2 = V_1 \oplus V_3 = V_1 \oplus V_4$.
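A minimal numerical sketch of this example (the helper name is ours): $\operatorname{span}\{v\}$ is a complement of $V_1 = \operatorname{span}\{(1,0)\}$ in $\mathbb{R}^2$ exactly when $\{(1,0), v\}$ is linearly independent, i.e. when the $2 \times 2$ determinant is nonzero.

```python
# Sketch of Example 1.7.14: span{(1,0)} + span{v} is a direct sum equal to
# R^2 exactly when det[(1,0); v] != 0 (the two vectors are independent).

def is_complement_of_e1(v):
    """True when span{v} is a complement of span{(1,0)} in R^2."""
    e1 = (1.0, 0.0)
    det = e1[0] * v[1] - e1[1] * v[0]  # 2x2 determinant
    return det != 0.0

# (0,1), (1,1), (1,-1) all work; (2,0) lies inside V1 itself and fails.
print([is_complement_of_e1(v) for v in [(0, 1), (1, 1), (1, -1), (2, 0)]])
# [True, True, True, False]
```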
1.7.15. Remark. If $V_1$ is a subspace in $V$ such that $\dim V_1 = k$ and $\dim V = n$, then any complement of $V_1$ in $V$ has dimension $n - k$.
1.7.16. Theorem. [Equivalent definitions for the direct sum of more than two subspaces]
Consider in $(V;K)$ $k$ subspaces $(V_i)_{i=\overline{1,k}}$ such that $V = \sum_{i=1}^{k} V_i$. The following statements are equivalent:
(1) each $v \in V$ has a unique decomposition $v = \sum_{i=1}^{k} v_i$ with $v_i \in V_i$, $i = \overline{1,k}$;
(2) $\forall j = \overline{1,k}$, $V_j \cap \sum_{i=1, i \neq j}^{k} V_i = \{0\}$;
(3) for any bases $B_i$ of $V_i$, $i = \overline{1,k}$, the union $\bigcup_{i=1}^{k} B_i$ is linearly independent;
(4) $\sum_{i=1}^{k} \dim V_i = \dim V$.
^7 The vector space $(V;K)$ has two improper subspaces: $\{0\}$ (the null subspace) and $V$ (the whole space). Each improper subspace has a unique complement, namely the other improper subspace.
$(1. \Rightarrow 2.)$
Suppose by contradiction that there is an index $j \in \overline{1,k}$ such that $V_j \cap \sum_{i=1, i \neq j}^{k} V_i \neq \{0\}$.
Then $\exists x \in \Big(V_j \cap \sum_{i=1, i \neq j}^{k} V_i\Big) \setminus \{0\}$, i.e. $x \in V_j$, $x \in \sum_{i=1, i \neq j}^{k} V_i$ and $x \neq 0$.
Then $x = \sum_{i=1, i \neq j}^{k} v_i'$ with $v_i' \in V_i$, $i \neq j$, and for an arbitrary vector $v \in \sum_{i=1}^{k} V_i$ we have:
$$v = \sum_{i=1}^{k} v_i = \sum_{i=1, i \neq j}^{k} v_i + v_j = \Big(\sum_{i=1, i \neq j}^{k} v_i - x\Big) + (v_j + x) = \sum_{i=1, i \neq j}^{k} (v_i - v_i') + (v_j + x),$$
which is another decomposition for $v$ (note that $v_j + x \in V_j$ because $x \in V_j$), distinct from the first one because $x \neq 0$; this contradicts the unicity of the decomposition.
$(2. \Rightarrow 1.)$
If an element has two decompositions, $\sum_{i=1}^{k} v_i = \sum_{i=1}^{k} v_i'$, then $\sum_{i=1}^{k} (v_i - v_i') = 0$;
if some $v_j - v_j' \neq 0$, then $v_j - v_j' = \sum_{i=1, i \neq j}^{k} (v_i' - v_i) \neq 0$, so $V_j \cap \sum_{i=1, i \neq j}^{k} V_i \neq \{0\}$, a contradiction; so $v_i = v_i'$ for all $i = \overline{1,k}$ and the decomposition is unique.
$(2. \Rightarrow 3.)$
Consider a basis $B_i = (e^i_l)_{l=\overline{1,k_i}}$ for each subspace $V_i$. If $\sum_{i=1}^{k} \sum_{l=1}^{k_i} \alpha^i_l e^i_l = 0$, then for each $j$ the component $\sum_{l=1}^{k_j} \alpha^j_l e^j_l \in V_j$ equals $-\sum_{i=1, i \neq j}^{k} \sum_{l=1}^{k_i} \alpha^i_l e^i_l \in \sum_{i \neq j} V_i$, so it belongs to $V_j \cap \sum_{i=1, i \neq j}^{k} V_i = \{0\}$; since $B_j$ is a basis, all the scalars $\alpha^j_l$ are zero. So the union $\bigcup_{i=1}^{k} B_i$ is linearly independent, which is 3.
$(3. \Rightarrow 2.)$
The union $\bigcup_{i=1}^{k} B_i$ is assumed to be linearly independent. Suppose by contradiction that for some $j_0$ there is $x \neq 0$ with $x \in V_{j_0} \cap \sum_{j=1, j \neq j_0}^{k} V_j$. Expanding $x$ in the bases,
$$x = \sum_{l=1}^{k_{j_0}} \alpha^{j_0}_l e^{j_0}_l = \sum_{j=1, j \neq j_0}^{k} \sum_{l=1}^{k_j} \alpha^{j}_l e^{j}_l \Rightarrow \sum_{l=1}^{k_{j_0}} \alpha^{j_0}_l e^{j_0}_l - \sum_{j=1, j \neq j_0}^{k} \sum_{l=1}^{k_j} \alpha^{j}_l e^{j}_l = 0,$$
a nontrivial null linear combination of $\bigcup_{i=1}^{k} B_i$ (nontrivial because $x \neq 0$), contradicting its linear independence.
$(3. \Rightarrow 4.)$
If for $i = \overline{1,k}$, $\forall B_i$ basis in $V_i$, the set $\bigcup_{i=1}^{k} B_i$ is linearly independent, then
$$\forall j = \overline{1,k}, \quad B_j \cap \Big(\bigcup_{i=1, i \neq j}^{k} B_i\Big) = \emptyset,$$
because otherwise, by altering the basis $B_j$ (such that the above condition is satisfied), we would be able to obtain alternative unions which are linearly independent but with different numbers of elements, which contradicts the fact that each basis has the same number of elements.
It follows that all the possible intersections of the sets $B_i$ are void, which means that $\Big|\bigcup_{i=1}^{k} B_i\Big| = \sum_{i=1}^{k} |B_i|$, and since $\bigcup_{i=1}^{k} B_i$ is a linearly independent generating set for $\sum_{i=1}^{k} V_i = V$, we get $\dim V = \dim \sum_{i=1}^{k} V_i = \sum_{i=1}^{k} \dim V_i$.
$(4. \Rightarrow 3.)$
The union $\bigcup_{i=1}^{k} B_i$ generates $\sum_{i=1}^{k} V_i$; if the set $\bigcup_{i=1}^{k} B_i$ were not linearly independent, then
$$\dim \sum_{i=1}^{k} V_i < \Big|\bigcup_{i=1}^{k} B_i\Big| \leq \sum_{i=1}^{k} |B_i| = \sum_{i=1}^{k} \dim V_i = \dim V,$$
and since $\sum_{i=1}^{k} V_i = V$ this is a contradiction; so $\bigcup_{i=1}^{k} B_i$ is linearly independent. $\blacksquare$
1.7.17. Definition. The sum of a set of subspaces $(V_i)_{i=\overline{1,k}}$ is called direct when any condition from Theorem 1.7.16 is satisfied.
The notation used for this special summation is $\bigoplus_{i=1}^{k} V_i$ (instead of $\sum_{i=1}^{k} V_i$, when the sum of the subspaces is direct).
1.7.18. Theorem. (The Grassmann Formula) For any two subspaces $V_1$ and $V_2$ of $(V;K)$ we have:
$$\dim V_1 + \dim V_2 = \dim(V_1 + V_2) + \dim(V_1 \cap V_2).$$
Proof. Consider
$V_1^0$ a complement of $V_1 \cap V_2$ in $V_1$: $V_1 = (V_1 \cap V_2) \oplus V_1^0$;
$V_2^0$ a complement of $V_1 \cap V_2$ in $V_2$: $V_2 = (V_1 \cap V_2) \oplus V_2^0$.
It follows (using $(V_1 \cap V_2) \cap V_1^0 = \{0\}$ with $V_1^0 \subseteq V_1$, respectively $(V_1 \cap V_2) \cap V_2^0 = \{0\}$ with $V_2^0 \subseteq V_2$) that
$$V_1^0 \cap V_2 = (V_1 \cap V_1^0) \cap V_2 = (V_1 \cap V_2) \cap V_1^0 = \{0\}, \qquad V_2^0 \cap V_1 = (V_2 \cap V_2^0) \cap V_1 = (V_1 \cap V_2) \cap V_2^0 = \{0\}.$$
We show that the sum from the right-hand part is direct: $V_1 + V_2 = (V_1 \cap V_2) + V_1^0 + V_2^0 = (V_1 \cap V_2) \oplus V_1^0 \oplus V_2^0$:
if $x \in (V_1 \cap V_2) \cap (V_1^0 + V_2^0)$, write $x = u_1 + u_2$ with $u_1 \in V_1^0 \subseteq V_1$ and $u_2 \in V_2^0 \subseteq V_2$; then $u_1 = x - u_2 \in V_2$, so $u_1 \in V_1 \cap V_2 \cap V_1^0 = \{0\} \Rightarrow u_1 = 0$; similarly, $u_2 = 0$ and so $x = 0$;
$V_1^0 \cap ((V_1 \cap V_2) + V_2^0) = V_1^0 \cap V_2 = \{0\}$;
$V_2^0 \cap ((V_1 \cap V_2) + V_1^0) = V_2^0 \cap V_1 = \{0\}$.
So the sum $(V_1 \cap V_2) + V_1^0 + V_2^0$ is direct, which means that the following relation between dimensions takes place:
$$\dim(V_1 + V_2) = \dim(V_1 \cap V_2) + \dim V_1^0 + \dim V_2^0;$$
because $\dim V_i^0 = \dim V_i - \dim(V_1 \cap V_2)$, $i = 1, 2$, we obtain $\dim(V_1 + V_2) = \dim V_1 + \dim V_2 - \dim(V_1 \cap V_2)$. $\blacksquare$
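The Grassmann formula can be verified numerically; the sketch below (helper names are ours) computes dimensions as matrix ranks over $\mathbb{Q}$ for two concrete subspaces of $\mathbb{R}^3$, with the intersection dimension known by inspection.

```python
# Check dim V1 + dim V2 = dim(V1 + V2) + dim(V1 ∩ V2) for
# V1 = span{(1,0,0),(0,1,0)} and V2 = span{(0,1,0),(0,0,1)} in R^3.
from fractions import Fraction

def rank(rows):
    """Rank of a matrix (list of rows) via Gaussian elimination over Q."""
    m = [[Fraction(x) for x in r] for r in rows]
    rk = 0
    for c in range(len(m[0]) if m else 0):
        piv = next((i for i in range(rk, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[rk], m[piv] = m[piv], m[rk]
        for i in range(len(m)):
            if i != rk and m[i][c] != 0:
                f = m[i][c] / m[rk][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[rk])]
        rk += 1
    return rk

B1 = [(1, 0, 0), (0, 1, 0)]
B2 = [(0, 1, 0), (0, 0, 1)]
dim_V1, dim_V2 = rank(B1), rank(B2)   # 2, 2
dim_sum = rank(B1 + B2)               # dim(V1 + V2): rank of the stacked bases
dim_int = 1                           # V1 ∩ V2 = span{(0,1,0)}, by inspection
print(dim_V1 + dim_V2 == dim_sum + dim_int)  # True
```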
The set $\mathcal{SL}(V)$ of all vector subspaces of $(V;K)$ is a subset of $\mathcal{P}(V)$ and is ordered by inclusion; with respect to "$\subseteq$" the following properties hold:
$W_1 \subseteq W_2$ and $W_2 \subseteq W_1$ $\Rightarrow$ $W_1 = W_2$ (antisymmetry);
$W_1 \subseteq W_2$ and $W_2 \subseteq W_3$ $\Rightarrow$ $W_1 \subseteq W_3$ (transitivity);
$W_1 \subseteq W_2 \iff W_1 \cap W_2 = W_1 \iff W_1 + W_2 = W_2$;
$\forall W_0 \in \mathcal{SL}(V)$, the function $W \mapsto W_0 \cap W$ is increasing (non-strictly) (isotone) [when $W_1$ and $W_2$ are comparable and $W_1 \subseteq W_2$, then $W_1 \cap W_0 \subseteq W_2 \cap W_0$];
$\forall W_0 \in \mathcal{SL}(V)$, the function $W \mapsto W_0 + W$ is increasing (isotone) [if $W_1$ and $W_2$ are comparable and $W_1 \subseteq W_2$, then, for each $W_0$, $W_1 + W_0$ and $W_2 + W_0$ are also comparable and the inclusion $W_1 + W_0 \subseteq W_2 + W_0$ holds];
in general only the inclusions $(W_1 \cap W_2) + (W_1 \cap W_3) \subseteq W_1 \cap (W_2 + W_3)$ and $W_1 + (W_2 \cap W_3) \subseteq (W_1 + W_2) \cap (W_1 + W_3)$ hold;
(the modular law) if $W_1 \subseteq W_3$, then $W_1 + (W_2 \cap W_3) = W_3 \cap (W_1 + W_2)$.
Proof:
"$\subseteq$" From $W_1 \subseteq W_1 + W_2$ and $W_1 \subseteq W_3$, together with $W_2 \cap W_3 \subseteq W_2 \subseteq W_1 + W_2$ and $W_2 \cap W_3 \subseteq W_3$, we get $W_1 + (W_2 \cap W_3) \subseteq W_3 \cap (W_1 + W_2)$.
"$\supseteq$" $x \in W_3 \cap (W_1 + W_2)$ $\Rightarrow$ $x \in W_3$ and $\exists x_1 \in W_1$, $\exists x_2 \in W_2$ with $x = x_1 + x_2$; then $x_2 \in W_2 \cap W_3$ $(*)$, so $x = x_1 + x_2 \in W_1 + (W_2 \cap W_3)$.
$(*)$ because $x_2 \in W_2$ and $x_2 = x - x_1 \in W_3$, since $x \in W_3$ and $x_1 \in W_1 \subseteq W_3$. $\blacksquare$
Def: A subset of $\mathcal{SL}(V)$ is called a chain when it is totally ordered [it doesn't contain incomparable elements]; when the chain has a finite number $k$ of elements, the number $k - 1$ is called the length of the chain.
Def: For an element $W \in \mathcal{SL}(V)$, the supremum (the superior margin) of the lengths of all the chains between $\{0_V\}$ and $W$ is called the height of $W$. The elements with height 1 are called atoms or points.
Remark: Any subset of a chain is also a chain.
Remark: Any chain from $\mathcal{SL}(V)$ is of finite length, of at most the dimension of the embedding vector space.
Remark: Any chain from $\mathcal{SL}(V)$ with length $k$ is order-isomorphic with the totally ordered set $\{0, 1, \dots, k\}$: there is a bijection $\varphi(\cdot)$ from $\{0, 1, \dots, k\}$ to the chain such that $i \leq j \iff \varphi(i) \subseteq \varphi(j)$.
Def: If $W_1 \subseteq W_2$, the set $[W_1, W_2] = \{W \in \mathcal{SL}(V);\ W_1 \subseteq W \subseteq W_2\}$ is called the interval between $W_1$ and $W_2$. When the interval $[W_1, W_2] = \{W_1, W_2\}$ [it doesn't contain intermediary elements] the interval is called prime and we say that $W_2$ covers $W_1$.
Def: A valuation is a function $v(\cdot): \mathcal{SL}(V) \to \mathbb{R}$ such that $v(W_1) + v(W_2) = v(W_1 + W_2) + v(W_1 \cap W_2)$ (the dimension is the prototypical example, by the Grassmann formula); a valuation is called increasing when
* $W_1 \subseteq W_2 \Rightarrow v(W_1) \leq v(W_2)$.
Remark: Given a strictly increasing valuation $v(\cdot)$, the value of an interval $[W_1, W_2]$ is defined as $v([W_1, W_2]) = v(W_2) - v(W_1)$.
Remark: The functions $\varphi_{W_2}(W) = W + W_2$ (from $[W_1 \cap W_2, W_1]$ to $[W_2, W_1 + W_2]$) and $\psi_{W_1}(W) = W \cap W_1$ (from $[W_2, W_1 + W_2]$ to $[W_1 \cap W_2, W_1]$)
are inverse to each other. Moreover, the intervals $[W_1 \cap W_2, W_1]$ and $[W_2, W_1 + W_2]$ are transported one into the other (they are transposed isomorphic intervals) by the function $\varphi_{W_2}(\cdot)$, respectively by $\psi_{W_1}(\cdot)$.
Def: Two intervals are called transposed when they are of the form $[W_1 \cap W_2, W_1]$ and $[W_2, W_1 + W_2]$ (for some subspaces $W_1$, $W_2$, not necessarily comparable: $W_1 \subseteq W_2$ or $W_2 \subseteq W_1$ is not required); two intervals are called projective when there is a finite sequence of intervals, starting with the first and ending with the second, in which any two consecutive intervals are transposed.
Def: If $W_0$ is a complement of $W$ in $V$, then, for $W \in [W_1, W_2]$, the complement of $W$ with respect to the interval $[W_1, W_2]$ is the subspace $(W_0 \cap W_2) + W_1$.
Remark: Given a strictly increasing valuation $v(\cdot)$, no interval can be projective with respect to a proper subinterval.
Remark: Given a strictly increasing valuation $v(\cdot)$, all the projective intervals have the same value.
Remark: Any valuation associates a unique value to each class of prime projective intervals.
Remark: If $p(W)$ is the number of prime intervals of a chain between $\{0_V\}$ and $W$, then $p(\cdot)$ is a valuation.
Remark: Any linear combination of valuations is a valuation.
Remark: The structure of a valuation: $v(W) = v(\{0_V\}) + \sum_{\pi} \lambda_\pi\, p_\pi(W)$, where the sum is over the classes $\pi$ of prime projective intervals, $\lambda_\pi$ is a value attached to the class and $p_\pi(W)$ is the number of prime intervals with that class in any maximal chain between $\{0_V\}$ and $W$.
CHAPTER 2
Linear Transformations
2.0.1. Definition. (Linear Transformation) If $(V_1;K)$ and $(V_2;K)$ are vector spaces (over the same scalar field), a function $U(\cdot): V_1 \to V_2$ is called a vector space morphism (or linear transformation) if:
(1) $U(x + y) = U(x) + U(y)$, $\forall x, y \in V_1$ ($U(\cdot)$ is a group morphism (additivity));
(2) $U(\alpha x) = \alpha U(x)$, $\forall x \in V_1$, $\forall \alpha \in K$ ($U(\cdot)$ is homogeneous).
[additivity and homogeneity together are also called linearity]
The set of all vector space morphisms between $(V_1;K)$ and $(V_2;K)$ is denoted by $L_K(V_1, V_2)$.
2.0.2. Example. Consider the vector spaces $(\mathbb{R}^3;\mathbb{R})$ and $(\mathbb{R}_2[X];\mathbb{R})$ ($\mathbb{R}_n[X]$ is the set of all polynomials of degree at most $n$, with the unknown denoted by $X$ and with real coefficients).
The function $U(\cdot): \mathbb{R}_2[X] \to \mathbb{R}^3$ defined by $U(P) = x_P \in \mathbb{R}^3$ (to a polynomial $P(X) = aX^2 + bX + c \in \mathbb{R}_2[X]$ we attach the vector $x_P = (a, b, c)$) is a vector space morphism.
The operations in $(\mathbb{R}_2[X];\mathbb{R})$ are (the definitions should be known from high school):
$(P + Q)(X) \overset{\mathrm{Def}}{=} P(X) + Q(X)$ and $(\alpha P)(X) \overset{\mathrm{Def}}{=} \alpha P(X)$.
For $P(X) = aX^2 + bX + c$ we get $(\alpha P)(X) = (\alpha a)X^2 + (\alpha b)X + (\alpha c)$, so $U(\alpha P) = (\alpha a, \alpha b, \alpha c) = \alpha(a, b, c) = \alpha U(P)$; additivity is verified similarly.
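A sketch of Example 2.0.2 in code (the representation choice is ours): polynomials are stored as coefficient triples $(a, b, c)$, so the attached operations are componentwise and $U$ is linear.

```python
# Polynomials a*X^2 + b*X + c stored as triples (a, b, c); the operations
# of R_2[X] become componentwise, and U(P) = (a, b, c) is then linear.

def add(P, Q):
    return tuple(p + q for p, q in zip(P, Q))

def smul(alpha, P):
    return tuple(alpha * p for p in P)

def U(P):  # P(X) = a X^2 + b X + c  |->  (a, b, c)
    return P

P, Q, alpha = (1, 2, 3), (0, -1, 4), 5
print(U(add(P, Q)) == add(U(P), U(Q)))       # True: additivity
print(U(smul(alpha, P)) == smul(alpha, U(P)))  # True: homogeneity
```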
2.0.3. Definition. A linear transformation between two vector spaces which is bijective is called an isomorphism.
2.0.4. Definition. (Isomorphic spaces) Two vector spaces are called isomorphic if between them there is an isomorphism. We denote this situation by $(V_1;K) \cong (V_2;K)$.
2.0.5. Remark. Intuitively speaking, when two vector spaces are isomorphic, the vector space algebraic structure does not distinguish between the spaces. Still, the spaces may be different and they may be distinguished from other perspectives, such as other abstract structures and/or other non-mathematical reasons. An example of this type of situation is the study of the vector spaces $(\mathbb{R}^2;\mathbb{R})$ and $(\mathbb{C};\mathbb{R})$.
2.0.6. Proposition. Any finite-type vector space $(V;K)$ with $\dim_K V = n$ is isomorphic (in the sense of Definition 2.0.4) with the vector space $(K^n;K)$.
Proof. Consider a finite-type vector space $(V;K)$ and $B = (e_1, \dots, e_n)$ a basis in $(V;K)$.
Consider the function $\varphi(\cdot): V \to K^n$ defined by $\varphi(v) = [v]_B$.
The function $\varphi(\cdot)$ is linear:
for $[v_1]_B = (\alpha_1, \alpha_2, \dots, \alpha_n)^{t}$, i.e. $v_1 = \sum_{i=1}^{n} \alpha_i e_i$, and $[v_2]_B = (\alpha_1', \alpha_2', \dots, \alpha_n')^{t}$, i.e. $v_2 = \sum_{i=1}^{n} \alpha_i' e_i$, we have
$$v_1 + v_2 = \sum_{i=1}^{n} \alpha_i e_i + \sum_{i=1}^{n} \alpha_i' e_i = \sum_{i=1}^{n} (\alpha_i + \alpha_i') e_i,$$
so that $[v_1 + v_2]_B = (\alpha_1 + \alpha_1', \dots, \alpha_n + \alpha_n')^{t} = [v_1]_B + [v_2]_B$,
so $\varphi(v_1 + v_2) = \varphi(v_1) + \varphi(v_2)$ (from the unicity of the representation in a basis).
Similarly, for $\alpha \in K$ we have
$$\alpha v_1 = \sum_{i=1}^{n} (\alpha\alpha_i) e_i \Rightarrow [\alpha v_1]_B = (\alpha\alpha_1, \dots, \alpha\alpha_n)^{t} = \alpha[v_1]_B,$$
so $\varphi(\alpha v_1) = \alpha\varphi(v_1)$.
The function is bijective: the injectivity results from the linear independence property of the basis (the unicity of the coordinates), while the surjectivity results from the generating property of the basis. $\blacksquare$
2.0.7. Exercise. Show that the morphism from the previous example is bijective.
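As a small numerical companion to the coordinate map $\varphi(v) = [v]_B$ from the proof above, here is a sketch (the function name and the basis are our own choices) computing coordinates in a basis of $\mathbb{R}^2$ by Cramer's rule.

```python
# Coordinates of v in the ordered basis B = (b1, b2) of R^2: solve
# x1*b1 + x2*b2 = v for (x1, x2) by Cramer's rule.

def coords(b1, b2, v):
    """Return [v]_B for the ordered basis B = (b1, b2) of R^2."""
    det = b1[0] * b2[1] - b1[1] * b2[0]
    assert det != 0, "b1, b2 must be linearly independent"
    x1 = (v[0] * b2[1] - v[1] * b2[0]) / det
    x2 = (b1[0] * v[1] - b1[1] * v[0]) / det
    return (x1, x2)

b1, b2 = (1, 1), (1, -1)
v = (3, 1)
print(coords(b1, b2, v))  # (2.0, 1.0): indeed v = 2*b1 + 1*b2
```

Injectivity and surjectivity of $\varphi(\cdot)$ correspond here to the solved system having exactly one solution for every right-hand side $v$.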
2.0.8. Definition. (Linear functional) Any linear transformation between the vector spaces $(V;K)$ and $(K;K)$ is called a linear functional over $(V;K)$ (any element of the set $L_K(V, K)$). The set $L_K(V, K)$ is also denoted by $V'$ ($= L_K(V, K)$) and is called the algebraic dual of the vector space $(V;K)$.
2.1.2. Example. The integration operator, defined over the vector space $\mathcal{F}$ of all integrable functions: $I(\cdot): \mathcal{F} \to \mathcal{F}$, $I(f(\cdot))(t) = \int_a^t f(\tau)\, d\tau$.
2.1.3. Example. Consider the vector spaces $(\mathbb{R}^3;\mathbb{R})$ and $(\mathbb{R}_2[X];\mathbb{R})$ ($\mathbb{R}_n[X]$ is the set of all polynomials of degree at most $n$, in the unknown $X$ and with real coefficients).
The function $U(\cdot): \mathbb{R}_2[X] \to \mathbb{R}^3$ defined by $U(P(\cdot)) = x_{P(\cdot)} \in \mathbb{R}^3$ (to a polynomial $P(X) = aX^2 + bX + c \in \mathbb{R}_2[X]$ we attach the vector $x_{P(\cdot)} = (a, b, c)$) is a morphism of vector spaces.
(2) [[19], Prop. 6.1, page 91] $U(\cdot)$ is linear $\iff$ $G_{U(\cdot)} = \{(x, U(x));\ x \in V_1\}$ is a vector subspace in $(V_1 \times V_2; K)$.
(3) $U(\cdot)$ is linear $\Rightarrow$
(a) $U(0_{V_1}) = 0_{V_2}$;
(b) for each vector subspace $V_1' \subseteq V_1$, $U(V_1')$ is a vector subspace in $V_2$;
(c) for each vector subspace $V_2' \subseteq V_2$, $U^{-1}(V_2')$ is a vector subspace in $V_1$.
Note: for points 3.b and 3.c we use the direct image of a set by a function and the preimage of a set by a function; see Definition 7.2.18 and Remark 7.2.19.
Proof. First note that for $\alpha, \beta \in K$ and $x_1, x_2 \in V_1$ we have $U(\alpha x_1 + \beta x_2) \overset{\text{additivity}}{=} U(\alpha x_1) + U(\beta x_2) \overset{\text{homogeneity}}{=} \alpha U(x_1) + \beta U(x_2)$.
(2) $V_1 \times V_2$ may be viewed as a vector space (the product vector space) with the operations:
$(v_1, v_2) + (w_1, w_2) = (v_1 + w_1, v_2 + w_2)$ (the addition on the place $i$ is the addition of the vector space $V_i$);
$\alpha(v_1, v_2) = (\alpha v_1, \alpha v_2)$ (the multiplication with a scalar on the place $i$ is the multiplication with a scalar of the vector space $V_i$).
Because of the vector space properties of the structures $(V_1;K)$ and $(V_2;K)$, the structure $(V_1 \times V_2; K)$ is also a vector space.
"$\Rightarrow$" If $U(\cdot)$ is linear, then $(v_1, U(v_1)) + (v_2, U(v_2)) = (v_1 + v_2, U(v_1) + U(v_2)) \overset{\text{additivity}}{=} (v_1 + v_2, U(v_1 + v_2)) \in G_{U(\cdot)}$. Also, for $\alpha \in K$, $\alpha(v_1, U(v_1)) = (\alpha v_1, \alpha U(v_1)) \overset{\text{homogeneity}}{=} (\alpha v_1, U(\alpha v_1)) \in G_{U(\cdot)}$.
"$\Leftarrow$" Assume that $G_{U(\cdot)}$ is a vector subspace in $(V_1 \times V_2; K)$.
Then all the pairs $(v_1, U(v_1))$, $(v_2, U(v_2))$, $(v_1 + v_2, U(v_1 + v_2))$ belong to $G_{U(\cdot)}$, which is a subspace, so that $(v_1, U(v_1)) + (v_2, U(v_2)) = (v_1 + v_2, U(v_1) + U(v_2)) \in G_{U(\cdot)}$.
Since the set $G_{U(\cdot)}$ is the graph of a function,
$$(v_1 + v_2, U(v_1) + U(v_2)) \in G_{U(\cdot)} \text{ and } (v_1 + v_2, U(v_1 + v_2)) \in G_{U(\cdot)} \Rightarrow U(v_1) + U(v_2) = U(v_1 + v_2)$$
[because otherwise there would be an element $v_1 + v_2$ for which the function would have two images, which contradicts the definition of a function, see 7.2.16, page 207].
In a similar way, if $(v, U(v)) \in G_{U(\cdot)}$, then also $\alpha(v, U(v)) = (\alpha v, \alpha U(v)) \in G_{U(\cdot)}$, so $U(\alpha v) = \alpha U(v)$.
(3)(a) $U(0_{V_1}) = U(x - x) = U(x) - U(x) = 0_{V_2}$.
(b) Consider $\alpha, \beta \in K$ and $y_1, y_2 \in U(V_1')$; then $y_i = U(x_i)$ with $x_i \in V_1'$, and $\alpha y_1 + \beta y_2 = U(\alpha x_1 + \beta x_2) \in U(V_1')$.
(c) Consider $\alpha, \beta \in K$ and $x_1, x_2 \in U^{-1}(V_2')$; then $U(\alpha x_1 + \beta x_2) = \alpha U(x_1) + \beta U(x_2) \in V_2'$, so $\alpha x_1 + \beta x_2 \in U^{-1}(V_2')$. $\blacksquare$
2.2.2. Definition. The codomain subspace $U(V_1)$ is called the image of the linear transformation and is denoted by $\operatorname{Im} U(\cdot)$; its dimension is also called the rank of the linear transformation and is denoted by $\dim \operatorname{Im} U(\cdot) = \operatorname{rank} U(\cdot)$.
2.2.3. Definition. The domain subspace $U^{-1}(\{0_{V_2}\})$ is called the kernel of the linear transformation and is denoted by $\ker U(\cdot)$; its dimension is also called the nullity of the linear transformation and is denoted by $\dim \ker U(\cdot) = \operatorname{null} U(\cdot)$.
2.2.4. Theorem (The rank-nullity theorem for linear transformations). Consider two vector spaces $(V_1;K)$ and $(V_2;K)$, with $V_1$ of finite type. For a linear transformation $U(\cdot): V_1 \to V_2$ we have:
$$\dim V_1 = \dim(\ker U(\cdot)) + \dim(\operatorname{Im} U(\cdot)).$$
Proof. Consider a basis $\{w_1, \dots, w_k\}$ of $\operatorname{Im} U(\cdot) \subseteq V_2$ and vectors $\{u_1, \dots, u_k\} \subseteq V_1$ such that $w_i = U(u_i)$, $i = \overline{1,k}$; consider also a basis $\{v_1, \dots, v_p\}$ of $\ker U(\cdot) \subseteq V_1$.
We show that $B = \{u_1, \dots, u_k, v_1, \dots, v_p\}$ is a basis of $V_1$ (so that $\dim V_1 = k + p = \operatorname{rank} U(\cdot) + \operatorname{null} U(\cdot)$).
$B$ generates $V_1$: for $x \in V_1$, $U(x) \in \operatorname{Im} U(\cdot)$, so $\exists \alpha_1, \dots, \alpha_k \in K$ with $U(x) = \sum_{i=1}^{k} \alpha_i w_i$. Denote $u = \sum_{i=1}^{k} \alpha_i u_i$.
Then $U(x - u) = U(x) - U(u) = U(x) - \sum_{i=1}^{k} \alpha_i U(u_i) = U(x) - U(x) = 0$, so that $x - u \in \ker U(\cdot) \Rightarrow \exists \beta_1, \dots, \beta_p \in K$ such that $x - u = \sum_{j=1}^{p} \beta_j v_j$.
We get that $x = (x - u) + u = \sum_{j=1}^{p} \beta_j v_j + \sum_{i=1}^{k} \alpha_i u_i$, so that $V_1 = \operatorname{span} B$.
$B$ is linearly independent (equivalently, the representation is unique): if
$$\sum_{i=1}^{k} \alpha_i u_i + \sum_{j=1}^{p} \beta_j v_j = \sum_{i=1}^{k} \alpha_i' u_i + \sum_{j=1}^{p} \beta_j' v_j,$$
then $\sum_{j=1}^{p} (\beta_j - \beta_j') v_j = \sum_{i=1}^{k} (\alpha_i' - \alpha_i) u_i$; applying $U(\cdot)$ (the left-hand side lies in $\ker U(\cdot)$) we get $0 = \sum_{i=1}^{k} (\alpha_i' - \alpha_i) w_i$ and, since $\{w_1, \dots, w_k\}$ is a basis, $\alpha_i = \alpha_i'$, $\forall i = \overline{1,k}$; then $\sum_{j=1}^{p} (\beta_j - \beta_j') v_j = 0$ and, since $\{v_1, \dots, v_p\}$ is a basis, $\beta_j = \beta_j'$, $\forall j = \overline{1,p}$. So $B$ is a basis of $V_1$. $\blacksquare$
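The rank-nullity theorem can be checked numerically; the sketch below (helper names are ours) uses the map $U(x,y,z) = (x+y,\ y+z)$, whose matrix in the standard bases is written down, and computes the rank by Gaussian elimination.

```python
# Rank-nullity check for U(x, y, z) = (x + y, y + z), a map R^3 -> R^2.
# Matrix of U in the standard bases:
A = [[1, 1, 0],
     [0, 1, 1]]

def rank(rows):
    """Rank via Gaussian elimination (exact on these small entries)."""
    m = [list(map(float, r)) for r in rows]
    rk = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(rk, len(m)) if abs(m[i][c]) > 1e-12), None)
        if piv is None:
            continue
        m[rk], m[piv] = m[piv], m[rk]
        for i in range(len(m)):
            if i != rk:
                f = m[i][c] / m[rk][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[rk])]
        rk += 1
    return rk

n = 3                    # dim of the domain
r = rank(A)              # dim Im U = 2
nullity = n - r          # dim ker U = 1; indeed ker U = span{(1, -1, 1)}
assert (1 + (-1), (-1) + 1) == (0, 0)   # U(1, -1, 1) = (0, 0)
print(n == nullity + r)  # True
```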
2.2.6. Proposition. If $U(\cdot): V_1 \to V_2$ is a bijective linear transformation, then its inverse $U^{-1}(\cdot): V_2 \to V_1$ is also linear.
Proof. For $x, y \in V_2$ denote $U^{-1}(x) = u$, $U^{-1}(y) = v$; then $U(u + v) = U(u) + U(v) = x + y$, so $U^{-1}(x + y) = u + v = U^{-1}(x) + U^{-1}(y)$.
For $\alpha \in K$, $x \in V_2$ with $U^{-1}(x) = v$: $U(\alpha v) = \alpha U(v) = \alpha x \Rightarrow \alpha v = U^{-1}(\alpha x) \Rightarrow \alpha U^{-1}(x) = U^{-1}(\alpha x)$. $\blacksquare$
2.2.7. Proposition. Consider three vector spaces over the same field and the linear transformations $U_1(\cdot): V_1 \to V_2$, $U_2(\cdot): V_2 \to V_3$.
Then the function $U(\cdot): V_1 \to V_3$ defined by $U(v) = U_2(U_1(v))$, $\forall v \in V_1$, is also linear (composition preserves linearity of functions).
Proof. Let $v_1, v_2 \in V_1$; we have $U(v_1 + v_2) = U_2(U_1(v_1 + v_2)) \overset{\text{additivity of } U_1(\cdot)}{=} U_2(U_1(v_1) + U_1(v_2)) \overset{\text{additivity of } U_2(\cdot)}{=} U(v_1) + U(v_2)$; homogeneity follows similarly. $\blacksquare$
For a linear operator $U(\cdot): V \to V$ the powers are defined by composition: $U^n(\cdot) = (\underbrace{U \circ \cdots \circ U}_{n \text{ times}})(\cdot)$.
For a polynomial $p(t) = \sum_{k=0}^{n} a_k t^k$ we may talk about the polynomial operator, which is a new operator (from Proposition 2.2.7) with the form given by: $p(U(\cdot)) = \sum_{k=0}^{n} a_k U^k(\cdot)$, where $U^0(\cdot) = I_V(\cdot)$ (the identity operator on $V$).
2.2.10. Remark. When $p(\cdot)$ and $q(\cdot)$ are two polynomials, the composition of the two attached operators is again a polynomial operator, $p(U(\cdot)) \circ q(U(\cdot)) = (p \cdot q)(U(\cdot))$; the verification is left as an exercise.
2.2.11. Remark. The relation $\cong$ (Definition 2.0.4) is an equivalence relation between vector spaces over the same field (it is reflexive, symmetric and transitive) (the relation is defined over a set of vector spaces and it establishes equivalence classes).
Proof. Reflexivity follows from the fact that the identity operator $I_{V_1}(\cdot): V_1 \to V_1$, $I_{V_1}(v) = v$, is linear and bijective, so $V_1 \cong V_1$.
The symmetry follows from Proposition 2.2.6, because when $V_1 \cong V_2$, then $\exists U(\cdot): V_1 \to V_2$ linear bijective $\Rightarrow U^{-1}(\cdot): V_2 \to V_1$ linear bijective $\Rightarrow V_2 \cong V_1$.
Transitivity follows from Proposition 2.2.7, because when $V_1 \cong V_2$ and $V_2 \cong V_3$, then there are some isomorphisms $U(\cdot): V_1 \to V_2$ and $V(\cdot): V_2 \to V_3$, and the new function $(V \circ U)(\cdot): V_1 \to V_3$ preserves both linearity and bijectivity, so $V_1 \cong V_3$. $\blacksquare$
2.2.12. Remark. The set $L_K(V_1, V_2)$, together with the usual algebraic operations with functions,
$$(U_1 + U_2)(x) \overset{\mathrm{Def}}{=} U_1(x) + U_2(x), \qquad (\alpha U_1)(x) \overset{\mathrm{Def}}{=} \alpha U_1(x),$$
has a vector space structure over the field $K$ (in particular the algebraic dual (Definition 2.0.8) is a vector space over the field $K$).
Proof. $(L_K(V_1, V_2), +)$ has a group structure (from the properties of addition over $V_2$), with neutral element the null operator $O(\cdot): V_1 \to V_2$, $O(v) \equiv 0$; the remaining axioms are verified pointwise, from the corresponding axioms in $V_2$. $\blacksquare$
A linear transformation between finite-type spaces is determined by its values on a basis: if $x = \sum_{i=1}^{n} \alpha_i e_i$, then $U(x) = \sum_{i=1}^{n} \alpha_i U(e_i)$, so $U(\cdot)$ is completely determined by the system of images $(U(e_1), \dots, U(e_n))$ (any vector $x$ is a linear combination of the basis).
Consider a linear transformation $U(\cdot): V_1 \to V_2$ between two finite-type vector spaces over the same field $K$.
Assume $\dim V_1 = n$, $\dim V_2 = m$ and fix the ordered bases $B_d = (e_1, \dots, e_n)$ in $V_1$ and $B_c = (f_1, \dots, f_m)$ in $V_2$.
Consider the representations in the codomain of the images of the vectors of the basis from the domain:
$$U(e_j) = \sum_{i=1}^{m} a_{ij} f_i, \qquad [U(e_j)]_{B_c} = (a_{1j}, \dots, a_{mj})^{t}, \qquad j = \overline{1,n}.$$
The matrix which has as columns $[U(e_j)]_{B_c}$, namely $[U(\cdot)]^{B_d}_{B_c} = [a_{ij}]_{i=\overline{1,m},\ j=\overline{1,n}} = \big[[U(e_1)]_{B_c} \mid \cdots \mid [U(e_n)]_{B_c}\big]$, is called the representation matrix of $U(\cdot)$ in the pair of ordered bases $(B_d, B_c)$.
Conversely, for each matrix $A \in \mathcal{M}_{m,n}(\mathbb{R})$ and for each possible choice of ordered bases in both the domain and the codomain there is a unique associated linear transformation defined by the above formula $[U(x)]_{B_c} = A[x]_{B_d}$. In other words, the function $U(\cdot): \mathbb{R}^n \to \mathbb{R}^m$ defined as $U(x) = Ax$ is a linear transformation.
When $x \in V_1$ with coordinates in $B_d$ given by $[x]_{B_d} = (x_1, \dots, x_n)^{t}$, then
$$U(x) = U\Big(\sum_{j=1}^{n} x_j e_j\Big) = \sum_{j=1}^{n} x_j U(e_j) = \sum_{j=1}^{n} x_j \sum_{i=1}^{m} a_{ij} f_i = \sum_{i=1}^{m} \Big(\sum_{j=1}^{n} a_{ij} x_j\Big) f_i,$$
so that
$$[U(x)]_{B_c} = \begin{pmatrix} \sum_{j=1}^{n} a_{1j} x_j \\ \vdots \\ \sum_{j=1}^{n} a_{mj} x_j \end{pmatrix} = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}, \qquad [U(x)]_{B_c} = [U(\cdot)]^{B_d}_{B_c}[x]_{B_d}.$$
The representation matrix $[U(\cdot)]^{B_d}_{B_c}$ is unique because for two matrices $A$ and $B$ the following happens: if $\forall x \in \mathbb{R}^n$, $Ax = Bx$, then $A = B$.
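The construction above (the columns of the representation matrix are the coordinate vectors of the images of the domain basis) can be sketched as follows; the map $U$ and all helper names are our own illustration, in the standard bases.

```python
# Building [U]^{Bd}_{Bc} column by column in the standard bases:
# the j-th column is U(e_j). Sketch for U(x, y) = (x + y, x - y, 2*y).

def U(v):
    x, y = v
    return (x + y, x - y, 2 * y)

e = [(1, 0), (0, 1)]
cols = [U(ej) for ej in e]                                # images of the basis
A = [[cols[j][i] for j in range(2)] for i in range(3)]    # 3x2 matrix [U(e1) | U(e2)]

def matvec(A, v):
    return tuple(sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A)))

v = (2, 5)
print(matvec(A, v) == U(v))  # True: [U(x)] = A [x] for every x
```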
2.3.2. Proposition. The linear transformation $U(\cdot)$ is bijective if and only if its attached matrix (for a certain choice of bases) is invertible.
Proof. "$\Rightarrow$" When $U(\cdot)$ is bijective, from Remark 2.2.5 the two spaces have equal dimensions (so that the matrix is square) and for each $y \in V_2$ the system $Ax = y$ has a unique solution (the unicity comes from injectivity while the existence comes from surjectivity).
Suppose by contradiction that the matrix is not invertible; then $\det A = 0$, which means that the columns of the matrix are linearly dependent. Consider a nonzero linear combination for the null vector; in matrix form, this means a nonzero solution of the system $Ax = 0$, which is a contradiction with the unicity of the null solution. In conclusion the matrix $A^{-1}$ exists.
"$\Leftarrow$" When the matrix $A$ is invertible, then, keeping the same basis for each space, the function $V(\cdot): V_2 \to V_1$ defined by $V(y) = A^{-1}y$ is exactly the inverse function of $U(\cdot)$, because $U(V(y)) = U(A^{-1}y) = A(A^{-1}y) = y = 1_{V_2}(y)$ and $V(U(x)) = V(Ax) = A^{-1}(Ax) = x = 1_{V_1}(x)$. $\blacksquare$
2.3.3. Remark.
(1) The rank of the linear transformation (Def. 2.2.2, page 73) equals the rank of
the matrix representing the linear transformation (in a certain choice for bases in both the domain
and the codomain).
(2) A linear transformation is injective if and only if its rank equals the dimension of the domain.
(3) A linear transformation is surjective if and only if its rank equals the dimension of the codomain.
2.3.4. Remark. When the basis in $V_1$ changes from $B_d$ to $B_{d'}$ and the basis in $V_2$ changes from $B_c$ to $B_{c'}$, the representation of the linear transformation changes in the following way:
$$[U(x)]_{B_c} = [U(\cdot)]^{B_d}_{B_c}[x]_{B_d}, \qquad [U(x)]_{B_{c'}} = [U(\cdot)]^{B_{d'}}_{B_{c'}}[x]_{B_{d'}},$$
and so, denoting by $[M(B_c)]_{B_{c'}}$ and $[M(B_d)]_{B_{d'}}$ the change-of-basis matrices,
$$[U(x)]_{B_c} = [M(B_c)]^{-1}_{B_{c'}}[U(x)]_{B_{c'}} = [U(\cdot)]^{B_d}_{B_c}[x]_{B_d} \Rightarrow$$
$$[U(x)]_{B_{c'}} = [M(B_c)]_{B_{c'}}[U(\cdot)]^{B_d}_{B_c}[x]_{B_d} = [M(B_c)]_{B_{c'}}[U(\cdot)]^{B_d}_{B_c}[M(B_d)]^{-1}_{B_{d'}}[x]_{B_{d'}},$$
so that $[U(\cdot)]^{B_{d'}}_{B_{c'}} = [M(B_c)]_{B_{c'}}[U(\cdot)]^{B_d}_{B_c}[M(B_d)]^{-1}_{B_{d'}}$.
Given a basis $B_d = \{v_1, \dots, v_n\}$ in $(V;K)$ and a basis $B_c = \{w\}$ in $(K;K)$, any linear functional $f(\cdot): V \to K$ (which is a particular type of linear transformation) admits a unique representation:
$$f(v) = f\Big(\sum_{i=1}^{n} \alpha_i v_i\Big) = \sum_{i=1}^{n} \alpha_i f(v_i),$$
so that the value of the linear functional at $v$ is uniquely determined by the values of the linear functional at the basis vectors and by the coordinates of the vector. In matrix form:
$$[f(v)]_{B_c} = \begin{pmatrix} f(v_1) & f(v_2) & \cdots & f(v_n) \end{pmatrix} \begin{pmatrix} \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_n \end{pmatrix} = \begin{pmatrix} f(v_1) & f(v_2) & \cdots & f(v_n) \end{pmatrix} [v]_{B_d}.$$
In particular, the coordinate functionals $(\pi_i(\cdot))_{i=\overline{1,n}}$ of a basis $E$, defined by $\pi_i(v) = \alpha_i$ (the $i$-th coordinate of $v$ in the basis $E$), form a basis in the dual space $V'$ [called the dual basis of $E$ in $V'$]. It follows that for finite-type vector spaces the dual space is isomorphic with the space: $V \cong V'$.
Consider a vector space $(V;K)$ and a subspace $V_0$. Define on $V$ the relation
$$u \equiv v \ (\mathrm{mod}\ V_0) \iff u - v \in V_0.$$
The relation is an equivalence relation:
reflexivity: $v - v = 0 \in V_0 \Rightarrow v \equiv v \ (\mathrm{mod}\ V_0)$;
symmetry: $u - v \in V_0 \Rightarrow v - u = -(u - v) \in V_0 \Rightarrow v \equiv u \ (\mathrm{mod}\ V_0)$;
transitivity: $u - v \in V_0$ and $v - w \in V_0 \Rightarrow u - w = (u - v) + (v - w) \in V_0 \Rightarrow u \equiv w \ (\mathrm{mod}\ V_0)$.
The equivalence relation "$\equiv \ (\mathrm{mod}\ V_0)$" generates on $V$ equivalence classes with respect to $V_0$ ($\mathrm{mod}\ V_0$); they will be denoted by $\hat{x} \ (\mathrm{mod}\ V_0)$; $\hat{x}$ is the set of all elements of $V$ which are equivalent with $x \ \mathrm{mod}\ V_0$:
$$\hat{x} \ (\mathrm{mod}\ V_0) = \{v \in V;\ x \equiv v \ (\mathrm{mod}\ V_0)\} = x + V_0 = \{x + v_0;\ v_0 \in V_0\}.$$
The dimension of an equivalence class is considered by definition to be equal with the dimension of $V_0$.
Two equivalence classes may only be identical or disjoint:
Proof. $\emptyset \neq \hat{x} \cap \hat{y} \iff \exists z_0 \in \hat{x} \cap \hat{y} \iff \exists v_0, u_0 \in V_0,\ z_0 = x + v_0 = y + u_0$.
Let $v \in \hat{x} \Rightarrow \exists v_1 \in V_0,\ v = x + v_1 \Rightarrow v = y + (u_0 - v_0 + v_1) \in \hat{y} \Rightarrow \hat{x} \subseteq \hat{y}$.
Similarly, we also get $\hat{y} \subseteq \hat{x}$, so that two equivalence classes which are not disjoint are identical. $\blacksquare$
Proof. $x \in V \Rightarrow x \in \hat{x}$ [each element belongs to at least one equivalence class mod $V_0$ and, as two distinct classes are disjoint, to exactly one]. $\blacksquare$
2.4.2. Definition. The set of all equivalence classes is called the factor set mod $V_0$; this set is denoted by
$$V/V_0 = \{\hat{x} \ (\mathrm{mod}\ V_0);\ x \in V\}$$
(it is a set of equivalence classes, so it is a set of sets).
2.4.3. Remark. For each fixed $x \in V$, the function $\psi(\cdot): V_0 \to (\hat{x} \ (\mathrm{mod}\ V_0))$ defined by $\psi(v) = x + v$ is a bijection.
Proof. Injectivity: let $v_1, v_2 \in V_0$ such that $\psi(v_1) = \psi(v_2) \Rightarrow x + v_1 = x + v_2 \Rightarrow v_1 = v_2$.
Surjectivity: $y \in \hat{x} \ (\mathrm{mod}\ V_0) \Rightarrow \exists v_y \in V_0,\ y = x + v_y \Rightarrow \psi(v_y) = x + v_y = y$. $\blacksquare$
2.4.4. Proposition. With the elements of the set $V/V_0$ we may define vector space operations:
addition mod $V_0$: $(\hat{x} \ (\mathrm{mod}\ V_0)) + (\hat{y} \ (\mathrm{mod}\ V_0)) \overset{\mathrm{Def}}{=} \widehat{x+y} \ (\mathrm{mod}\ V_0)$;
scalar multiplication mod $V_0$: $\alpha(\hat{x} \ (\mathrm{mod}\ V_0)) \overset{\mathrm{Def}}{=} \widehat{\alpha x} \ (\mathrm{mod}\ V_0)$.
Proof. Addition $\hat{x} + \hat{y} = \widehat{x+y}$ is well defined (it doesn't depend on representatives) because, when $\hat{x} = \hat{x}_1$ and $\hat{y} = \hat{y}_1$, then $x - x_1$ and $y - y_1 \in V_0 \Rightarrow (x+y) - (x_1+y_1) \in V_0$, so that $\widehat{x+y} = \widehat{x_1+y_1}$.
$\alpha\hat{x} = \widehat{\alpha x}$ is a well defined operation, because when $\hat{x} = \hat{x}_1$ then $x - x_1 \in V_0$, and so $\alpha x - \alpha x_1 = \alpha(x - x_1) \in V_0 \Rightarrow \widehat{\alpha x} = \widehat{\alpha x_1}$; moreover, $1 \cdot \hat{x} = \widehat{1 \cdot x} = \hat{x}$.
So $(V/V_0; K)$ has a vector space structure (together with the operations between classes defined above). $\blacksquare$
2.4.5. Remark. The function $\pi(\cdot): V \to (V/V_0)$ given by $\pi(x) = \hat{x}$ is a (surjective) vector space morphism, and $\ker \pi(\cdot) = V_0$.
Proof. $\pi(x + y) = \widehat{x+y} = \hat{x} + \hat{y} = \pi(x) + \pi(y)$; $\pi(\alpha x) = \widehat{\alpha x} = \alpha\hat{x} = \alpha\pi(x)$; surjectivity is clear from the definition of $V/V_0$.
For the kernel: $x_0 \in V_0 \Rightarrow \hat{x}_0 = \hat{0}$, and conversely $\pi(x) = \hat{0} \Rightarrow \hat{x} = \hat{0} \Rightarrow x \in V_0$; so
$$\ker \pi(\cdot) = \{x \in V;\ \pi(x) = \hat{0} \in V/V_0\} = V_0. \ \blacksquare$$
2.4.6. Definition. The vector space $V/V_0$ is called the factor space of $V$ with respect to $V_0$.
2.4.7. Theorem. (The dimension of the factor space) Consider a finite-type vector space $(V;K)$ and a subspace $V_0$. Then
$$\dim(V/V_0) = \dim V - \dim V_0.$$
Proof. Choose a basis $x_1, \dots, x_k$ of $V_0$ and complete it up to a basis $x_1, \dots, x_k, y_1, \dots, y_r$ of $V$.
Consider the set $\hat{y}_1, \dots, \hat{y}_r$ in $V/V_0$.
The set generates $V/V_0$: because $x_1, \dots, x_k, y_1, \dots, y_r$ is a basis in $V$, we have $\forall v \in V$ $\exists \alpha_i, \beta_j \in K$ such that
$$v = \underbrace{\sum_{i=1}^{k} \alpha_i x_i}_{\in V_0} + \sum_{j=1}^{r} \beta_j y_j, \quad \text{which means } \hat{v} = \sum_{j=1}^{r} \beta_j \hat{y}_j,$$
and so $\hat{y}_1, \dots, \hat{y}_r$ generates $V/V_0$.
The set is linearly independent: if $\sum_{j=1}^{r} \beta_j \hat{y}_j = \hat{0}$, then $\sum_{j=1}^{r} \beta_j y_j \in V_0$; if $0 \neq v_0 = \sum_{j=1}^{r} \beta_j y_j \in V_0$, then $v_0$ would have two distinct representations in the basis of $V$ (one using only $x_1, \dots, x_k$, the other using only $y_1, \dots, y_r$), a contradiction with the fact that $x_1, \dots, x_k, y_1, \dots, y_r$ is a basis; so all the scalars $\beta_j$ are zero.
This means that $\hat{y}_1, \dots, \hat{y}_r$ is a basis for the factor space $V/V_0$ and $\dim V/V_0 = r$, which means $\dim(V/V_0) = \dim V - \dim V_0$. $\blacksquare$
2.5.1. Theorem. (The First Isomorphism Theorem) For a linear transformation $U(\cdot): V_1 \to V_2$, the function $\tilde{U}(\cdot): V_1/\ker U(\cdot) \to \operatorname{Im} U(\cdot)$ defined by $\tilde{U}(\hat{x}) = U(x)$ is an isomorphism; in particular the spaces $V_1/\ker U(\cdot)$ and $\operatorname{Im} U(\cdot)$ are isomorphic.
Proof. $\tilde{U}(\cdot)$ is well defined (the definition doesn't depend on the representatives) because when $\hat{x} = \hat{y}$ then $x - y \in \ker U(\cdot)$, which means $U(x - y) = 0 \iff U(x) = U(y)$.
$\tilde{U}(\cdot)$ is a morphism:
additivity: $\tilde{U}(\hat{x}_1 + \hat{x}_2) = \tilde{U}(\widehat{x_1 + x_2}) = U(x_1 + x_2) = U(x_1) + U(x_2) = \tilde{U}(\hat{x}_1) + \tilde{U}(\hat{x}_2)$;
homogeneity: $\tilde{U}(\alpha\hat{x}) = \tilde{U}(\widehat{\alpha x}) = U(\alpha x) = \alpha U(x) = \alpha\tilde{U}(\hat{x})$.
$\tilde{U}(\cdot)$ is injective: $\tilde{U}(\hat{x}_1) = \tilde{U}(\hat{x}_2) \Rightarrow U(x_1) = U(x_2) \Rightarrow x_1 - x_2 \in \ker U(\cdot) \Rightarrow \hat{x}_1 = \hat{x}_2$; it is surjective onto $\operatorname{Im} U(\cdot)$ by definition. $\blacksquare$
2.5.4. Proposition. If $p(\cdot): V \to V$ is a projection (i.e. $p^2(\cdot) = p(\cdot)$), then $V = p(V) \oplus \ker p(\cdot)$.
Proof. For $v \in V$, $p(v - p(v)) = p(v) - p^2(v) = p(v) - p(v) = 0$, so $v - p(v) \in \ker p(\cdot)$ and $v = p(v) + (v - p(v)) \in p(V) + \ker p(\cdot)$.
The sum is direct because $v \in p(V) \cap \ker p(\cdot) \Rightarrow 0 = p(v)$ and $\exists u,\ v = p(u) \Rightarrow 0 = p(v) = p^2(u) = p(u) = v$, so $p(V) \cap \ker p(\cdot) = \{0\}$. $\blacksquare$
2.5.5. Proposition. If $p(\cdot): V \to V$ is a projection, then $(1_V - p)(\cdot): V \to V$ is also a projection. Moreover, $\ker(1_V - p)(\cdot) = \operatorname{Im} p(\cdot)$ and $\operatorname{Im}(1_V - p)(\cdot) = \ker p(\cdot)$.
Proof. $(1_V - p)^2(\cdot) = (1_V - 2p + p^2)(\cdot) = (1_V - 2p + p)(\cdot) = (1_V - p)(\cdot)$, so $(1_V - p)(\cdot)$ is a projection.
$x \in \ker(1_V - p)(\cdot) \Rightarrow x - p(x) = 0 \Rightarrow x = p(x) \in \operatorname{Im} p(\cdot)$; conversely, $x = p(y) \Rightarrow (1_V - p)(x) = p(y) - p(p(y)) = p(y) - p(y) = 0 \Rightarrow x \in \ker(1_V - p)(\cdot)$.
$x \in \operatorname{Im}(1_V - p)(\cdot) \Rightarrow \exists y,\ x = (1_V - p)(y) = y - p(y) \Rightarrow p(x) = p(y) - p^2(y) = 0 \Rightarrow x \in \ker p(\cdot)$;
conversely, $x \in \ker p(\cdot) \Rightarrow p(x) = 0 \Rightarrow x = x - p(x) = (1_V - p)(x) \Rightarrow x \in \operatorname{Im}(1_V - p)(\cdot)$. $\blacksquare$
2.5.6. Proposition. Consider a vector space $(V;K)$ and a subspace $V_0$. Any complement of $V_0$ in $V$ (i.e. a subspace $V_1$ such that $V_0 \oplus V_1 = V$) is isomorphic with the factor space $V/V_0$; in particular, any two complements of $V_0$ in $V$ are isomorphic.
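A minimal numeric sketch for the projection statements above, with the projection $p(x, y) = (x, 0)$ on $\mathbb{R}^2$ (the names are ours).

```python
# p(x, y) = (x, 0) is a projection on R^2; q = 1_V - p is then also a
# projection, and every v decomposes as v = p(v) + q(v) (direct sum).

def p(v):
    x, y = v
    return (x, 0)

def q(v):  # (1_V - p)(v)
    x, y = v
    return (0, y)

v = (3, 7)
print(p(p(v)) == p(v))   # True: p is idempotent
print(q(q(v)) == q(v))   # True: 1_V - p is idempotent
print((p(v)[0] + q(v)[0], p(v)[1] + q(v)[1]) == v)  # True: v = p(v) + q(v)
```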
2.5.2. Other Isomorphism Theorems.
2.5.7. Theorem. (The Second Isomorphism Theorem) Consider a vector space $(V;K)$ and $V_1$ and $V_2$ subspaces in $V$. Then the vector spaces $V_1/(V_1 \cap V_2)$ and $(V_1 + V_2)/V_2$ are isomorphic.
Proof. Consider the function $\varphi(\cdot): V_1 \to (V_1 + V_2)/V_2$ defined by $\varphi(x) = \hat{x} = x + V_2$. The function is linear:
$\varphi(x + y) = \widehat{x+y} = \hat{x} + \hat{y} = \varphi(x) + \varphi(y)$, and $\varphi(\alpha x) = \widehat{\alpha x} = \alpha\hat{x} = \alpha\varphi(x)$.
For any class $y + V_2 \in (V_1 + V_2)/V_2$ with $y = x_1 + x_2$, $x_1 \in V_1$, $x_2 \in V_2$, we have $y + V_2 = x_1 + V_2 = \varphi(x_1)$, so the function is surjective. Moreover, $\ker \varphi(\cdot) = \{x \in V_1;\ x \in V_2\} = V_1 \cap V_2$.
From Theorem 2.5.1, page 81, it follows that the spaces $V_1/\ker \varphi(\cdot)$ and $\operatorname{Im} \varphi(\cdot)$ are isomorphic, which means that the vector spaces $V_1/(V_1 \cap V_2)$ and $(V_1 + V_2)/V_2$ are isomorphic. $\blacksquare$
2.5.8. Corollary. Consider a vector space $(V;K)$ and two subspaces $V_1$ and $V_2$. Then
$$\dim V_1 + \dim V_2 = \dim(V_1 + V_2) + \dim(V_1 \cap V_2).$$
Proof. From the previous theorem we know that $V_1/(V_1 \cap V_2)$ and $(V_1 + V_2)/V_2$ are isomorphic, so they have the same dimension. Then
$$\dim(V_1/(V_1 \cap V_2)) = \dim((V_1 + V_2)/V_2),$$
and from Theorem 2.4.7, page 81, we know that
$$\dim(V_1/(V_1 \cap V_2)) = \dim V_1 - \dim(V_1 \cap V_2) \quad \text{and} \quad \dim((V_1 + V_2)/V_2) = \dim(V_1 + V_2) - \dim V_2,$$
so we get $\dim V_1 - \dim(V_1 \cap V_2) = \dim(V_1 + V_2) - \dim V_2$. $\blacksquare$
2.5.9. Theorem. (The Third Isomorphism Theorem) Consider a vector space $(V;K)$ and $V_1$ and $V_2$ subspaces in $V$ such that $V \supseteq V_1 \supseteq V_2$.
Then the vector spaces $(V/V_2)/(V_1/V_2)$ and $V/V_1$ are isomorphic.
Proof. Denote by $\hat{x} = x + V_2$ the class of $x$ with respect to $V_2$ (that is, an element of $V/V_2$) and by $\tilde{x} = x + V_1$ the class of $x$ with respect to $V_1$ (that is, an element of $V/V_1$);
define the function $\varphi(\cdot): (V/V_2) \to (V/V_1)$ by $\varphi(\hat{x}) = \tilde{x}$.
The function is well defined: $\hat{x}_1 = \hat{x}_2 \Rightarrow x_1 - x_2 \in V_2 \Rightarrow x_1 - x_2 \in V_1 \Rightarrow \tilde{x}_1 = \tilde{x}_2$.
The function is linear: $\varphi(\hat{x}_1 + \hat{x}_2) = \varphi(\widehat{x_1 + x_2}) = \widetilde{x_1 + x_2} = \tilde{x}_1 + \tilde{x}_2 = \varphi(\hat{x}_1) + \varphi(\hat{x}_2)$, and $\varphi(\alpha\hat{x}) = \varphi(\widehat{\alpha x}) = \widetilde{\alpha x} = \alpha\tilde{x} = \alpha\varphi(\hat{x})$.
The function is surjective, and $\varphi(\hat{x}) = \tilde{0}$ means $x \in V_1$, which means $\hat{x} \in V_1/V_2 \subseteq V/V_2$; so $\ker \varphi(\cdot) = V_1/V_2$.
From Theorem 2.5.1 it follows that the spaces $(V/V_2)/\ker \varphi(\cdot)$ and $\operatorname{Im} \varphi(\cdot)$ are isomorphic, which means that $(V/V_2)/(V_1/V_2)$ and $V/V_1$ are isomorphic. $\blacksquare$
2.5.10. Remark. From the First Isomorphism Theorem and Theorem 2.4.7, $\dim(V_1) - \dim(\ker U(\cdot)) = \dim(\operatorname{Im} U(\cdot))$, i.e. $\dim(V_1) = \dim(\ker U(\cdot)) + \dim(\operatorname{Im} U(\cdot))$ (the rank-nullity theorem, obtained again).
2.5.11. Theorem. Consider the vector spaces $X$, $Y$, $Z$ over the same field $K$ and the linear transformations $S(\cdot): X \to Y$ (surjective) and $T(\cdot): X \to Z$ such that $\ker S(\cdot) \subseteq \ker T(\cdot)$. Then there is a linear transformation $R(\cdot): Y \to Z$ such that $T = R \circ S$.
Proof. Consider the linear transformations $\hat{S}(\cdot): X/\ker S(\cdot) \to Y$ and $\hat{T}(\cdot): X/\ker T(\cdot) \to Z$, defined by:
$\hat{S}(\hat{x}) = S(x)$ and $\hat{T}(\hat{\hat{x}}) = T(x)$ (where $\hat{x}$ denotes the class of $x$ modulo $\ker S(\cdot)$ and $\hat{\hat{x}}$ the class of $x$ modulo $\ker T(\cdot)$).
The functions $\hat{S}(\cdot)$ and $\hat{T}(\cdot)$ are well defined (Exercise!).
The functions $\hat{S}(\cdot)$ and $\hat{T}(\cdot)$ are linear transformations (Exercise!).
The functions $\hat{S}(\cdot)$ and $\hat{T}(\cdot)$ are injective (Exercise!).
Since $S(\cdot)$ is surjective, $\hat{S}(\cdot)$ is bijective, so there is $\hat{S}^{-1}(\cdot): Y \to X/\ker S(\cdot)$, an isomorphism.
Consider the function $P(\cdot): X/\ker S(\cdot) \to X/\ker T(\cdot)$, defined by $P(\hat{x}) = \hat{\hat{x}}$.
$P(\cdot)$ is well defined (Exercise!).
Define $R(\cdot) = (\hat{T} \circ P \circ \hat{S}^{-1})(\cdot)$. Then $R(\cdot): Y \to Z$ is linear and, for $x \in X$,
$$R(S(x)) = \hat{T}(P(\hat{S}^{-1}(S(x)))) = \hat{T}(P(\hat{x})) = \hat{T}(\hat{\hat{x}}) = T(x),$$
so $T = R \circ S$. $\blacksquare$
2.5.12. Remark. Conversely, if $X$, $Y$, $Z$ are vector spaces over the same field of scalars $K$ and $S(\cdot): X \to Y$ and $T(\cdot): X \to Z$ are linear transformations such that there is another linear transformation $R(\cdot): Y \to Z$ with $T = R \circ S$, then it happens that $\ker S(\cdot) \subseteq \ker T(\cdot)$ (Exercise!).
2.6.1. Definition. A subspace $V_0 \subseteq V$ is called invariant with respect to the linear operator $U(\cdot): V \to V$ when $U(V_0) \subseteq V_0$.
2.6.2. Remark. When a linear operator over an $n$-dimensional vector space has an $m$-dimensional invariant subspace, the matrix corresponding to a basis in which the first $m$ vectors form a basis for the invariant subspace has the form
$$\begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix} = \begin{pmatrix} A_{11} & A_{12} \\ 0 & A_{22} \end{pmatrix} \in \mathcal{M}_{n,n} \quad (\text{with } A_{21} = 0 \in \mathcal{M}_{n-m,m}).$$
[When the basis is such that the sub-basis is on the last places, then the matrix will have $A_{12} = 0$.]
Proof. Consider a vector space $(V;K)$ with dimension $\dim_K V = n$ and $V_0$ a subspace in $V$, with $\dim_K V_0 = m$.
Choose a basis of $V$ such that its first $m$ vectors are a basis for $V_0$:
start with a basis $B_0 = \{v_1, v_2, \dots, v_m\}$ of $V_0$ and choose $v_{m+1} \in V \setminus V_0$; the set $B_0 \cup \{v_{m+1}\}$ is linearly independent (if it were linearly dependent, then $v_{m+1}$ would be a linear combination of $B_0$, a contradiction with $v_{m+1} \in V \setminus V_0$).
Repeat the procedure a finite number of times ($n - m$ steps) to obtain a basis $B = \{v_1, \dots, v_m, v_{m+1}, \dots, v_n\} = B_0 \cup \{v_{m+1}, \dots, v_n\}$ for $V$ in which $B_0 = \{v_1, \dots, v_m\}$ is a basis for $V_0$.
In this basis, the vectors of the subspace $V_0$ are represented as columns with zero on the last $n - m$ places.
Consider a linear operator $U(\cdot): V \to V$ for which $U(V_0) \subseteq V_0$, and let a basis $B$ with the properties as above be fixed in $V$, considered both as the domain and as the codomain.
The matrix which represents $U(\cdot)$ is the matrix which has as columns $[U(v_j)]_B$:
$U(v_1), U(v_2), \dots, U(v_m), U(v_{m+1}), \dots, U(v_n)$ are the images by the linear operator of the basis vectors and they get represented in the codomain with respect to the same basis.
Since $V_0$ is invariant with respect to $U(\cdot)$, the images of the set $B_0$ remain in $V_0$: $\{U(v_1), U(v_2), \dots, U(v_m)\} \subseteq V_0$, so their representations have zeros on the last $n - m$ positions:
$$U(v_j) = \sum_{i=1}^{m} a_{ij} v_i, \qquad [U(v_j)]_B = (a_{1j}, a_{2j}, \dots, a_{mj}, 0, \dots, 0)^{t}, \quad \forall j = \overline{1,m}.$$
The matrix of $U(\cdot)$ has the form:
$$\begin{pmatrix}
a_{11} & \cdots & a_{1m} & a_{1(m+1)} & \cdots & a_{1n} \\
\vdots & & \vdots & \vdots & & \vdots \\
a_{m1} & \cdots & a_{mm} & a_{m(m+1)} & \cdots & a_{mn} \\
0 & \cdots & 0 & a_{(m+1)(m+1)} & \cdots & a_{(m+1)n} \\
\vdots & & \vdots & \vdots & & \vdots \\
0 & \cdots & 0 & a_{n(m+1)} & \cdots & a_{nn}
\end{pmatrix},$$
which is of the form $\begin{pmatrix} A_{11} & A_{12} \\ 0 & A_{22} \end{pmatrix}$ with $A_{11} \in \mathcal{M}_{m,m}$, $A_{12} \in \mathcal{M}_{m,n-m}$, $0 \in \mathcal{M}_{n-m,m}$, $A_{22} \in \mathcal{M}_{n-m,n-m}$. $\blacksquare$
2.6.3. Remark. When the space may be represented as a direct sum of $U(\cdot)$-invariant subspaces, then the matrix representation of $U(\cdot)$ in a suitable basis (in a basis obtained as a union of bases of the invariant subspaces) has the form
$$\begin{pmatrix} A_{11} & 0 & \cdots & 0 \\ 0 & A_{22} & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & A_{kk} \end{pmatrix}.$$
Proof. For two subspaces: let $V = V_1 \oplus V_2$ with $U(V_1) \subseteq V_1$ and $U(V_2) \subseteq V_2$, and consider a basis $f_1, \dots, f_{k_1}, g_1, \dots, g_{k_2}$ of $V$ obtained by joining a basis of $V_1$ and a basis of $V_2$;
then the last $k_2$ coordinates of $U(f_j)$ are zero (because these images are representable only on $V_1$) while the first $k_1$ coordinates of $U(g_j)$ are zero (because these images are representable only on $V_2$).
We get the pseudodiagonal matrix form $\begin{pmatrix} A_{11} & 0 \\ 0 & A_{22} \end{pmatrix}$.
For more invariant subspaces the pseudodiagonal form is obtained by induction. $\blacksquare$
2.6.4. Remark ([42]). When $V_0$ is $U(\cdot)$-invariant, it is possible for a complement of $V_0$ not to be $U(\cdot)$-invariant.
2.6.5. Definition. Consider a linear operator $U(\cdot): V \to V$. A vector $x \neq 0_V$ is called an eigenvector of $U(\cdot)$ when there is a scalar $\lambda \in K$ such that
$$U(x) = \lambda x.$$
The scalar $\lambda$ is called the eigenvalue attached to the eigenvector $x$.
2.6.6. Remark. An eigenvector corresponds to a unique eigenvalue: if $U(x) = \lambda_1 x$ and $U(x) = \lambda_2 x$ with $\lambda_1 \neq \lambda_2$, then $\lambda_1 x = \lambda_2 x \Rightarrow (\lambda_1 - \lambda_2)x = 0 \Rightarrow x = 0$, a contradiction, because an eigenvector is nonzero.
2.6.8. Remark. When an eigenvector $x$ corresponds to an eigenvalue $\lambda$, any vector $\alpha x$ with $\alpha \neq 0$ is also an eigenvector corresponding to $\lambda$: we get $U(\alpha x) = \alpha U(x) = \alpha\lambda x = \lambda(\alpha x)$, so that $\alpha x$ is an eigenvector attached to the same eigenvalue.
2.6.9. Theorem. Eigenvectors corresponding to distinct eigenvalues are linearly independent: if $v_1, \dots, v_m$ are eigenvectors of $U(\cdot)$ corresponding to the distinct eigenvalues $\lambda_1, \dots, \lambda_m$, then the set $\{v_1, \dots, v_m\}$ is linearly independent.
Proof. For $m = 2$: suppose $\alpha_1 v_1 + \alpha_2 v_2 = 0$; applying $U(\cdot)$ we also get $\alpha_1 U(v_1) + \alpha_2 U(v_2) = \alpha_1\lambda_1 v_1 + \alpha_2\lambda_2 v_2 = 0$.
For $\alpha_1 \neq 0$, we get:
$$\left.\begin{array}{r} \alpha_1 v_1 + \alpha_2 v_2 = 0 \\ \alpha_1\lambda_1 v_1 + \alpha_2\lambda_2 v_2 = 0 \\ \alpha_1 \neq 0,\ v_1 \neq 0 \end{array}\right\} \Rightarrow \text{(multiplying the first relation by } \lambda_2 \text{ and subtracting) } \alpha_1(\lambda_1 - \lambda_2)v_1 = 0,$$
which is a contradiction, because $\alpha_1(\lambda_1 - \lambda_2) \neq 0$ and $v_1 \neq 0$; so $\alpha_1 = 0$. Then $\alpha_2 v_2 = 0$ with $v_2 \neq 0$, so $\alpha_2 = 0$.
For arbitrary $m$, by induction: assume that any $m - 1$ eigenvectors corresponding to distinct eigenvalues are linearly independent, and let $\alpha_1 v_1 + \cdots + \alpha_m v_m = 0$.
Applying $U(\cdot)$: $\alpha_1\lambda_1 v_1 + \cdots + \alpha_m\lambda_m v_m = 0$; multiplying the first relation by $\lambda_m$ and subtracting,
$$\alpha_1(\lambda_1 - \lambda_m)v_1 + \cdots + \alpha_{m-1}(\lambda_{m-1} - \lambda_m)v_{m-1} = 0.$$
From the induction hypothesis (applied to the $m - 1$ eigenvectors $v_1, \dots, v_{m-1}$), $\alpha_i(\lambda_i - \lambda_m) = 0$ and, since $\lambda_i \neq \lambda_m$, $\alpha_i = 0$, $\forall i = \overline{1, m-1}$; then $\alpha_m v_m = 0$ with $v_m \neq 0$, so $\alpha_m = 0$. $\blacksquare$
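The theorem can be illustrated on a small example (the matrix is our own choice).

```python
# Eigenvectors for the distinct eigenvalues of A = [[2, 1], [0, 5]]:
# λ1 = 2 with v1 = (1, 0), λ2 = 5 with v2 = (1, 3); by Theorem 2.6.9
# they must be linearly independent, i.e. det[v1 | v2] != 0.
A = [[2, 1], [0, 5]]
v1, v2 = (1, 0), (1, 3)

def Av(v):
    return tuple(sum(A[i][j] * v[j] for j in range(2)) for i in range(2))

assert Av(v1) == (2 * v1[0], 2 * v1[1])   # U(v1) = 2 * v1
assert Av(v2) == (5 * v2[0], 5 * v2[1])   # U(v2) = 5 * v2
print(v1[0] * v2[1] - v1[1] * v2[0])      # 3, nonzero: independent
```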
2.6.10. Remark. Any linear operator over a finite-type vector space has a number of distinct eigenvalues at most equal to the dimension of the domain.
Proof. For each distinct eigenvalue we have at least an eigenvector, and since their set is a linearly independent one, the number of vectors in the set cannot exceed the dimension of the vector space. $\blacksquare$
2.6.11. Remark. Several eigenvectors may correspond to the same eigenvalue and their set may still be linearly independent.
2.6.12. Remark. Consider a fixed eigenvalue $\lambda$. The set of all eigenvectors attached to $\lambda$, together with the null vector, is a subspace.
Proof. $U(x_1) = \lambda x_1$ and $U(x_2) = \lambda x_2 \Rightarrow U(\alpha_1 x_1 + \alpha_2 x_2) = \alpha_1 U(x_1) + \alpha_2 U(x_2) = \lambda(\alpha_1 x_1 + \alpha_2 x_2)$. $\blacksquare$
Denoting by $N(\cdot) = (U - \lambda\,1_V)(\cdot)$, this subspace is $V_\lambda = \{v;\ U(v) - \lambda v = 0\} = N^{-1}(\{0\}) = \ker N(\cdot)$ (the eigenspace of $\lambda$).
2.6.15. Remark. For the linear operator $U(\cdot): V \to V$ consider the same basis $B$ both in the domain and the codomain.
When we write the relation $U(v) = \lambda v$ in matrix form, in terms of the $B$-representation in which the linear operator is represented by the matrix $A$, we get
$$A[v] = \lambda[v] \iff (A - \lambda I_n)[v] = 0,$$
which is a homogeneous system with the coordinates of the vector $v$ as unknowns and the scalar $\lambda$ as parameter. The necessary and sufficient condition for the system to accept nonzero solutions is that the determinant of the system matrix be zero, that is,
$$\det(A - \lambda I_n) = 0.$$
2.6.16. Definition. 1. The equation $\det(A - \lambda I_n) = 0$ (with the unknown $\lambda$) is called the characteristic equation of the linear operator $U(\cdot)$ / of the matrix $A$.
2. The function $\lambda \mapsto \det(A - \lambda I_n)$ is called the characteristic polynomial of the linear operator $U(\cdot)$ / of the matrix $A$ (it is a polynomial with the degree equal to the dimension of the vector space).
3. The roots of the characteristic polynomial are called the eigenvalues of the linear operator / matrix.
4. The multiplicity of an eigenvalue as a root of the characteristic polynomial is called the algebraic multiplicity of the eigenvalue.
5. The set of all distinct roots of the characteristic polynomial is called the spectrum of the linear operator and is denoted by $\sigma(A)$ or $\sigma(U(\cdot))$.
I) = ( 1)n
7! det (A
+ ( 1)n
A) = ( 1)n det (A
In ).
T r (A)
+ det (A)
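The n = 2 formula above can be checked directly. A minimal Python sketch (the sample matrix is an assumption for illustration): it builds the coefficients λ² − Tr(A)λ + det(A) and compares them with a pointwise evaluation of det(A − tI).

```python
def charpoly2(A):
    # det(A - t I) = t^2 - tr(A) t + det(A); coefficients listed from t^0 up
    (a, b), (c, d) = A
    return [a * d - b * c, -(a + d), 1]

A = [[4, 2], [1, 3]]
p = charpoly2(A)                      # [10, -7, 1]: t^2 - 7 t + 10
# cross-check against a direct evaluation of det(A - t I)
for t in range(-3, 4):
    direct = (A[0][0] - t) * (A[1][1] - t) - A[0][1] * A[1][0]
    assert direct == p[0] + p[1] * t + p[2] * t * t
```

The roots of t² − 7t + 10 are 2 and 5, which are the eigenvalues of this sample matrix.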
2.6.19. Remark. The characteristic polynomial does not depend on the basis in which the linear operator is represented (the basis in which the linear operator is represented with the matrix A).
Proof. When a linear operator is represented in two different bases with matrices A and B, these matrices are related by B = T⁻¹AT, where T is the change-of-basis matrix. Then

det(B − λI) = det(T⁻¹AT − λT⁻¹T) = det(T⁻¹(A − λI)T) = det T⁻¹ · det(A − λI) · det T = det(A − λI). □
2.6.20. Example. 1. When the matrix representing the linear operator is diagonal, A = diag(d₁, ..., dₙ), the characteristic polynomial is λ ↦ ∏ᵢ₌₁ⁿ (dᵢ − λ).
2. When the matrix representing the linear operator is upper triangular (which means that the elements below the main diagonal are zero, or aᵢⱼ = 0, ∀i, j with i > j), the characteristic polynomial is λ ↦ ∏ᵢ₌₁ⁿ (aᵢᵢ − λ) (aᵢᵢ are the elements of the main diagonal).
3. When the matrix representing the linear operator has a pseudodiagonal form (Remark 2.6.3),

A = [ A₁₁ 0 ⋯ 0 ; 0 A₂₂ ⋯ 0 ; ⋮ ⋱ ⋮ ; 0 0 ⋯ Aₚₚ ], with Aᵢᵢ ∈ M_{kᵢ×kᵢ} and Σᵢ₌₁ᵖ kᵢ = n,

then det(A − λIₙ) = ∏ᵢ₌₁ᵖ det(Aᵢᵢ − λI_{kᵢ}) (with the identity matrices of the corresponding dimensions): the characteristic polynomial is the product of the characteristic polynomials corresponding to the linear operators attached to the submatrices of the pseudodiagonal.
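The triangular case can be checked mechanically. A short Python sketch (the sample matrix is an assumption for illustration): it evaluates det(A − tI) with the Leibniz formula and compares it with the product of the shifted diagonal entries.

```python
from itertools import permutations

def det(M):
    # Leibniz formula; fine for the small matrices used here
    n = len(M)
    total = 0
    for perm in permutations(range(n)):
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if perm[i] > perm[j])
        term = -1 if inv % 2 else 1
        for i in range(n):
            term *= M[i][perm[i]]
        total += term
    return total

A = [[2, 5, 7], [0, 3, 1], [0, 0, 4]]     # upper triangular
for t in range(-2, 6):
    AtI = [[A[i][j] - (t if i == j else 0) for j in range(3)] for i in range(3)]
    assert det(AtI) == (2 - t) * (3 - t) * (4 - t)
```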
2.6.21. Remark. We consider mainly linear operators over the field of scalars R. Solving the characteristic equation may lead to the following possible situations:
(1) All the roots belong to R and they are distinct.
In this case, there is a basis of eigenvectors corresponding to the eigenvalues, all the eigenspaces have dimension 1 and the matrix of the linear operator in this basis (for both the domain and the codomain) has diagonal form, with the eigenvalues on the main diagonal, in the order given by the order of the eigenvectors in the basis.
(2) All the roots belong to R but some of them are not distinct.
In this case, for a given eigenvalue with (the algebraic) multiplicity greater than 1, the question is whether the geometric multiplicity of the eigenvalue equals the algebraic multiplicity of the eigenvalue. There are two subcases:
(a) For each eigenvalue, the geometric and the algebraic multiplicities are equal.
(b) For at least one eigenvalue, the geometric and the algebraic multiplicities are not equal.
(3) Some of the eigenvalues are not from R (they are complex, from C).
Even if the linear operator is represented by a real matrix, some of the eigenvalues may be complex, and in this case their associated eigenvectors will also have complex coordinates.
To solve this difficulty we will follow an indirect approach: even if the linear operator is initially considered over the vector space (Rⁿ, R), it will be considered from the beginning to be defined over the complex vector space (Cⁿ, C), and in this context the (complex) Jordan canonical form will be obtained, after which a certain decomplexification procedure will be applied to obtain the real counterpart of the Jordan canonical form.
Consider a finite-dimensional complex vector space (V, C) and a linear operator U(·) : V → V represented with the matrix A in a certain fixed basis, and with the characteristic polynomial

det(A − λIₙ) = ∏ᵢ₌₁ᵏ (λᵢ − λ)^{nᵢ}

(where some of the eigenvalues λᵢ may be complex numbers).
2.6.22. Definition. The linear operator U(·) : V → V is called diagonalizable when there is a basis of the vector space V where the attached matrix is diagonal.
2.6.23. Remark. The linear operator is diagonalizable ⟺ ∃P an invertible matrix (which is the change-of-basis matrix from the old basis to the new basis) such that the matrix P⁻¹AP is diagonal.
2.6.24. Remark. The linear operator is diagonalizable ⟺ there is a basis of V formed only by eigenvectors of U(·).

Proof. "⇒" Let B = {v₁, ..., vₙ} be a basis in which the attached matrix is diagonal, D = diag(d₁, ..., dₙ). In coordinates, [vₖ]_B = eₖ (the column k of the identity matrix: 1 on the line k and 0 for the rest), so

[U(vₖ)]_B = D[vₖ]_B = dₖ[vₖ]_B,

which means that vₖ is an eigenvector corresponding to the eigenvalue dₖ. This means that each vector of the basis is an eigenvector, while the elements of the main diagonal of the matrix D are eigenvalues of U(·).

"⇐" When the basis B = {v₁, ..., vₙ} has only vectors which are eigenvectors for U(·), say U(vₖ) = λₖvₖ, then for each v = Σₖ₌₁ⁿ αₖvₖ (represented in the basis B by [v]_B = [α₁ ⋯ αₙ]ᵀ) we get

U(v) = U(Σₖ₌₁ⁿ αₖvₖ) = Σₖ₌₁ⁿ αₖU(vₖ) = Σₖ₌₁ⁿ αₖλₖvₖ,

so that [U(v)]_B = diag(λ₁, ..., λₙ)[v]_B, which means that the matrix attached to U(·) in the basis B is diagonal. □

In other words, if we may find a basis of eigenvectors, then in this basis the linear operator has a diagonal matrix, with the eigenvalues on the diagonal.
2.6.25. Example. The linear operator represented in the standard basis with the matrix

A = [ 4 0 1 ; 1 −6 2 ; 5 0 0 ]

has:
– the characteristic polynomial: λ ↦ det(A − λI₃) = −λ³ − 2λ² + 29λ + 30;
– the characteristic equation: −λ³ − 2λ² + 29λ + 30 = 0, i.e. (5 − λ)(−6 − λ)(−1 − λ) = 0;
– the roots of the characteristic equation (the eigenvalues): λ₁ = 5, λ₂ = −6, λ₃ = −1;
– the eigenvectors attached to the eigenvalue λ₁ = 5 are the nonzero solutions of the system (A − 5I₃)[x] = 0:

(−1)x₁ + 0·x₂ + 1·x₃ = 0
1·x₁ + (−11)x₂ + 2·x₃ = 0
5·x₁ + 0·x₂ + (−5)x₃ = 0

which means x₃ = x₁ and x₂ = (3/11)x₁, so V_{λ₁} = { α[1, 3/11, 1]ᵀ ; α ∈ R };
– the eigenvectors attached to the eigenvalue λ₂ = −6 are V_{λ₂} = { α[0, 1, 0]ᵀ ; α ∈ R };
– the eigenvectors attached to the eigenvalue λ₃ = −1 are V_{λ₃} = { α[−1/5, 9/25, 1]ᵀ ; α ∈ R }.

We know that eigenvectors corresponding to distinct eigenvalues are linearly independent from 2.6.9, so that if we choose an eigenvector for each eigenvalue we get 3 eigenvectors which will form a linearly independent set; as it is also maximal (the embedding space has dimension 3), it is a basis.

If the basis is { [1, 3/11, 1]ᵀ, [0, 1, 0]ᵀ, [−1/5, 9/25, 1]ᵀ }, the change-of-basis matrix is (the matrix with the vectors as columns)

T = [ 1 0 −1/5 ; 3/11 1 9/25 ; 1 0 1 ], its inverse is T⁻¹ = [ 5/6 0 1/6 ; 4/55 1 −19/55 ; −5/6 0 5/6 ],

while T⁻¹AT = diag(5, −6, −1), which is a diagonal form.

When the eigenvectors are placed in a different order, B = { [−1/5, 9/25, 1]ᵀ, [1, 3/11, 1]ᵀ, [0, 1, 0]ᵀ }, then

T = [ −1/5 1 0 ; 9/25 3/11 1 ; 1 1 0 ], T⁻¹ = [ −5/6 0 5/6 ; 5/6 0 1/6 ; 4/55 1 −19/55 ],

and T⁻¹AT = diag(−1, 5, −6) (also a diagonal form, but on the diagonal the order of the eigenvalues is changed).

If we choose different eigenvectors, B = { [11, 3, 11]ᵀ, [0, 5, 0]ᵀ, [−5, 9, 25]ᵀ }, then

T = [ 11 0 −5 ; 3 5 9 ; 11 0 25 ], T⁻¹ = [ 5/66 0 1/66 ; 4/275 1/5 −19/275 ; −1/30 0 1/30 ],

and T⁻¹AT = diag(5, −6, −1).
2.6.26. Theorem. (The Hamilton–Cayley Theorem) Consider a linear operator U(·) which is represented with the matrix A in a certain basis and with the characteristic polynomial P(λ) = det(A − λI). Then P(A) = 0 ∈ Mₙ.

Proof. Write P(λ) = a₀ + a₁λ + ⋯ + aₙλⁿ. For λ not an eigenvalue,

(A − λI)⁻¹ = (1 / det(A − λI)) B(λ) = (1 / P(λ)) B(λ),

where B(λ) is the matrix with elements Aⱼᵢ (the cofactors (algebraic complements) of the matrix (A − λI)), which are polynomials of degree at most n − 1 in λ. So

B(λ) = B₀ + B₁λ + ⋯ + B_{n−1}λ^{n−1}, with Bᵢ ∈ Mₙ,

and P(λ)I = (A − λI)B(λ), where we multiply on the right-hand term and organize by the increasing exponents of λ, to obtain:

a₀I + a₁λI + ⋯ + aₙλⁿI = AB₀ + (AB₁ − B₀)λ + (AB₂ − B₁)λ² + ⋯ + (AB_{n−1} − B_{n−2})λ^{n−1} + (−B_{n−1})λⁿ;

because this is an equality of two polynomials (with matrix coefficients), their corresponding coefficients should be equal. We get the equalities:

a₀I = AB₀
a₁I = AB₁ − B₀
a₂I = AB₂ − B₁
⋯
a_{n−1}I = AB_{n−1} − B_{n−2}
aₙI = −B_{n−1}.

We multiply conveniently to the left (the first equality by I, the second by A, the third by A², ..., the last by Aⁿ) in order to obtain the cancellation of the terms to the right, and we get:

a₀I = AB₀
a₁A = A²B₁ − AB₀
a₂A² = A³B₂ − A²B₁
⋯
a_{n−1}A^{n−1} = AⁿB_{n−1} − A^{n−1}B_{n−2}
aₙAⁿ = −AⁿB_{n−1}.

Adding all these equalities, the right-hand side telescopes to zero:

a₀I + a₁A + a₂A² + ⋯ + a_{n−1}A^{n−1} + aₙAⁿ = 0,

which means P(A) = 0 (the null matrix) and correspondingly P(U(·)) = O(·) (the null operator). □
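The theorem can be checked on the 3×3 matrix of Example 2.6.25, whose characteristic polynomial is −λ³ − 2λ² + 29λ + 30. A small Python sketch:

```python
A = [[4, 0, 1], [1, -6, 2], [5, 0, 0]]
n = 3

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

I = [[int(i == j) for j in range(n)] for i in range(n)]
A2 = mat_mul(A, A)
A3 = mat_mul(A2, A)
# characteristic polynomial of A: det(A - t I) = -t^3 - 2 t^2 + 29 t + 30
P_A = [[-A3[i][j] - 2 * A2[i][j] + 29 * A[i][j] + 30 * I[i][j]
        for j in range(n)] for i in range(n)]
assert P_A == [[0] * n for _ in range(n)]     # P(A) is the null matrix
```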
2.6.27. Remark. The set Mₙ(R) of the square matrices with dimension n is a vector space with dimension n², so that for each matrix A ∈ Mₙ(R), span{Aᵏ; k ∈ N} has dimension at most n². The Hamilton–Cayley Theorem shows that dim span{Aᵏ; k ∈ N} ≤ n (the power Aⁿ, and with it every higher power, is a linear combination of I, A, ..., A^{n−1}). Equality doesn't always take place; the least number s for which the powers A⁰, A¹, ..., Aˢ are linearly dependent provides a certain polynomial m(λ) = Σₖ₌₀ˢ μₖλᵏ of minimal degree with m(A) = 0 (the minimal polynomial of the matrix A).
2.6.28. Remark. (The greatest common divisor of several polynomials) A certain property which will be used for the Jordan canonical form is the property 2.6.3, which may be seen in particular forms in 2.6.1 and in 2.6.2.
For two polynomials p₁(x) and p₂(x) (with real or complex coefficients), their greatest common divisor is a new polynomial d(x) (with coefficients from the same field) which divides the two polynomials and, with this property, has maximal degree (there is no other polynomial of bigger degree which divides both polynomials). The existence and finding of the polynomial d(x) may be obtained by factorization (which is a procedure limited by the impossibility of effectively finding the factorization of a polynomial) or by using the Euclid Algorithm for finding the greatest common divisor for polynomials, as the last nonzero remainder; the unicity of d(x) may be obtained with the extra condition that the polynomial should have the leading coefficient 1². Some of the properties of the polynomial d(x) = gcd(p₁(·), p₂(·)) are:
– if d₁(x) | p₁(x) and d₁(x) | p₂(x) then d₁(x) | d(x) (if a polynomial d₁(·) divides both of p₁(·) and p₂(·), then d₁(·) also divides their greatest common divisor d(·));
– gcd(p₁(·), p₂(·)) = gcd(p₂(·), p₁(·)) (in the Euclid algorithm, it doesn't matter which polynomial is the first);
– if gcd(p₁(·), q(·)) = 1 then gcd(p₁(·), p₂(·)) = gcd(p₁(·), p₂(·)q(·));
– d(·) is the least-degree polynomial with the property: there are polynomials h₁(·) and h₂(·) such that

(2.6.1) d(·) = h₁(·)p₁(·) + h₂(·)p₂(·);

– for three or more polynomials, the greatest common divisor is defined recursively:

gcd(p₁(·), p₂(·), p₃(·)) = gcd(gcd(p₁(·), p₂(·)), p₃(·)),

while the property 2.6.1 may be extended for three polynomials in the following way:

(2.6.2) gcd(p₁(·), p₂(·), p₃(·)) = h₁′(·)gcd(p₁(·), p₂(·)) + h₂′(·)p₃(·) = h₁′(·)[h₁(·)p₁(·) + h₂(·)p₂(·)] + h₂′(·)p₃(·);

– in general, gcd(p₁(·), ..., pₖ(·)) = gcd(gcd(p₁(·), ..., p_{k−1}(·)), pₖ(·)), and the property 2.6.2 may be extended for k polynomials in the following way: ∃hᵢ(·) polynomials, i = 1, k, such that

(2.6.3) gcd(p₁(·), p₂(·), ..., pₖ(·)) = Σᵢ₌₁ᵏ hᵢ(·)pᵢ(·).

²The leading coefficient is the coefficient of the monomial with the greatest degree of the polynomial. These polynomials are also called monic polynomials.
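The Euclid Algorithm mentioned above can be sketched in Python for polynomials stored as coefficient lists (lowest degree first); the example polynomials are an illustrative assumption.

```python
from fractions import Fraction

def poly_mod(a, b):
    # remainder of a modulo b; coefficient lists, lowest degree first
    a = [Fraction(x) for x in a]
    b = [Fraction(x) for x in b]
    while len(a) >= len(b) and any(a):
        if a[-1] == 0:
            a.pop()
            continue
        q = a[-1] / b[-1]
        for i in range(len(b)):
            a[len(a) - len(b) + i] -= q * b[i]
        a.pop()
    while a and a[-1] == 0:
        a.pop()
    return a

def poly_gcd(a, b):
    # Euclid's algorithm: the gcd is the last nonzero remainder, made monic
    while b:
        a, b = b, poly_mod(a, b)
    return [Fraction(x) / a[-1] for x in a]

p1 = [2, -3, 1]        # (x - 1)(x - 2)
p2 = [3, -4, 1]        # (x - 1)(x - 3)
g = poly_gcd(p1, p2)
assert g == [-1, 1]    # x - 1, monic
```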
2.6.29. Definition. A linear operator N(·) is called nilpotent when there is r ∈ N such that Nʳ(·) = O(·) (the null operator).

2.6.30. Remark. If N(·) is nilpotent, f ≠ 0 is a nonzero vector and k + 1 = min{s; Nˢ(f) = 0}, then the set {f, N(f), ..., Nᵏ(f)} is linearly independent.

Proof. Note that Nᵏ(f) ≠ 0 while N^{k+1}(f) = 0. Assume Σᵢ₌₀ᵏ αᵢNⁱ(f) = 0. Applying Nᵏ(·) to this equality, all terms with i ≥ 1 vanish and we get α₀Nᵏ(f) = 0, so α₀ = 0. Applying next N^{k−1}(·) we get α₁Nᵏ(f) = 0, so α₁ = 0, and continuing with decreasing powers we get that ∀j = 0, k: αⱼ = 0. □
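The chain in Remark 2.6.30 can be seen concretely for a sample nilpotent matrix (chosen here for illustration): starting from f, repeated application of N produces the vectors f, N(f), N²(f), which are linearly independent.

```python
N = [[0, 1, 0], [0, 0, 1], [0, 0, 0]]    # nilpotent: N^3 = 0

def mat_vec(M, v):
    return [sum(a * x for a, x in zip(row, v)) for row in M]

f = [0, 0, 1]                            # N^2(f) != 0 and N^3(f) = 0, so k = 2
chain = [f]
while any(mat_vec(N, chain[-1])):
    chain.append(mat_vec(N, chain[-1]))
assert chain == [[0, 0, 1], [0, 1, 0], [1, 0, 0]]
# independence: the 3x3 determinant of the chain vectors is nonzero
a, b, c = chain
d = (a[0] * (b[1] * c[2] - b[2] * c[1])
     - a[1] * (b[0] * c[2] - b[2] * c[0])
     + a[2] * (b[0] * c[1] - b[1] * c[0]))
assert d != 0
```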
2.6.31. Theorem. (Reducing a complex linear operator to nilpotent linear operators) Consider a linear operator U(·) : Cⁿ → Cⁿ with the characteristic polynomial P(λ) = ∏ⱼ₌₁ᵏ (λⱼ − λ)^{nⱼ}. Denote V_{λⱼ} = ker((U − λⱼI)^{nⱼ}(·)). Then:
(1) each set V_{λⱼ} is an U(·)-invariant subspace;
(2) V = V_{λ₁} ⊕ ⋯ ⊕ V_{λₖ} (direct sum);
(3) the restriction Uⱼ(·) : V_{λⱼ} → V_{λⱼ}, defined by Uⱼ(x) = U(x), is of the form Uⱼ(·) = Nⱼ(·) + λⱼI(·), where Nⱼ(·) = (Uⱼ − λⱼI)(·) is nilpotent.

Proof.
(1) Take v ∈ V_{λⱼ} = ker((U − λⱼI)^{nⱼ}(·)); then (U − λⱼIₙ)^{nⱼ}(v) = 0. Writing U(·) = (U − λⱼI)(·) + λⱼI(·),

(U − λⱼI)^{nⱼ}(U(v)) = (U − λⱼI)^{nⱼ}((U − λⱼI)(v) + λⱼv) = (U − λⱼI)^{nⱼ+1}(v) + λⱼ(U − λⱼI)^{nⱼ}(v) = 0 + 0 = 0 ⇒ U(v) ∈ V_{λⱼ},

so the subspace V_{λⱼ} is U(·)-invariant.
(2) We have P(λ) = ∏ⱼ₌₁ᵏ (λⱼ − λ)^{nⱼ}. Denote Pⱼ(λ) = (λⱼ − λ)^{nⱼ} and Qⱼ(λ) = ∏ᵢ₌₁,ᵢ≠ⱼᵏ (λᵢ − λ)^{nᵢ} = P(λ)/Pⱼ(λ). We have gcd(Q₁(·), ..., Qₖ(·)) = 1, so from Remark 2.6.28 and the property 2.6.3 there are some polynomials hⱼ(·) such that Σⱼ₌₁ᵏ hⱼ(λ)Qⱼ(λ) = 1. Passing to operator polynomials, and noting that hⱼ(U(·))Qⱼ(U(·)) = Qⱼ(U(·))hⱼ(U(·)),

Σⱼ₌₁ᵏ hⱼ(U(·))((Qⱼ(U(·)))(v)) = Σⱼ₌₁ᵏ (Qⱼ(U(·)))((hⱼ(U(·)))(v)) = v, ∀v ∈ V;

because Pⱼ(U(·))Qⱼ(U(·)) = P(U(·)) = O(·) (the Hamilton–Cayley Theorem), we get that (Qⱼ(U(·)))((hⱼ(U(·)))(v)) ∈ V_{λⱼ}, so that any vector may be written as a sum of vectors from V_{λⱼ}, j = 1, k, and so we have that Σⱼ₌₁ᵏ V_{λⱼ} = V.
For the directness of the sum, take vⱼ ∈ V_{λⱼ}, j = 1, k, such that Σⱼ₌₁ᵏ vⱼ = 0. For a fixed index i, Qᵢ(U(·))(vⱼ) = 0 for all j ≠ i (because Qᵢ(·) contains the factor (λⱼ − λ)^{nⱼ}), so

Qᵢ(U(·))(vᵢ) = Qᵢ(U(·))(−Σ_{l=1, l≠i}ᵏ v_l) = 0;

since the polynomials Pᵢ(·) and Qᵢ(·) are relatively prime, from 2.6.1 we get the existence of the polynomials R₁(·) and R₂(·) such that R₁(λ)Pᵢ(λ) + R₂(λ)Qᵢ(λ) = 1, which means, by passing to linear operator polynomials, that

vᵢ = R₁(U(·))((Pᵢ(U(·)))(vᵢ)) + R₂(U(·))((Qᵢ(U(·)))(vᵢ)) = 0 + 0 = 0;

we get that the decomposition is unique and the sum is direct.
(3) Consider the linear operator Uⱼ(·) : V_{λⱼ} → V_{λⱼ}, Uⱼ(v) = U(v), ∀v ∈ V_{λⱼ} (the restriction of U(·) over V_{λⱼ}). Because V_{λⱼ} = ker((U − λⱼI)^{nⱼ}(·)), we have (Uⱼ − λⱼI)^{nⱼ}(·) = O(·) on V_{λⱼ}, so Nⱼ(·) = (Uⱼ − λⱼI)(·) is nilpotent and Uⱼ(·) = Nⱼ(·) + λⱼI(·). □
2.6.32. Theorem. (The Jordan canonical form for nilpotent operators) Consider a nilpotent linear operator N(·) : V → V over a finite-type vector space V. Then there is a basis for which the matrix representation of N(·) is formed by Jordan cells attached to the zero eigenvalue.

Proof. N(·) nilpotent ⇒ ∃!r ∈ N such that ker Nʳ(·) = V and ker N^{r−1}(·) ≠ V (the existence comes from the nilpotency of N(·), while the unicity is a result of r being the smallest such exponent); moreover, we have x ∈ ker Nᵏ(·) ⇒ Nᵏ(x) = 0 ⇒ N(Nᵏ(x)) = 0 ⇒ x ∈ ker N^{k+1}(·), so that the kernels of the successive powers of N(·) form a chain, which is finite (because of the finite dimension of V) and equal with V from the exponent r above:

{0} = ker N⁰(·) ⊊ ker N¹(·) ⊊ ker N²(·) ⊊ ⋯ ⊊ ker N^{r−1}(·) ⊊ ker Nʳ(·) = V.

Denote mₖ = dim ker Nᵏ(·); then 0 = m₀ < m₁ < m₂ < ⋯ < m_{r−1} < m_r = n. For each k = 1, r choose a complement Q_k such that ker Nᵏ(·) = ker N^{k−1}(·) ⊕ Q_k, and denote q_k = dim Q_k = m_k − m_{k−1}; denote p₁ = q_r = m_r − m_{r−1}.
Choose a basis {f₁, ..., f_{p₁}} of Q_r. Then N(fⱼ) ∈ ker N^{r−1}(·) (because N^{r−1}(N(fⱼ)) = Nʳ(fⱼ) = 0). Moreover, if a linear combination of these images would belong to ker N^{r−2}(·), then from N^{r−2}(Σᵢ₌₁^{p₁} αᵢN(fᵢ)) = 0 we get N^{r−1}(Σᵢ₌₁^{p₁} αᵢfᵢ) = 0, so Σᵢ₌₁^{p₁} αᵢfᵢ ∈ ker N^{r−1}(·) ∩ Q_r = {0} and all αᵢ = 0. So {N(f₁), ..., N(f_{p₁})} is a linearly independent set which may be taken inside Q_{r−1}; in particular q_{r−1} ≥ p₁. Complete {N(f₁), ..., N(f_{p₁})} up to a basis of Q_{r−1} with vectors f_{p₁+1}, ..., f_{p₁+p₂} (where p₂ = q_{r−1} − p₁). Repeating the same argument, pass from Q_{r−1} to Q_{r−2}, and so on, to obtain the table:

f₁, ..., f_{p₁} — basis in Q_r;
N(f₁), ..., N(f_{p₁}), f_{p₁+1}, ..., f_{p₁+p₂} — basis in Q_{r−1};
N²(f₁), ..., N²(f_{p₁}), N(f_{p₁+1}), ..., N(f_{p₁+p₂}), f_{p₁+p₂+1}, ... — basis in Q_{r−2};
.......................................................................
N^{r−1}(f₁), ..., N^{r−1}(f_{p₁}), N^{r−2}(f_{p₁+1}), ..., N^{r−2}(f_{p₁+p₂}), ..., f_{p₁+⋯+p_{r−1}+1}, ..., f_{p₁+⋯+p_r} — basis in Q₁.

The last line has only eigenvectors, all attached to the zero eigenvalue.
Each column of the table is a linearly independent set which determines an N(·)-invariant subspace; the first p₁ subspaces have dimension r; the next p₂ subspaces have dimension r − 1; ...; the last p_r subspaces have dimension 1. The entire space V is a direct sum of the subspaces on the columns; for the first column, choose as basis the set

{N^{r−1}(f₁), N^{r−2}(f₁), ..., N(f₁), f₁}.

In this basis the restriction of N(·) is given by the values of N(·) at the vectors forming the basis:
f₁ ∈ V = ker Nʳ(·) ⇒ N(N^{r−1}(f₁)) = Nʳ(f₁) = 0, so the first column of the matrix is zero;
N(N^{r−2}(f₁)) = N^{r−1}(f₁), which is the first basis vector, so the second column is [1, 0, ..., 0]ᵀ;
..............................................
N(f₁) is the next-to-last basis vector, so the last column is [0, ..., 0, 1, 0]ᵀ.
So the restriction of N(·) to this invariant subspace is represented by

N|_{sp inv}(x) = J₀(r)x, where J₀(r) = [ 0 1 0 ⋯ 0 ; 0 0 1 ⋯ 0 ; ⋮ ⋱ ⋮ ; 0 0 0 ⋯ 1 ; 0 0 0 ⋯ 0 ]

(1 on the superdiagonal, 0 elsewhere). Finally, we get for the matrix representation of N(·) the following block-diagonal structure: p₁ cells J₀(r) of order r, then p₂ cells J₀(r − 1) of order r − 1, ..., then p_r cells J₀(1) of order 1, with zero blocks outside the cells. □
2.6.33. Definition. An r × r matrix of the form

J_λ(r) = [ λ 1 0 ⋯ 0 ; 0 λ 1 ⋯ 0 ; ⋮ ⋱ ⋱ ⋮ ; 0 0 ⋯ λ 1 ; 0 0 ⋯ 0 λ ]

(λ on the main diagonal and 1 on the superdiagonal) is called a Jordan cell of order r attached to the scalar λ. A matrix of dimension (r₁ + ⋯ + rₖ) which has a pseudodiagonal form with the Jordan cells J_{λ₁}(r₁), ..., J_{λₖ}(rₖ) on the diagonal (several cells, possibly of different orders, may be attached to the same scalar) is called a Jordan matrix.
2.6.1. Decomplexification of the Complex Jordan Canonical Form. When the field of scalars is real, the eigenvalues are, generally speaking, complex numbers, and so they don't belong to the field of scalars; moreover, the attached eigenvectors have complex coordinates and thus they don't belong to the vector space. The following procedure obtains a pseudodiagonal form for this situation:
Consider the space (Rⁿ, R) and a linear operator U(·) : Rⁿ → Rⁿ with the representation (in the standard basis) given by the matrix A [[U(x)]_E = A[x]_E]. The characteristic polynomial P(λ) = det(A − λI) may have complex roots λ = α + iβ ∈ C ∖ R; when a polynomial with real coefficients has a complex root, it also has as a root its complex conjugate λ̄ = α − iβ. The same matrix A defines a new linear operator over Cⁿ, given by [U(x)]_E = A[x]_E. Over the field C this linear operator admits a Jordan basis in which for the complex eigenvalue λ we have m corresponding basis vectors (some of them are eigenvectors) with complex coordinates, denoted by f₁, ..., fₘ; then their complex conjugates f̄₁, ..., f̄ₘ form the corresponding part of the Jordan basis for the eigenvalue λ̄.
103
; Af1q = f1q
Af11 = f11 ;
.........................................................
Afn11 = fn11
+ fn11 ;
; Afnqq = fnqq
+ fnqq
The vectors fjk are linear independent and they form the corresponding part of the Jordan basis for
the eigenvalue .
Starting from the vectors attached to these two complex conjugated eigenvalues we may build a basis
with real coordinates by replacing each pair of complex conjugated vectors fjk ; fjk with the pair of real
vectors
gjk =
1
2
1
2i
fjk
+ fjk ;
Afjk = fjk
+ fjk
i Im ( )) (Re (f )
i Im (f )) =
2 Im ( ) Im (f ) = 2 (Re ( ) g
Im ( ) h) ;
f = (Re ( ) + i Im ( )) (Re (f ) + i Im (f ))
(Re ( )
i Im ( )) (Re (f )
i Im (f )) =
= 2i Re ( ) Im (f ) + 2i Im ( ) Re (f )
it follows that
Agjk = gjk
Ahkj = hkj
+ Re ( ) gjk
+ Re ( ) gjk + Im ( ) hkj
0
0
Im ( ) hkj ;
1
A
0
@
+ i ) are:
1
A
104
2. LINEAR TRANSFORMATIONS
0
B
B
B
B
B
B
B
B
B
B
B
B
B
@
0
0
0
0
0
1 0 0 C
B
B
C
B
B
C
B
B 0
0
0
C
B
B
C
B
B
C
B 0 0
B 0 0
1 C
B
B
A
@
@
0 0
0 0 0
1
0
1 0 0 0 0
C
B
C
B
B
1 0 0 0 C
C
B
C
B
B 0 0
0
0 0 0 C
C
B
C
B
C
B 0 0
0 0
1 0 C
B
C
B
C
B
0 0 0
1 C
B 0 0
A
@
0 0
0 0 0 0
:::::::::::::::::::::::::::::::::
1
0
0 C
C
1 C
C
C
C
C
A
0
1
0
C
C
0 C
C
C
0 C
C
C
1 C
C
C
C
C
A
2.6.2. Algorithm for obtaining the Jordan Canonical Form. Given a linear operator U(·) represented by the matrix A:
(1) Find the characteristic polynomial P(λ) = det(A − λI) and its distinct roots λⱼ ∈ C, j = 1, k, with the algebraic multiplicities nⱼ.
(2) For each eigenvalue λⱼ find the subspace V_{λⱼ} = ker((U − λⱼI)^{nⱼ}(·)).
(3) By Theorem 2.6.31, V = V_{λ₁} ⊕ ⋯ ⊕ V_{λₖ} and each V_{λⱼ} is U(·)-invariant.
(4) Find the restriction Uⱼ(·) of the linear operator U(·) over the set V_{λⱼ}, Uⱼ(·) : V_{λⱼ} → V_{λⱼ}; Uⱼ(x) = U(x), ∀x ∈ V_{λⱼ}.
(5) Find the nilpotent linear operator Nⱼ(·) : V_{λⱼ} → V_{λⱼ}, Nⱼ(x) = (Uⱼ − λⱼI)(x) = Uⱼ(x) − λⱼx, ∀x ∈ V_{λⱼ}.
(6) Find the chain of kernels ker Nⱼ(·) ⊂ ker Nⱼ²(·) ⊂ ⋯ ⊂ ker Nⱼ^{rⱼ}(·) = V_{λⱼ}.
(7) Find rⱼ = min{k; dim ker Nⱼᵏ(·) = dim V_{λⱼ} = nⱼ}.
(8) For k = 1, rⱼ find the complements Q_kʲ with ker Nⱼᵏ(·) = ker Nⱼ^{k−1}(·) ⊕ Q_kʲ, and denote mₖʲ = dim ker Nⱼᵏ(·), qₖʲ = mₖʲ − m_{k−1}ʲ.
(9) Find p₁ʲ = m_{rⱼ}ʲ − m_{rⱼ−1}ʲ = q_{rⱼ}ʲ, and pᵢʲ = q_{rⱼ−i+1}ʲ − q_{rⱼ−i+2}ʲ for i = 2, rⱼ (the numbers of Jordan cells of each order).
(10) Choose a basis f₁ʲ, ..., f_{p₁ʲ}ʲ of Q_{rⱼ}ʲ.
(11) Complete Nⱼ(f₁ʲ), ..., Nⱼ(f_{p₁ʲ}ʲ) up to a basis of Q_{rⱼ−1}ʲ with vectors f_{p₁ʲ+1}ʲ, ..., f_{p₁ʲ+p₂ʲ}ʲ, and continue downwards to Q₁ʲ, obtaining the table:

f₁ʲ, ..., f_{p₁ʲ}ʲ — basis in Q_{rⱼ}ʲ;
Nⱼ(f₁ʲ), ..., Nⱼ(f_{p₁ʲ}ʲ), f_{p₁ʲ+1}ʲ, ..., f_{p₁ʲ+p₂ʲ}ʲ — basis in Q_{rⱼ−1}ʲ;
Nⱼ²(f₁ʲ), ..., Nⱼ²(f_{p₁ʲ}ʲ), Nⱼ(f_{p₁ʲ+1}ʲ), ..., Nⱼ(f_{p₁ʲ+p₂ʲ}ʲ), f_{p₁ʲ+p₂ʲ+1}ʲ, ... — basis in Q_{rⱼ−2}ʲ;
.......................................................................
Nⱼ^{rⱼ−1}(f₁ʲ), ..., Nⱼ^{rⱼ−1}(f_{p₁ʲ}ʲ), Nⱼ^{rⱼ−2}(f_{p₁ʲ+1}ʲ), ..., f_{p₁ʲ+⋯+p_{rⱼ−1}ʲ+1}ʲ, ..., f_{p₁ʲ+⋯+p_{rⱼ}ʲ}ʲ — basis in Q₁ʲ.

(12) The Jordan basis is obtained by ordering the above vectors in the following way: choose the vectors on the columns, from below to above (for each column, from the bigger exponent to the smaller exponent); for example, for the first column:

Nⱼ^{rⱼ−1}(f₁ʲ), Nⱼ^{rⱼ−2}(f₁ʲ), ..., Nⱼ(f₁ʲ), f₁ʲ.

(13) Obtain in this basis a matrix for the linear operator which has p₁ʲ cells of order rⱼ, p₂ʲ cells of order rⱼ − 1, and so on.
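Steps (6)–(9) of the algorithm only require the dimensions mₖ of the kernels of the powers of the nilpotent part. A Python sketch (the sample nilpotent matrix B is chosen here for illustration) computes mₖ = n − rank(Bᵏ) with exact arithmetic; the resulting chain 2, 3, 4 corresponds to one cell of order 3 and one cell of order 1.

```python
from fractions import Fraction

def rank(M):
    # Gaussian elimination over the rationals
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

B = [[0, -1, 1, 0], [1, 2, 0, 1], [-1, 0, -2, -1], [0, -1, 1, 0]]  # nilpotent
n = 4
P = [[int(i == j) for j in range(n)] for i in range(n)]
dims = []
for _ in range(3):
    P = mat_mul(P, B)
    dims.append(n - rank(P))       # dim ker B^k = n - rank(B^k)
assert dims == [2, 3, 4]
```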
2.6.3. Examples.

2.6.36. Example (One eigenvalue with multiplicity 4). Consider U(·) : R⁴ → R⁴ with the matrix in the standard basis given by

A = [ 1 −1 1 0 ; 1 3 0 1 ; −1 0 −1 −1 ; 0 −1 1 1 ];

determine the Jordan canonical form and a Jordan basis.

Step 1: the matrix is A above.
Step 2: the characteristic equation is det(A − λI₄) = (1 − λ)⁴ = 0, so λ₁ = 1, n₁ = 4.
Step 3: consider the linear operator (U − 1 · 1_{R⁴})(·); its matrix is

A − I₄ = [ 0 −1 1 0 ; 1 2 0 1 ; −1 0 −2 −1 ; 0 −1 1 0 ] =: B.

Its powers are

(A − I₄)² = [ −2 −2 −2 −2 ; 2 2 2 2 ; 2 2 2 2 ; −2 −2 −2 −2 ] = B², (A − I₄)³ = 0 = B³ (the zero matrix),

and consequently (A − I₄)⁴ = B⁴ = 0 as well. So V_{λ₁} = ker((U − λ₁I)^{n₁}(·)) = ker((U − 1 · 1_{R⁴})⁴(·)) = R⁴.
Step 4: the restriction U₁(·) : V_{λ₁} → V_{λ₁}, U₁(x) = U(x), ∀x ∈ V_{λ₁}, is U(·) itself.
Step 5: the nilpotent linear operator is N₁(·) = (U₁ − 1 · I₄)(·), represented by the matrix B.
Steps 6–8: the chain of kernels and their dimensions mₖ¹:

ker N₁⁰(·) = {0}, m₀¹ = 0;
ker N₁(·) = span{ [−2, 1, 1, 0]ᵀ, [−1, 0, 0, 1]ᵀ }, m₁¹ = 2;
ker N₁²(·) = span{ [−1, 1, 0, 0]ᵀ, [−1, 0, 1, 0]ᵀ, [−1, 0, 0, 1]ᵀ } (the solutions of x₁ + x₂ + x₃ + x₄ = 0), m₂¹ = 3;
ker N₁³(·) = R⁴, m₃¹ = 4.

So r₁ = 3 and q₁¹ = m₁¹ − m₀¹ = 2, q₂¹ = m₂¹ − m₁¹ = 1, q₃¹ = m₃¹ − m₂¹ = 1.
Step 9: p₁¹ = q₃¹ = 1 (one cell of order 3), p₂¹ = q₂¹ − q₃¹ = 0 (no cell of order 2), p₃¹ = q₁¹ − q₂¹ = 2 − 1 = 1 (one cell of order 1). So the Jordan matrix is

J = [ 1 1 0 0 ; 0 1 1 0 ; 0 0 1 0 ; 0 0 0 1 ].

Step 10 [Finding the Jordan basis]: for each k = 1, r₁ = 1, 3 consider the decomposition ker N₁ᵏ(·) = ker N₁ᵏ⁻¹(·) ⊕ Qₖ¹.
ker N₁³(·) = R⁴ = ker N₁²(·) ⊕ Q₃¹, so complete the basis of ker N₁²(·) up to a basis of R⁴; the completion may be done with any vector from R⁴ ∖ ker N₁²(·), for example with the vector v₁ = [1, 0, 0, 0]ᵀ. The set { [−1,1,0,0]ᵀ, [−1,0,1,0]ᵀ, [−1,0,0,1]ᵀ, v₁ } is linearly independent, so it is a basis of R⁴ = ker N₁³(·) in which the first 3 vectors are a basis for ker N₁²(·).
v₁ ∈ R⁴ = ker N₁³(·) ⇒ N₁³(v₁) = 0; v₁ ∉ ker N₁²(·) ⇒ N₁²(v₁) ≠ 0; moreover N₁(N₁²(v₁)) = N₁³(v₁) = 0, so N₁²(v₁) ∈ ker N₁(·), and N₁(v₁) ∈ ker N₁²(·) ∖ ker N₁(·). The chain is:

{0} = ker N₁⁰(·) ⊂ ker N₁(·) [dim 2] ⊂ ker N₁²(·) [dim 3] ⊂ ker N₁³(·) = R⁴ [dim 4],

with N₁²(v₁) ∈ ker N₁(·), N₁(v₁) ∈ Q₂¹, v₁ ∈ Q₃¹. Concretely,

v₁ = [1, 0, 0, 0]ᵀ, N₁(v₁) = Bv₁ = [0, 1, −1, 0]ᵀ, N₁²(v₁) = B²v₁ = [−2, 2, 2, −2]ᵀ,

and the vectors v₁, N₁(v₁), N₁²(v₁) are linearly independent (Remark 2.6.30).
To complete the basis, another vector has to be chosen to correspond to the cell of order 1, which means from ker N₁(·), which should be linearly independent with the one already chosen from ker N₁(·), which means with N₁²(v₁). Since

N₁²(v₁) = [−2, 2, 2, −2]ᵀ = 2 · [−2, 1, 1, 0]ᵀ − 2 · [−1, 0, 0, 1]ᵀ ∈ ker N₁(·),

choose v₂ = [−1, 0, 0, 1]ᵀ.
The Jordan basis is J = {N₁²(v₁), N₁(v₁), v₁, v₂} = { [−2,2,2,−2]ᵀ, [0,1,−1,0]ᵀ, [1,0,0,0]ᵀ, [−1,0,0,1]ᵀ }.
The matrix of N₁(·) in this basis has as columns the representations of the images through N₁(·) of the basis vectors:

N₁(N₁²(v₁)) = 0 ⇒ [N₁(N₁²(v₁))]_J = [0,0,0,0]ᵀ; N₁(N₁(v₁)) = N₁²(v₁) ⇒ [N₁(N₁(v₁))]_J = [1,0,0,0]ᵀ;
N₁(v₁) = N₁(v₁) ⇒ [N₁(v₁)]_J = [0,1,0,0]ᵀ; N₁(v₂) = 0 · v₂ ⇒ [N₁(v₂)]_J = [0,0,0,0]ᵀ.

So the matrix of N₁(·) in the basis J is

[ 0 1 0 0 ; 0 0 1 0 ; 0 0 0 0 ; 0 0 0 0 ].

The connection between the nilpotent linear operator and the restriction of the original linear operator (in this case, exactly the original linear operator) is N₁(x) = U₁(x) − 1 · I₄(x) ⇒ U₁(x) = (N₁ + 1 · I₄)(x), so the matrix of U(·) in the basis J is

[ 0 1 0 0 ; 0 0 1 0 ; 0 0 0 0 ; 0 0 0 0 ] + 1 · I₄ = [ 1 1 0 0 ; 0 1 1 0 ; 0 0 1 0 ; 0 0 0 1 ],

the Jordan canonical form. The change-of-basis matrix from the standard basis to the basis J (the basis vectors as columns) and its inverse are:

C = [ −2 0 1 −1 ; 2 1 0 0 ; 2 −1 0 0 ; −2 0 0 1 ], C⁻¹ = [ 0 1/4 1/4 0 ; 0 1/2 −1/2 0 ; 1 1 1 1 ; 0 1/2 1/2 1 ],

and the Jordan decomposition of the initial matrix is J = C⁻¹AC, i.e. A = CJC⁻¹.
We observe the structure of the Jordan cells (one cell of order 3 and one cell of order 1):

[ 1 1 0 | 0 ; 0 1 1 | 0 ; 0 0 1 | 0 ; 0 0 0 | 1 ]

or, when the cells are placed in the other order,

[ 1 | 0 0 0 ; 0 | 1 1 0 ; 0 | 0 1 1 ; 0 | 0 0 1 ].

As an application, solve the linear differential system x′ = Ax. Apply the change of variables x = Cy, i.e. y = C⁻¹x: multiplying the equality x′ = Ax to the left with C⁻¹, we get y′ = C⁻¹ACy = Jy. The new system in the variables y is:

y₁′ = y₁ + y₂
y₂′ = y₂ + y₃
y₃′ = y₃
y₄′ = y₄.

Solve the system in y, from the last equations upwards:

y₄(t) = c₄eᵗ, y₃(t) = c₃eᵗ, y₂(t) = (c₂ + c₃t)eᵗ, y₁(t) = (c₁ + c₂t + c₃t²/2)eᵗ,

that is,

[y(t)] = [ eᵗ teᵗ (t²/2)eᵗ 0 ; 0 eᵗ teᵗ 0 ; 0 0 eᵗ 0 ; 0 0 0 eᵗ ] [c₁ ; c₂ ; c₃ ; c₄].

Return to the original variables by x(t) = Cy(t):

x₁(t) = −2y₁(t) + y₃(t) − y₄(t), x₂(t) = 2y₁(t) + y₂(t), x₃(t) = 2y₁(t) − y₂(t), x₄(t) = −2y₁(t) + y₄(t),

so each component of x(t) is of the form (polynomial of degree at most 2 in t) · eᵗ.
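The Jordan decomposition of the 4×4 example above can be verified in Python: since C is invertible, C⁻¹AC = J is equivalent to AC = CJ, which needs only integer matrix products (the matrices follow the reconstruction above; the signs in the extracted notes are unreliable, so this check is also a consistency test of that reconstruction).

```python
A = [[1, -1, 1, 0], [1, 3, 0, 1], [-1, 0, -1, -1], [0, -1, 1, 1]]
C = [[-2, 0, 1, -1], [2, 1, 0, 0], [2, -1, 0, 0], [-2, 0, 0, 1]]
J = [[1, 1, 0, 0], [0, 1, 1, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# A C = C J is equivalent to C^{-1} A C = J because C is invertible
assert mat_mul(A, C) == mat_mul(C, J)
```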
I4 ) =
= 1:
+1 = (
1)2 ( + 1)2
0
1
)
)
= 1, n1 = 2,
1, n2 = 2
2
Consider the
1R4 )2 ( ); its matrix is (A I4 )2 .
1 1R4 ) ( ) = (U
2 linear operator (U3
0
1 7
6 1 2
7
6
7
6 1
1
0
0
7 Not.
6
A I4 = 6
7 = B (the matrix has rank 3).
6 0
7
1
1
0
6
7
5
4
0
0
1
1
2
32
2
3
0
1 7
4
1 2 7
6 1 2
6 3
6
7
6
7
6 1
7
6 2 3
7
1
0
0
0
1
6
7
6
7
(A I4 )2 = 6
7 = 6
7 = B 2 (the matrix has rank 2) (by
6 0
7
6
1
1 0 7
2 1
0 7
6
6 1
7
4
5
4
5
0
0
1
1
0
1
2 1
raising at other powers, the rank of the matrix B k will stay the same).
The
2 set
6 3
6
6 2
6
6
6 1
6
4
0
V
4
3
2
1
1
= ker (U 3 2 1 I)n3
()2
= ker3(U
1
2
2 76
76
6
1 7
76
76
6
0 7
76
54
1
x1 7 6
7 6
6
x2 7
7 6
7=6
6
x3 7
7 6
5 4
x4
0 7
7
0 7
7
7,
0 7
7
5
0
1R4 )2 2
( ) is the set of solutions
of the system:
3
6 x1 = 3a + 2b; 7
6
7
6 x2 = 2a + b 7
6
7
meaning 6
7;
6 x =a
7
6 3
7
4
5
x4 = b
116
2. LINEAR TRANSFORMATIONS
6 2 7
6 3 7
7
6
6 7
6 1 7
6 2 7
7
6
6 7
with v1 = 6 7, v2 = 6
7, we get: V
6 0 7
6 1 7
7
6
6 7
5
4
4 5
1
0
V 1.
1
1
1
1
Step 4: Determine
U1 ( )3of U2( ) over
3 V , U1 ( ) : V ! V ; U1 (x) = U (x), 8x 2 V :
32
2 the restriction
1 76 3 7 6 4 7
6 0 2 0
2
3
76 7 6 7
6
6 1 0 0 0 76 2 7 6 3 7
2
76 7 6 7
6
5 (the scalars
U (v1 ) = Av1 = 6
7 6 7 = 6 7 = 2v1 + ( 1) v2 ) [U (v1 )]B1 = 4
6 0 1 0 0 76 1 7 6 2 7
1
76 7 6 7
6
54 5 4 5
4
1
0
0 0 1 0
2 and 1 may be found
3 1) =
2 from the system
3 2 U (v
2 av13+ bv2 )
6 0 2 0
6
6 1 0 0
6
U (v2 ) = Av2 = 6
6 0 1 0
6
4
0 0 1
1 and 0 may be found from the
1 76
76
6
0 7
76
76
6
0 7
76
54
0
system
2 7 6 3 7
2 3
7 6 7
6 2 7
1
1 7
7 6 7
7 = 6 7 = 1 v1 + 0 v2 ) [U (v2 )]B1 = 4 5 (the scalars
6 7
0
0 7
7 6 1 7
5 4 5
1
0
U (v2 ) = av1 + bv2 )
3
2
2 1
5 [x] in the basis B1 = fv1 ; v2 g of V 1 .
We get U1 ( ) : V 1 ! V 1 ; [U1 (x)]B1 = 4
B1
1 0
Step 5: The nilpotent linear operator N₁(·) : V₁ → V₁ (attached to the eigenvalue λ₁) is the restriction of (U − λ₁·1_{R⁴})(·) over V₁ (or of (U₁ − λ₁·1_{V₁})(·) over V₁).

In the basis B₁, N₁(·) has the matrix [[2, 1], [−1, 0]] − 1·[[1, 0], [0, 1]] = [[1, 1], [−1, −1]].

Remark that the matrix is nilpotent: [[1, 1], [−1, −1]]² = [[0, 0], [0, 0]].

Step 6: Find the chain of kernels {0} = ker N₁⁰(·) ⊆ ker N₁(·) ⊆ ker N₁²(·) = V₁:

N₁⁰(·): kernel {0}, dimension 0 = m₁₀;
N₁(·): kernel span{(1, −1)}, dimension 1 = m₁₁;
N₁²(·): kernel span{(1, 0), (0, 1)} = R², dimension 2 = m₁₂ = n₁.

The kernel of N₁(·) is the set of all solutions of the system [[1, 1], [−1, −1]]·(x₁, x₂)ᵀ = (0, 0)ᵀ, i.e. x₁ = a, x₂ = −a.

Complete the set {(1, −1)} (which is a basis for ker N₁(·)) up to a basis for ker N₁²(·) = R², for example with the vector [u₂]_{B₁} = (1, 0)ᵀ (any vector from ker N₁²(·) \ ker N₁(·) may be chosen).

[u₁]_{B₁} = [N₁(u₂)]_{B₁} = [[1, 1], [−1, −1]]·(1, 0)ᵀ = (1, −1)ᵀ ∈ ker N₁(·).

Remark: if the chosen vector were, for example, (1, 1)ᵀ, then N₁((1, 1)ᵀ) = [[1, 1], [−1, −1]]·(1, 1)ᵀ = (2, −2)ᵀ, which is from ker N₁(·), but is not exactly the vector (1, −1)ᵀ, only a certain linear combination of it.

The Jordan basis in V₁ for the linear operator N₁(·) is B₁ = {u₁, u₂}.

For λ₂ = −1, with multiplicity n₂ = 2:
Consider (U − λ₂·1_{R⁴})(·) = (U + 1_{R⁴})(·); its matrix is

A + I₄ =
[ 1 2 0 −1 ]
[ 1 1 0  0 ]
[ 0 1 1  0 ]
[ 0 0 1  1 ]  =: C (rank 3).
(A + I₄)² = C² =
[ 3 4 −1 −2 ]
[ 2 3  0 −1 ]
[ 1 2  1  0 ]
[ 0 1  2  1 ]  (rank 2; if we continue raising the matrix C to successive powers, the corresponding rank remains the same).
The set V₂ = ker(U − λ₂·1_{R⁴})^{n₂}(·) = ker(U + 1_{R⁴})²(·) is the set of all solutions of the system C²[x] = 0, meaning

x₁ = 3a + 2b,
x₂ = −2a − b,
x₃ = a,
x₄ = b,   a, b ∈ R;

with v₃ = (3, −2, 1, 0)ᵀ and v₄ = (2, −1, 0, 1)ᵀ, we get V₂ = span(v₃, v₄); the set B₂ = {v₃, v₄} is a basis in V₂.
Step 4: Find the restriction U₂(·) of U(·) over the set V₂, U₂(·) : V₂ → V₂; U₂(x) = U(x), ∀x ∈ V₂:

U(v₃) = Av₃ = (−4, 3, −2, 1)ᵀ = −2v₃ + 1·v₄ ⇒ [U(v₃)]_{B₂} = (−2, 1)ᵀ (the scalars −2 and 1 may be found from the system U(v₃) = av₃ + bv₄);

U(v₄) = Av₄ = (−3, 2, −1, 0)ᵀ = −1·v₃ + 0·v₄ ⇒ [U(v₄)]_{B₂} = (−1, 0)ᵀ (the scalars −1 and 0 may be found from the system U(v₄) = av₃ + bv₄).

We get U₂(·) : V₂ → V₂, [U₂(x)]_{B₂} = [[−2, −1], [1, 0]]·[x]_{B₂} with B₂ = {v₃, v₄} a basis of V₂.
Step 5: The nilpotent linear operator N₂(·) : V₂ → V₂ (attached to the eigenvalue λ₂) is the restriction of (U − λ₂·1_{R⁴})(·) over V₂ (or of (U₂ − λ₂·1_{V₂})(·) over V₂).

In the basis B₂, N₂(·) has the matrix [[−2, −1], [1, 0]] + 1·[[1, 0], [0, 1]] = [[−1, −1], [1, 1]].

Remark that the matrix is nilpotent: [[−1, −1], [1, 1]]² = [[0, 0], [0, 0]].

Step 6: Find the chain of kernels {0} = ker N₂⁰(·) ⊆ ker N₂(·) ⊆ ker N₂²(·) = V₂:

N₂⁰(·): kernel {0}, dimension 0 = m₂₀;
N₂(·): kernel span{(1, −1)}, dimension 1 = m₂₁;
N₂²(·): kernel span{(1, 0), (0, 1)} = R², dimension 2 = m₂₂ = n₂.

The kernel of N₂(·) is the set of all solutions of the system [[−1, −1], [1, 1]]·(x₁, x₂)ᵀ = (0, 0)ᵀ, which means x₁ = a, x₂ = −a.

Complete the set {(1, −1)} (which is a basis of ker N₂(·)) up to a basis of ker N₂²(·) = R², for example with the vector [u₄]_{B₂} = (1, 0)ᵀ (any vector from ker N₂²(·) \ ker N₂(·) may be chosen).

[u₃]_{B₂} = [N₂(u₄)]_{B₂} = [[−1, −1], [1, 1]]·(1, 0)ᵀ = (−1, 1)ᵀ ∈ ker N₂(·).

The Jordan basis in V₂ for the linear operator N₂(·) is B₂ = {u₃, u₄}, where:
N₂(u₃) = 0 ⇒ [N₂(u₃)]_{B₂} = (0, 0)ᵀ;
N₂(u₄) = u₃ ⇒ [N₂(u₄)]_{B₂} = (1, 0)ᵀ.

We get [N₂(x)]_{B₂} = [[0, 1], [0, 0]]·[x]_{B₂}, which means that B₂ is a Jordan basis for N₂(·) while [[0, 1], [0, 0]] is the Jordan canonical form for the linear operator N₂(·).
Assemble all the obtained results in the initial basis of R⁴, for both eigenvalues:

For λ₁ = 1: v₁ = (3, 2, 1, 0)ᵀ, v₂ = (2, 1, 0, −1)ᵀ, B₁ = {v₁, v₂} is a basis of V₁, [u₁]_{B₁} = (1, −1)ᵀ, [u₂]_{B₁} = (1, 0)ᵀ, and the Jordan basis {u₁, u₂} in V₁ is:

u₁ = 1·v₁ + (−1)·v₂ = (1, 1, 1, 1)ᵀ,  u₂ = 1·v₁ + 0·v₂ = (3, 2, 1, 0)ᵀ.

For λ₂ = −1: v₃ = (3, −2, 1, 0)ᵀ, v₄ = (2, −1, 0, 1)ᵀ, B₂ = {v₃, v₄} is a basis in V₂, [u₃]_{B₂} = (−1, 1)ᵀ, [u₄]_{B₂} = (1, 0)ᵀ, and the Jordan basis {u₃, u₄} in V₂ is:

u₃ = (−1)·v₃ + 1·v₄ = (−1, 1, −1, 1)ᵀ,  u₄ = 1·v₃ + 0·v₄ = (3, −2, 1, 0)ᵀ.
The change-of-basis matrix (with the Jordan basis vectors u₁, u₂, u₃, u₄ as columns) is:

B =
[ 1 3 −1  3 ]
[ 1 2  1 −2 ]
[ 1 1 −1  1 ]
[ 1 0  1  0 ],

its inverse:

B⁻¹ =
[ −1/4    0   3/4   1/2 ]
[  1/4  1/4  −1/4  −1/4 ]
[  1/4    0  −3/4   1/2 ]
[  1/4 −1/4  −1/4   1/4 ].

Verification:

B⁻¹AB =
[ 1 1  0  0 ]
[ 0 1  0  0 ]
[ 0 0 −1  1 ]
[ 0 0  0 −1 ] = J.
The Jordan decomposition of the initial matrix:

[ 0 2 0 −1 ]   [ 1 3 −1  3 ]   [ 1 1  0  0 ]   [ −1/4    0   3/4   1/2 ]
[ 1 0 0  0 ] = [ 1 2  1 −2 ] · [ 0 1  0  0 ] · [  1/4  1/4  −1/4  −1/4 ]
[ 0 1 0  0 ]   [ 1 1 −1  1 ]   [ 0 0 −1  1 ]   [  1/4    0  −3/4   1/2 ]
[ 0 0 1  0 ]   [ 1 0  1  0 ]   [ 0 0  0 −1 ]   [  1/4 −1/4  −1/4   1/4 ],

that is, A = BJB⁻¹.
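The decomposition can be checked mechanically. The following sketch (plain Python with exact rational arithmetic; the helper `matmul` is ours, not from the text) multiplies B·J·B⁻¹ and compares the result with A:

```python
from fractions import Fraction as F

def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[0, 2, 0, -1], [1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]]
B = [[1, 3, -1, 3], [1, 2, 1, -2], [1, 1, -1, 1], [1, 0, 1, 0]]
J = [[1, 1, 0, 0], [0, 1, 0, 0], [0, 0, -1, 1], [0, 0, 0, -1]]
Binv = [[F(-1, 4), F(0), F(3, 4), F(1, 2)],
        [F(1, 4), F(1, 4), F(-1, 4), F(-1, 4)],
        [F(1, 4), F(0), F(-3, 4), F(1, 2)],
        [F(1, 4), F(-1, 4), F(-1, 4), F(1, 4)]]

# B * Binv must be the identity, and B * J * Binv must reproduce A.
I4 = matmul(B, Binv)
BJBinv = matmul(matmul(B, J), Binv)
print(BJBinv == A)  # → True
```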
2.6.39. Example. Use the previous decomposition to solve the attached linear homogeneous system of ordinary differential equations.

Consider the system

ẋ₁ = 2x₂ − x₄
ẋ₂ = x₁
ẋ₃ = x₂
ẋ₄ = x₃.

Use the Jordan decomposition A = BJB⁻¹, where:

A =
[ 0 2 0 −1 ]
[ 1 0 0  0 ]
[ 0 1 0  0 ]
[ 0 0 1  0 ],

B =
[ 1 3 −1  3 ]
[ 1 2  1 −2 ]
[ 1 1 −1  1 ]
[ 1 0  1  0 ],

B⁻¹ =
[ −1/4    0   3/4   1/2 ]
[  1/4  1/4  −1/4  −1/4 ]
[  1/4    0  −3/4   1/2 ]
[  1/4 −1/4  −1/4   1/4 ],

J =
[ 1 1  0  0 ]
[ 0 1  0  0 ]
[ 0 0 −1  1 ]
[ 0 0  0 −1 ],

to solve the system.
The matrix form: (ẋ₁, ẋ₂, ẋ₃, ẋ₄)ᵀ = A·(x₁, x₂, x₃, x₄)ᵀ ⇒ use the Jordan decomposition: ẋ = BJB⁻¹x, and multiply by B⁻¹ to the left ⇒ B⁻¹ẋ = J·(B⁻¹x).

Change of variables: (y₁, y₂, y₃, y₄)ᵀ = B⁻¹·(x₁, x₂, x₃, x₄)ᵀ ⇒ (ẏ₁, ẏ₂, ẏ₃, ẏ₄)ᵀ = B⁻¹·(ẋ₁, ẋ₂, ẋ₃, ẋ₄)ᵀ ⇒ the initial system becomes:

(ẏ₁, ẏ₂, ẏ₃, ẏ₄)ᵀ = J·(y₁, y₂, y₃, y₄)ᵀ = (y₁ + y₂, y₂, −y₃ + y₄, −y₄)ᵀ ⇒

ẏ₁ = y₁ + y₂
ẏ₂ = y₂
ẏ₃ = −y₃ + y₄
ẏ₄ = −y₄
Solve the systems:

(ẏ₁, ẏ₂)ᵀ = (y₁ + y₂, y₂)ᵀ ⇒ y₂ = k₂eᵗ, ẏ₁ = y₁ + k₂eᵗ ⇒ y₁ = (k₂t + k₁)eᵗ, and
(ẏ₃, ẏ₄)ᵀ = (−y₃ + y₄, −y₄)ᵀ ⇒ y₄ = k₄e⁻ᵗ, ẏ₃ = −y₃ + k₄e⁻ᵗ ⇒ y₃ = (k₄t + k₃)e⁻ᵗ.

Obtain the matrix form of the solution:

(y₁, y₂, y₃, y₄)ᵀ = ((k₂t + k₁)eᵗ, k₂eᵗ, (k₄t + k₃)e⁻ᵗ, k₄e⁻ᵗ)ᵀ.
Because (y₁, y₂, y₃, y₄)ᵀ = B⁻¹·(x₁, x₂, x₃, x₄)ᵀ, we get (x₁, x₂, x₃, x₄)ᵀ = B·(y₁, y₂, y₃, y₄)ᵀ, and the solution:

x₁ = k₁eᵗ + k₂(3eᵗ + teᵗ) − k₃e⁻ᵗ + k₄(3e⁻ᵗ − te⁻ᵗ)
x₂ = k₁eᵗ + k₂(2eᵗ + teᵗ) + k₃e⁻ᵗ − k₄(2e⁻ᵗ − te⁻ᵗ)
x₃ = k₁eᵗ + k₂(eᵗ + teᵗ) − k₃e⁻ᵗ + k₄(e⁻ᵗ − te⁻ᵗ)
x₄ = k₁eᵗ + tk₂eᵗ + k₃e⁻ᵗ + tk₄e⁻ᵗ.
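The solution can be verified without a symbolic library by storing each component as its coefficients over the basis functions eᵗ, teᵗ, e⁻ᵗ, te⁻ᵗ, on which differentiation acts linearly. A sketch (the dictionary `SOL`, keyed by the constants k₁, …, k₄, is our own encoding of the formulas above):

```python
# Each component of x(t) is stored as its coefficients over the basis
# (e^t, t e^t, e^-t, t e^-t); differentiation is linear on this basis.
A = [[0, 2, 0, -1], [1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]]

# d/dt of each basis function, expressed in the same basis:
# (e^t)' = e^t, (t e^t)' = e^t + t e^t, (e^-t)' = -e^-t, (t e^-t)' = e^-t - t e^-t
DERIV = [[1, 0, 0, 0], [1, 1, 0, 0], [0, 0, -1, 0], [0, 0, 1, -1]]

def deriv(c):
    return [sum(c[b] * DERIV[b][j] for b in range(4)) for j in range(4)]

# Coefficient vectors of x1..x4 for each constant k1..k4, read off the
# solution above; e.g. the k2-part of x1 is 3e^t + t e^t -> (3, 1, 0, 0).
SOL = {
    "k1": [(1, 0, 0, 0), (1, 0, 0, 0), (1, 0, 0, 0), (1, 0, 0, 0)],
    "k2": [(3, 1, 0, 0), (2, 1, 0, 0), (1, 1, 0, 0), (0, 1, 0, 0)],
    "k3": [(0, 0, -1, 0), (0, 0, 1, 0), (0, 0, -1, 0), (0, 0, 1, 0)],
    "k4": [(0, 0, 3, -1), (0, 0, -2, 1), (0, 0, 1, -1), (0, 0, 0, 1)],
}

# Check that d/dt x_i equals (A x)_i for every constant, in coefficient space.
ok = all(
    deriv(list(comp[i])) == [sum(A[i][j] * comp[j][b] for j in range(4))
                             for b in range(4)]
    for comp in SOL.values() for i in range(4)
)
print(ok)  # → True
```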
2.6.40. Example (Two eigenvalues, one with multiplicity 2). Consider the linear operator U(·) given by the matrix

A =
[ 1  1 −1 ]
[ 3 −4  3 ]
[ 4 −7  6 ]  ∈ M₃ₓ₃(R) in the standard basis.

Find the Jordan canonical form and a Jordan basis.

The eigenvalues are the solutions of the characteristic equation: det(A − λI₃) = 0 ⇒ (λ + 1)(λ − 2)² = 0.

For λ₁ = −1, the eigenset is V₁ = {x ∈ R³; (A + I₃)x = 0} = {α(0, 1, 1); α ∈ R}.

For λ₂ = 2, the algebraic multiplicity is 2 while the geometric dimension is 1, so that the linear operator is not diagonalizable.
Consider the linear operator N₂(·) with the matrix in the standard basis

B₂ = A − 2I₃ =
[ −1  1 −1 ]
[  3 −6  3 ]
[  4 −7  4 ].

The kernel of N₂(·) is ker N₂(·) = V₂ = {α(1, 0, −1); α ∈ R}.

The linear operator N₂²(·) has the matrix

B₂² =
[  0  0  0 ]
[ −9 18 −9 ]
[ −9 18 −9 ]  and its kernel is ker N₂²(·) = {a(1, 0, −1) + b(0, 1, 2); a, b ∈ R}.

The linear operator N₂³(·) has the matrix

B₂³ =
[  0   0  0 ]
[ 27 −54 27 ]
[ 27 −54 27 ]  and its kernel is ker N₂³(·) = ker N₂²(·), so that the chain of kernels

ker N₂(·) ⊊ ker N₂²(·) = ker N₂³(·) stops.
Choose a complement Q₂² of ker N₂(·) in ker N₂²(·), for example Q₂² = {β(0, 1, 2); β ∈ R}, and a vector v′ = (0, 1, 2) ∈ ker N₂²(·) \ ker N₂(·); then N₂(v′) = (−1, 0, 1) ∈ ker N₂(·).

With the Jordan basis {(0, 1, 1), N₂(v′), v′} as columns, the change-of-basis matrix and its inverse are

C =
[ 0 −1 0 ]
[ 1  0 1 ]
[ 1  1 2 ]  and  C⁻¹ =
[ −1  2 −1 ]
[ −1  0  0 ]
[  1 −1  1 ].
The Jordan canonical form is

C⁻¹AC =
[ −1  2 −1 ]   [ 1  1 −1 ]   [ 0 −1 0 ]   [ −1 0 0 ]
[ −1  0  0 ] · [ 3 −4  3 ] · [ 1  0 1 ] = [  0 2 1 ]
[  1 −1  1 ]   [ 4 −7  6 ]   [ 1  1 2 ]   [  0 0 2 ].

In the Jordan basis, the matrix is a Jordan matrix with two blocks:

[ −1 | 0 0 ]
[  0 | 2 1 ]
[  0 | 0 2 ].
Remarks:

- the linear operator N₂(·) has the matrix
[ −1  1 −1 ]
[  3 −6  3 ]
[  4 −7  4 ];
- the linear operator N₂(·) is not nilpotent (over the whole space R³);
- the kernel ker N₂²(·) is N₂(·)-invariant and has a basis {(1, 0, −1), (0, 1, 2)}, because:
N₂((1, 0, −1)) = (0, 0, 0) = 0·(1, 0, −1) + 0·(0, 1, 2) and
N₂((0, 1, 2)) = (−1, 0, 1) = −1·(1, 0, −1) + 0·(0, 1, 2).
- The restriction of the linear operator N₂(·) over ker N₂²(·), N₂ʳ(·) : ker N₂²(·) → ker N₂²(·), has the matrix [[0, −1], [0, 0]] in the basis {(1, 0, −1), (0, 1, 2)}.
- The linear operator N₂ʳ(·) is nilpotent, because [[0, −1], [0, 0]]² = [[0, 0], [0, 0]].
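A sketch checking the chain of kernels and the Jordan form computed above (the helpers `matmul` and `mv` are ours):

```python
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[1, 1, -1], [3, -4, 3], [4, -7, 6]]
N2 = [[a - 2 * (i == j) for j, a in enumerate(row)]
      for i, row in enumerate(A)]                      # A - 2*I

mv = lambda M, v: [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

# The kernel chain: N2 annihilates (1,0,-1); N2^2 also annihilates (0,1,2).
assert mv(N2, [1, 0, -1]) == [0, 0, 0]
assert mv(N2, mv(N2, [0, 1, 2])) == [0, 0, 0]

# Jordan basis as columns: eigenvector for -1, then N2(v'), v' with v' = (0,1,2).
C    = [[0, -1, 0], [1, 0, 1], [1, 1, 2]]
Cinv = [[-1, 2, -1], [-1, 0, 0], [1, -1, 1]]   # hand-computed (det C = 1)
J = matmul(matmul(Cinv, A), C)
print(J)  # [[-1, 0, 0], [0, 2, 1], [0, 0, 2]]
```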
A bilinear functional B(·,·) with B(x, y) = B(y, x) for all x, y is called symmetric.

Proposition. Consider the vector spaces (V₁, K), (V₂, K) with fixed bases E₁ = (eᵢ¹)ᵢ₌₁,ₙ₁ and E₂ = (eⱼ²)ⱼ₌₁,ₙ₂, and B(·,·) : V₁ × V₂ → K a bilinear functional. The scalars (B(eᵢ¹, eⱼ²))ᵢ₌₁,ₙ₁; ⱼ₌₁,ₙ₂ form the matrix attached to the bilinear functional in the fixed bases, denoted A_B(E₁, E₂).

Moreover, we have the matrix representation B(x, y) = [x]ᵀ_{E₁} A_B(E₁, E₂) [y]_{E₂}.
Proof. The vectors x ∈ V₁ and y ∈ V₂ are uniquely represented in the fixed bases as [x]_{E₁} = (α₁, …, α_{n₁})ᵀ and [y]_{E₂} = (β₁, …, β_{n₂})ᵀ, and from the linearity in each variable we get:

B(x, y) = B(Σᵢ₌₁^{n₁} αᵢeᵢ¹, Σⱼ₌₁^{n₂} βⱼeⱼ²) = Σᵢ₌₁^{n₁} Σⱼ₌₁^{n₂} αᵢβⱼ·B(eᵢ¹, eⱼ²) = [x]ᵀ_{E₁} A_B(E₁, E₂) [y]_{E₂}.

The alternative representations are obviously dependent on the chosen bases. The unicity of the representation comes from the unicity of the coordinates of a vector in a basis. □
2.7.4. Remark. The bilinear functional may be viewed as a composition between the linear transformation U(·) : V₂ → K^{n₁} defined by U(y) = A_B(E₁, E₂)[y]_{E₂} and the linear functional f_a(·) : V₁ → K with fixed a ∈ K^{n₁} (f_a(·) ∈ (V₁)′) defined by f_a(x) = [x]ᵀ_{E₁}·a: B(x, y) = f_{U(y)}(x).

2.7.5. Proposition. Consider the vector spaces (V_k, K), k = 1, 2 and a bilinear functional B(·,·) : V₁ × V₂ → K. For the bases E_k (old basis) and F_k (new basis) over V_k, with k = 1, 2, we have

A_B(F₁, F₂) = (M(F₁))ᵀ_{E₁} A_B(E₁, E₂) (M(F₂))_{E₂}.

The matrix (M(F_k))_{E_k} denotes the change-of-basis matrix (the columns are the representations of the new basis vectors in the old basis).

The matrix of a symmetric bilinear functional B(·,·) : V × V → K in a certain basis E is a symmetric matrix [meaning A_B(E, E) = (A_B(E, E))ᵗ, the matrix equals its transpose]. The converse statement is also true: if the representing matrix of a bilinear functional is symmetric, then the bilinear functional is also symmetric.

2.7.8. Remark. We may attach to any bilinear functional B(·,·) : V × V → K a symmetric bilinear functional B_s(·,·) : V × V → K by B_s(x, y) = ½[B(x, y) + B(y, x)].
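The change-of-basis rule of Proposition 2.7.5 can be illustrated numerically; in the sketch below the matrix A_B(E₁, E₂) and the change-of-basis matrices M₁, M₂ are arbitrarily chosen (not from the text):

```python
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(col) for col in zip(*X)]

# Arbitrary example data: matrix of B(.,.) in the old bases and two
# invertible change-of-basis matrices.
AB_E = [[1, 2], [0, 3]]
M1, M2 = [[1, 1], [0, 1]], [[2, 0], [1, 1]]

AB_F = matmul(matmul(transpose(M1), AB_E), M2)

def bil(Amat, x, y):
    return sum(x[i] * Amat[i][j] * y[j] for i in range(2) for j in range(2))

# B(x, y) computed with old coordinates must agree with the value computed
# with new coordinates, where [x]_E = M1 [x]_F and [y]_E = M2 [y]_F.
xF, yF = [1, -2], [3, 1]
xE = [sum(M1[i][j] * xF[j] for j in range(2)) for i in range(2)]
yE = [sum(M2[i][j] * yF[j] for j in range(2)) for i in range(2)]
print(bil(AB_E, xE, yE) == bil(AB_F, xF, yF))  # → True
```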
2.7.9. Denition. For a symmetric bilinear functional, the following set is called the kernel of the functional:
ker B ( ; ) = fx 2 V; B (x; y) = 0; 8y 2 Vg
When ker B(·,·) = {0} the bilinear functional is called nondegenerate.
2.7.11. Proposition. A symmetric bilinear functional is nondegenerate if and only if its attached matrix is invertible.
Proof. Consider two arbitrary bases E1 ; E2 and a symmetric bilinear functional B ( ; ) over V.
Then B (x; y) = [x]TE1 AB (E1 ; E2 ) [y]E2 .
The matrix AB (E1 ; E2 ) is invertible if and only if the system [x]TE1 AB (E1 ; E2 ) = [0]E1 has only the
null solution (the system is Cramer with only the null solution).
If x₀ ∈ V is a solution of the system [x₀]ᵀ_{E₁} A_B(E₁, E₂) = [0]_{E₁}, then B(x₀, y) = 0, ∀y ∈ V, so that x₀ ∈ ker B(·,·).
Conversely, if x0 2 ker B ( ; ), by using particular values for y we get that x0 is a solution of the system
[x]TE1 AB (E1 ; E2 ) = [0]E1 .
This means that ker B(·,·) = {x ∈ V; [x]ᵀ_{E₁} A_B(E₁, E₂) = [0]_{E₁}}, which concludes the proof. □
2.7.12. Definition. A function Q(·) : V → K, defined over the vector space (V, K), for which there is a symmetric bilinear functional B(·,·) : V × V → K with Q(x) = B(x, x), ∀x ∈ V, is called a quadratic form; the symmetric bilinear functional is recovered as B(x, y) = ½[Q(x + y) − Q(x) − Q(y)].

2.7.13. Remark. The association quadratic form ↔ symmetric bilinear functional given above is one-to-one.

The matrix form for quadratic forms is similar: Q(x) = [x]ᵀ_E A_Q(E) [x]_E = Σᵢ,ⱼ₌₁ⁿ B(eᵢ, eⱼ)xᵢxⱼ, where A_Q(E) = A_B(E, E), which is symmetric.

(1) Q(·) is called positive definite when Q(x) > 0, ∀x ∈ V \ {0}.
(2) Q(·) is called positive semidefinite when Q(x) ≥ 0, ∀x ∈ V.
(3) Q(·) is called negative definite when Q(x) < 0, ∀x ∈ V \ {0}.
(4) Q(·) is called negative semidefinite when Q(x) ≤ 0, ∀x ∈ V.
(5) Q(·) is called undefined when ∃x, y ∈ V; Q(x) > 0 and Q(y) < 0.

The following shapes should give some intuition.
[Figures: surface plots giving intuition for the sign behavior of quadratic forms: the saddle (x, y) ↦ x² − y² (undefined), paraboloid-type surfaces for definite forms, the ellipsoid x²/1² + y²/2² + z²/3² = 9 (level set of the positive form x²/1² + y²/2² + z²/3²), and the elliptic hyperboloids x²/1² + y²/2² − z²/3² = 9 and x²/1² − y²/2² − z²/3² = 9.]
When we manage to find a certain basis F over V for which the matrix A_Q(F) representing the quadratic form Q(·) is a diagonal matrix, we say that the quadratic form is reduced to its canonical form.

The previous characteristics are obtained from the canonical form, by considering the signs of the elements on the diagonal of the matrix A_Q(F).

One problem is the existence of a basis in which the matrix is diagonal; such a basis always exists. Another problem is finding such a basis effectively.
There are several methods, of which we will present three. The first one is called "the Gauss procedure" and manipulates the algebraic form Q(x) = Σᵢ,ⱼ₌₁ⁿ aᵢⱼξᵢξⱼ, while the second one manipulates the matrix form Q(x) = [x]ᵀ_E A_Q(E) [x]_E and is called "the Jacobi procedure". The third one is called "the eigenvalues/eigenvectors procedure".
2.7.15. Theorem. (The Gauss procedure for finding a canonical basis) Consider Q(·) : V → R, Q(x) = Σᵢ,ⱼ₌₁ⁿ aᵢⱼξᵢξⱼ = [x]ᵀ_E A_Q(E) [x]_E a quadratic form, where [x]_E = (ξ₁, …, ξₙ)ᵀ, dim V = n and A_Q(E) = (aᵢⱼ)ᵢ,ⱼ₌₁,ₙ.

Then there is a basis F over V such that the representation matrix of the quadratic form is diagonal.

Proof. Proof by induction with respect to the dimension of the vector space V.

When dim V = 1 the quadratic form is in canonical form, with F = E.

Consider the statement true for dim V = k − 1 and let dim V = k. Two cases arise: (1) there is an index i₀ with aᵢ₀ᵢ₀ ≠ 0; (2) aᵢᵢ = 0, ∀i ∈ {1, …, k}.
For the second case, there are two subcases: either the quadratic form is null or not, in which case ∃i₀, j₀ ∈ {1, …, k} with aᵢ₀ⱼ₀ ≠ 0.

For the first subcase the matrix A_Q(E) is null, the quadratic form is in canonical form and F = E.

The second subcase reduces to the first case by using the coordinate transformation

ξᵢ₀ = ηᵢ₀ + ηⱼ₀,  ξⱼ₀ = ηᵢ₀ − ηⱼ₀,  ξᵢ = ηᵢ, ∀i ∈ {1, …, k} \ {i₀, j₀};

its matrix (M(E₁))_E = (μᵢⱼ) is defined by

μᵢⱼ = 1, for i = j, j ≠ j₀;  μⱼ₀ⱼ₀ = −1;  μᵢⱼ = 1, for i ≠ j, i, j ∈ {i₀, j₀};  μᵢⱼ = 0 for the other places.

Apply the transformation to get the coefficients in terms of the η's; the whole point of the transformation is that the coefficient of ηᵢ₀² becomes nonzero:

Σᵢ,ⱼ₌₁ᵏ aᵢⱼξᵢξⱼ = Σᵢ,ⱼ₌₁ᵏ a′ᵢⱼηᵢηⱼ,  with a′ᵢ₀ᵢ₀ = 2aᵢ₀ⱼ₀ ≠ 0.

By using the above transformation the second case is reduced to the first case, with the basis E₁ replacing E (there are other ways of achieving this goal).
For the first case denote by i₀ one of the indexes for which the coefficient aᵢ₀ᵢ₀ ≠ 0. Then

Q(x) = Σᵢ,ⱼ₌₁ᵏ aᵢⱼξᵢξⱼ = aᵢ₀ᵢ₀ξᵢ₀² + 2ξᵢ₀·Σⱼ₌₁,ⱼ≠ᵢ₀ᵏ aᵢ₀ⱼξⱼ + Σᵢ,ⱼ₌₁;ᵢ,ⱼ≠ᵢ₀ᵏ aᵢⱼξᵢξⱼ

[separate the terms containing ξᵢ₀ and complete the square: aᵢ₀ᵢ₀ξᵢ₀² is the first term, the first sum gives the middle term]

= aᵢ₀ᵢ₀·(ξᵢ₀ + Σⱼ₌₁,ⱼ≠ᵢ₀ᵏ (aᵢ₀ⱼ/aᵢ₀ᵢ₀)ξⱼ)² − aᵢ₀ᵢ₀·(Σⱼ₌₁,ⱼ≠ᵢ₀ᵏ (aᵢ₀ⱼ/aᵢ₀ᵢ₀)ξⱼ)² + Σᵢ,ⱼ₌₁;ᵢ,ⱼ≠ᵢ₀ᵏ aᵢⱼξᵢξⱼ
= aᵢ₀ᵢ₀·ηᵢ₀² + Σᵢ,ⱼ₌₁;ᵢ,ⱼ≠ᵢ₀ᵏ (aᵢⱼ − aᵢ₀ᵢaᵢ₀ⱼ/aᵢ₀ᵢ₀)ξᵢξⱼ,

with the coordinate transformation ηᵢ₀ = ξᵢ₀ + Σⱼ₌₁,ⱼ≠ᵢ₀ᵏ (aᵢ₀ⱼ/aᵢ₀ᵢ₀)ξⱼ and ηᵢ = ξᵢ, for i ≠ i₀ (the determinant of the matrix of the transformation is nonzero, so that the transformation is a change of basis).

Then the quadratic form is

Q(x) = aᵢ₀ᵢ₀·ηᵢ₀² + Σᵢ,ⱼ₌₁;ᵢ,ⱼ≠ᵢ₀ᵏ a′ᵢⱼηᵢηⱼ,

which means that in the new basis the matrix attached to the quadratic form has the value aᵢ₀ᵢ₀ at place (i₀, i₀) and the value 0 at all the other places on the line and the column i₀.
The matrix (M(E₁))_E of this transformation is the identity matrix, except that line i₀ carries the coefficients of the substitution, its entry in column j ≠ i₀ being aᵢ₀ⱼ/aᵢ₀ᵢ₀ (up to sign conventions); its determinant is 1 ≠ 0.

According to the induction hypothesis, since the subspace span(eᵢ)ᵢ₌₁,ₖ; ᵢ≠ᵢ₀ has dimension k − 1, there is a basis (fᵢ)ᵢ₌₁,ₖ; ᵢ≠ᵢ₀ such that Σᵢ,ⱼ₌₁;ᵢ,ⱼ≠ᵢ₀ᵏ a′ᵢⱼηᵢηⱼ = Σᵢ₌₁;ᵢ≠ᵢ₀ᵏ a″ᵢᵢηᵢ², so that finally

Q(x) = Σᵢ₌₁ᵏ a″ᵢᵢηᵢ² = [x]ᵀ_F A_Q(F) [x]_F,

with [x]_F = (η₁, …, ηₖ)ᵀ. □
with [x]F =
iT
2.7.16. Example. Discuss (with respect to the parameter ) the nature of the quadratic form Q(x) =
2
1
+6
2
2
2
3
+3
+4
1 2
1 3.
+6
2.7.17. Solution. Use the Gauss method to obtain the canonical form:
Q(x) = ξ₁² + 2ξ₁(2ξ₂ + 3λξ₃) + 6ξ₂² + 3ξ₃²
= (ξ₁ + 2ξ₂ + 3λξ₃)² − (2ξ₂ + 3λξ₃)² + 6ξ₂² + 3ξ₃²
= (ξ₁ + 2ξ₂ + 3λξ₃)² + 2ξ₂² − 12λξ₂ξ₃ + (3 − 9λ²)ξ₃²
= (ξ₁ + 2ξ₂ + 3λξ₃)² + 2(ξ₂ − 3λξ₃)² + (3 − 27λ²)ξ₃²
= η₁² + 2η₂² + 3(1 − 9λ²)η₃².

The nature of Q(·) with respect to λ (the sign of 3(1 − 9λ²)):

for λ ∈ (−1/3, 1/3): 3(1 − 9λ²) > 0, so Q(·) is positive definite;
for λ = ±1/3: 3(1 − 9λ²) = 0, so Q(·) is positive semidefinite;
for |λ| > 1/3: 3(1 − 9λ²) < 0, so Q(·) is undefined.
The change of basis is given by:

η₁ = ξ₁ + 2ξ₂ + 3λξ₃
η₂ = ξ₂ − 3λξ₃
η₃ = ξ₃

⇔ [x]_F = (M(F))_E [x]_E, where [x]_E = (ξ₁, ξ₂, ξ₃)ᵀ, [x]_F = (η₁, η₂, η₃)ᵀ, and

(M(F))_E =
[ 1 2  3λ ]
[ 0 1 −3λ ]
[ 0 0   1 ],

because the determinant of (M(F))_E is nonzero.

The basis E (the initial basis) is E = {e₁, e₂, e₃} while the basis F (the new basis) is F = {f₁, f₂, f₃}, where f₁ = e₁, f₂ = −2e₁ + e₂, f₃ = −9λe₁ + 3λe₂ + e₃.
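The completed squares above can be sanity-checked by direct evaluation at sample points and sample rational values of λ (a sketch):

```python
from fractions import Fraction as F

def Q(x1, x2, x3, lam):
    return x1**2 + 6*x2**2 + 3*x3**2 + 4*x1*x2 + 6*lam*x1*x3

def Q_canonical(x1, x2, x3, lam):
    # canonical form in the new coordinates eta1, eta2, eta3
    e1 = x1 + 2*x2 + 3*lam*x3
    e2 = x2 - 3*lam*x3
    e3 = x3
    return e1**2 + 2*e2**2 + 3*(1 - 9*lam**2)*e3**2

ok = all(
    Q(x1, x2, x3, lam) == Q_canonical(x1, x2, x3, lam)
    for lam in (F(0), F(1, 3), F(-1, 3), F(2), F(-5, 7))
    for x1 in (-2, 0, 1) for x2 in (-1, 0, 3) for x3 in (-1, 2, 5)
)
print(ok)  # → True
```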
2.7.18. Theorem (the Jacobi procedure for finding a canonical basis). Consider a quadratic form Q(·) : V → R, Q(x) = [x]ᵀ_E A_Q(E) [x]_E, and denote by Δₖ the leading principal minors of A_Q(E), with the convention Δ₀ = 1. If Δₖ ≠ 0 for all k = 1, …, n, then there is a basis F over V in which

Q(x) = Σₖ₌₁ⁿ (Δₖ₋₁/Δₖ)·ηₖ²,  where [x]_F = (η₁, …, ηₙ)ᵀ.
[Q (x + y)
2
[x]TE AE [y]E .
Are loc: aij = B (ei ; ej ) = B (ej ; ei )
C
aut
am o baz
a F = (f1 ;
Dezvoltat, forma c
autat
a este (matricea atasat
a este triunghiular
a):
k
P
j=1
jk ej .
Q (x
136
2. LINEAR TRANSFORMATIONS
8
>
>
f1 =
>
>
>
>
>
>
>
f2 =
>
>
>
>
>
>
f3 =
>
>
>
>
<
11 e1 ;
12 e1
22 e2 ;
13 e1
23 e2
k
>
P
>
>
>
f
=
k
>
>
j=1
>
>
>
>
>
>
>
>
>
k
>
P
>
>
>
f
=
: n
33 e3 ;
jk ej
jn ej :
j=1
8
< B (ei ; fk ) = 0; pentru i = 1;
Pentru obtinerea formei canonice se xeaz
a conditiile:
: B (e ; f ) = 1
k
6 11
6
6 12
6
=6 .
6 ..
6
4
1n
0 7
6
7
6
6
0 7
7
6
A
E6
.. 7
7
6
. 7
6
5
4
22
..
nn
2n
Pentru k = 1:
B (e1 ; f1 ) = 1 ()
11 B
Pentru k = 2:
8
8
< B (e1 ; f2 ) = 0
<
)
: B (e ; f ) = 1
:
2
2
kk k .
(e1 ; e1 ) = 1 )
11
Am folosit
k=1
..
.
11
7 6
7 6
7 6
2n 7
6
=6
.. 7
7
. 7 6
6
5 4
12
0
..
.
1n
22
..
nn
(e1 ; e1 ) +
12 B
(e2 ; e1 ) + 22 B (e2 ; e2 ) = 1
2 6= 0 deci este Cramer
11
0
..
.
..
.
1
[B (e1 ; e1 ) =
B (e1 ; e1 )
12 B
22 B
1)
AF = (M (F ))TE AE (M (F ))E =
3
3 2
2
0
..
.
n
P
; (k
22
..
0 7
7
0 7
7
:
.. 7
7
. 7
5
nn
6= 0]
(e1 ; e2 ) = 0
ei ;
k
P
jk ej
j=1
ek ;
k
P
jk ej
j=1
= 0; i = 1; (k
1)
)
=1
j=1
6= 0. Coecientii
ik
Necunoscuta
ind
kk
137
se obtine din formulele Cramer ca ind raportul dintre doi determinanti, la numitor
iar la num
ator
3 determinantul obtinut din k prin nlocuirea ultimei coloane cu coloana terme2ar
6 0 7
6 . 7
6 .. 7
6 7
a ultima coloan
a se obtine determinantul
nilor liberi, adic
a 6 7 de dimensiune k. Prin dezvoltare dup
6 0 7
6 7
4 5
1
k
k 1,
asa c
a
kk
k 1
2.7.19. Example. Discuss, with respect to the parameter λ, the nature of the quadratic form Q(x) = x₁² + 6x₂² + 3x₃² + 4x₁x₂ + 6λx₁x₃.

2.7.20. Solution. The matrix of the quadratic form is

A =
[ 1  2 3λ ]
[ 2  6  0 ]
[ 3λ 0  3 ].

The attached bilinear functional is B(x, y) = ½[Q(x + y) − Q(x) − Q(y)] = xᵗAy [matrix form] = x₁y₁ + 6x₂y₂ + 3x₃y₃ + 2x₁y₂ + 2x₂y₁ + 3λx₁y₃ + 3λx₃y₁ [algebraic form]. The initial basis is the standard one. We look for a new basis of the form f₁ = μ₁₁e₁; f₂ = μ₁₂e₁ + μ₂₂e₂; f₃ = μ₁₃e₁ + μ₂₃e₂ + μ₃₃e₃, such that for each k = 1, 2, 3 the system B(eᵢ, fₖ) = 0, i = 1, …, k − 1; B(eₖ, fₖ) = 1 is satisfied. The quadratic form is brought to the canonical form using the Jacobi method.

Δ₀ = 1, Δ₁ = 1, Δ₂ = det [[1, 2], [2, 6]] = 2, Δ₃ = det A = 6(1 − 9λ²). For λ = ±1/3 we have Δ₃ = 0, so the method cannot be applied. For λ ≠ ±1/3:

Q(x) = (Δ₀/Δ₁)η₁² + (Δ₁/Δ₂)η₂² + (Δ₂/Δ₃)η₃² = η₁² + ½η₂² + [1/(3(1 − 9λ²))]η₃².

The basis F was sought such that the triangular matrix (M(F))_E = [[μ₁₁, μ₁₂, μ₁₃], [0, μ₂₂, μ₂₃], [0, 0, μ₃₃]] satisfies the conditions above; we choose f₁ = e₁, f₂ = −e₁ + ½e₂, f₃ = [3λ/(9λ² − 1)]e₁ + [λ/(1 − 9λ²)]e₂ + [1/(3(1 − 9λ²))]e₃.

We denoted by E = (e₁, e₂, e₃) the standard basis. The complete study with respect to λ:

for λ ∈ (−1/3, 1/3): Q(·) is positive definite;
for |λ| > 1/3: Q(·) is undefined;
for λ = ±1/3: the Jacobi method does not decide, and the nature is found using another method (for example Gauss; from Example 2.7.16, the form is positive semidefinite there).
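A sketch recomputing the three leading principal minors for sample values of λ (exact rational arithmetic; helper names are ours):

```python
from fractions import Fraction as F

def det3(M):
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def minors(lam):
    A = [[1, 2, 3 * lam], [2, 6, 0], [3 * lam, 0, 3]]
    d1 = A[0][0]
    d2 = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return d1, d2, det3(A)

# Delta1 = 1, Delta2 = 2, Delta3 = 6(1 - 9 lambda^2) for any lambda:
for lam in (F(0), F(1, 2), F(-1, 5)):
    d1, d2, d3 = minors(lam)
    assert (d1, d2) == (1, 2) and d3 == 6 * (1 - 9 * lam**2)

# At lambda = 1/3 the third minor vanishes, so the Jacobi method does not apply.
print(minors(F(1, 3))[2])  # → 0
```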
Theorem (Sylvester's law of inertia). Consider two representations of the same quadratic form in canonical form, in the bases F₁ = {f₁¹, …, fₙ¹} and F₂ = {f₁², …, fₙ²}:

Q(v) = Σᵢ₌₁^{p₁} (αᵢ)² − Σⱼ₌₁^{q₁} (α_{p₁+j})²  and  Q(v) = Σᵢ₌₁^{p₂} (βᵢ)² − Σⱼ₌₁^{q₂} (β_{p₂+j})²,

where the nonzero coefficients have been normalized to ±1. We imposed these conditions to ease the writing; they can easily be lifted. Then p₁ = p₂ and q₁ = q₂.

Proof. Suppose p₁ > p₂. The subspace spanned by f₁¹, …, f_{p₁}¹ has dimension p₁ and the subspace spanned by f_{p₂+1}², …, fₙ² has dimension n − p₂; since p₁ + (n − p₂) > n, their intersection contains a nonzero vector v, with

[v]_{F₁} = (α₁, …, α_{p₁}, 0, …, 0)ᵀ  and  [v]_{F₂} = (0, …, 0, β_{p₂+1}, …, β_{p₂+q₂}, β_{p₂+q₂+1}, …, βₙ)ᵀ,

so the following holds:

Q(v) = Σᵢ₌₁^{p₁} αᵢ² > 0 (αᵢ > 0 for at least one i = 1, …, p₁),  while  Q(v) = −Σⱼ₌₁^{q₂} (β_{p₂+j})² ≤ 0,

a contradiction. Hence p₁ ≤ p₂; analogously it follows that p₂ ≤ p₁, which means that p₁ = p₂. The equality q₁ = q₂ is proved analogously. □
2.7.22. Remark. When the real matrix A is symmetric (A = Aᵗ), its eigenvalues are real, so its eigenvectors may also be taken with real coordinates.

Proof. Consider an eigenvalue λ ∈ C and an associated eigenvector u (with complex coordinates).

If u = (z₁, …, zₙ)ᵀ ∈ Cⁿ, then ū = (z̄₁, …, z̄ₙ)ᵀ (the vector corresponding to the complex conjugates of the coordinates of u) and ūᵗ = [z̄₁ ⋯ z̄ₙ], while ūᵗu = z̄₁z₁ + ⋯ + z̄ₙzₙ, which is a real, strictly positive number (u ≠ 0).

We have Au = λu. From Au = λu, by transposing, we get (Au)ᵗ = (λu)ᵗ ⇒ uᵗAᵗ = λuᵗ ⇒ uᵗA = λuᵗ (using A = Aᵗ). From uᵗA = λuᵗ, by complex conjugation (A is real), we get ūᵗA = λ̄ūᵗ.

We get:

ūᵗAu = (ūᵗA)u = λ̄ūᵗu  and  ūᵗAu = ūᵗ(Au) = ūᵗ(λu) = λūᵗu,

so that λ̄ūᵗu = λūᵗu.

As ūᵗu is a strictly positive real number, we get λ̄ = λ, so λ ∈ R. Then (A − λI)u = 0 is a linear homogeneous system with unknown u and all coefficients real numbers, so its solutions may be taken with real coordinates, which means that the eigenvectors may be taken real. □
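For the 2×2 case the remark is elementary: the symmetric matrix [[a, b], [b, d]] has characteristic polynomial λ² − (a + d)λ + (ad − b²), whose discriminant (a + d)² − 4(ad − b²) = (a − d)² + 4b² is never negative, so both roots are real. A small sketch (the function name is ours):

```python
import math
import random

def eig_sym_2x2(a, b, d):
    """Eigenvalues of the symmetric matrix [[a, b], [b, d]]."""
    disc = (a - d) ** 2 + 4 * b ** 2      # always >= 0, so the roots are real
    r = math.sqrt(disc)
    return ((a + d - r) / 2, (a + d + r) / 2)

random.seed(0)
for _ in range(100):
    a, b, d = (random.uniform(-10, 10) for _ in range(3))
    lo, hi = eig_sym_2x2(a, b, d)
    # both values satisfy the characteristic polynomial (floating-point slack)
    for lam in (lo, hi):
        assert abs(lam * lam - (a + d) * lam + (a * d - b * b)) < 1e-8
print("all eigenvalues real")
```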
CHAPTER 3

Definition. A function ⟨·,·⟩ : V × V → R is called a (real) scalar product when it is symmetric (⟨x, y⟩ = ⟨y, x⟩), bilinear, and positive: ⟨x, x⟩ ≥ 0 and ⟨x, x⟩ = 0 ⇔ x = 0.

A real vector space (V, R) together with a scalar product defined over it is called a Euclidean space. In this context, we have the following notions:

the norm (length) of a vector x ∈ V is ‖x‖ := √⟨x, x⟩;
the measure of the angle between two vectors x, y ∈ V is the real number ∠(x, y) ∈ [0, π] defined by cos ∠(x, y) = ⟨x, y⟩/(‖x‖·‖y‖);
two vectors x, y ∈ V are called orthogonal when ⟨x, y⟩ = 0 and we denote this situation by x ⊥ y.
3.0.24. Example. Over Rⁿ the standard scalar product is ⟨x, y⟩ = Σᵢ₌₁ⁿ xᵢyᵢ.

3.0.25. Example. For the vector space F(X) with X finite, the function ⟨f(·), g(·)⟩ = Σ_{x∈X} f(x)g(x) is a scalar product [since X is finite, the sum is also finite and so there are no existence problems].
3.0.26. Remark. When V is a real vector space with an independent set of at least two vectors, then: ∀x, y ∈ V ∃w_{x,y} ∈ V such that ⟨w_{x,y}, w_{x,y}⟩ = 1 and ⟨w_{x,y}, x − y⟩ = 0.

[When the real vector space has dimension at least two, for each two (noncollinear) vectors there is at least a vector orthogonal to their difference.]

Proof. Consider the independent set {a, b} ⊆ V, and the vectors x, y ∈ V.

When x = y, w = a/√⟨a, a⟩ satisfies the statement.

When x ≠ y, consider the set {α(x − y); α ∈ R} = span(x − y), and z ∈ V \ span(x − y) (such a vector exists since span(x − y) ≠ V).

Find α with ⟨z + α(x − y), x − y⟩ = 0 ⇔ ⟨z, x − y⟩ + α⟨x − y, x − y⟩ = 0 ⇔ α = −⟨z, x − y⟩/‖x − y‖².

The vector v = z − (⟨z, x − y⟩/‖x − y‖²)·(x − y) satisfies ⟨v, x − y⟩ = ⟨z, x − y⟩ − (⟨z, x − y⟩/‖x − y‖²)·⟨x − y, x − y⟩ = 0, and w = v/‖v‖ satisfies the statement:

⟨w, w⟩ = ⟨v, v⟩/‖v‖² = 1 and ⟨w, x − y⟩ = (1/‖v‖)·(⟨z, x − y⟩ − ⟨z, x − y⟩) = 0. □
3.0.27. Remark. A real scalar product over (V, R) is a symmetric bilinear functional such that the attached quadratic form is strictly positive definite. For a fixed real vector space such a real scalar product may be chosen in different ways, leading to different geometric measurements such as the length of a vector, the angle between two vectors, the distance between two vectors and others.

3.0.28. Definition. Let V be a complex vector space. A function ⟨·,·⟩ : V × V → C is called a complex scalar product when it is linear in the first variable, Hermitian-symmetric (⟨x, y⟩ equals the complex conjugate of ⟨y, x⟩) and positive: ⟨x, x⟩ ≥ 0 and ⟨x, x⟩ = 0 ⇔ x = 0.

If a complex scalar product has been defined on the complex vector space V, we say that V is a unitary space.

3.0.29. Example. Over Cⁿ the standard scalar product is ⟨x, y⟩ = Σᵢ₌₁ⁿ xᵢȳᵢ = ȳᵀx, where ȳᵀ is the Hermitian adjoint (the transpose of the complex conjugate¹) of y.

3.0.31. Example. Over the vector space M_{m×n}(K), with K = R or C, one may define the Frobenius scalar product: ⟨A, B⟩ = Tr(B*A) = Σₗ₌₁ᵐ Σₖ₌₁ⁿ aₗₖ b̄ₗₖ [see Definition 7.7.5 and the proof at Remark 7.7.6].

From the Hermitian symmetry and the additivity of the complex scalar product one obtains ⟨x + y, x + y⟩ − ⟨x, x⟩ − ⟨y, y⟩ = ⟨x, y⟩ + ⟨y, x⟩ = 2 Re⟨x, y⟩ [or ⟨x + y, x + y⟩ = ⟨x, x⟩ + 2 Re⟨x, y⟩ + ⟨y, y⟩], and similarly ‖x − y‖² = ‖x‖² − 2 Re⟨x, y⟩ + ‖y‖².

Two vectors x, y ∈ V are called orthogonal if ⟨x, y⟩ = 0 and we denote this fact by x ⊥ y.

3.0.33. Remark. In the second variable the complex scalar product has the properties: ⟨x, y + z⟩ = conj⟨y + z, x⟩ = conj⟨y, x⟩ + conj⟨z, x⟩ = ⟨x, y⟩ + ⟨x, z⟩ [additivity], and ⟨x, λy⟩ = conj⟨λy, x⟩ = conj(λ⟨y, x⟩) = λ̄·conj⟨y, x⟩ = λ̄⟨x, y⟩; "sesqui(linear)" refers to "one and a half times": linear in the first variable and only conjugate-linear in the second.

¹The term "adjoint matrix" may refer to two unrelated notions: "the transpose of the complex-conjugate matrix", or "the transpose of the cofactor matrix" (which appears when inverting a square matrix) (see also Definitions 7.5.5 and 7.5.6). It is an unfortunate situation in which the same name is used for two distinct notions, and the distinction has to be made from context.
Remark. If ⟨u, w⟩ = ⟨v, w⟩ for every w ∈ V, then choose w = u − v to obtain ⟨u − v, u − v⟩ = 0 ⇔ u − v = 0 ⇔ u = v.

3.0.38. Remark. In R² with the standard scalar product, the operator U(x) = [[0, 1], [−1, 0]]·(x₁, x₂)ᵀ has the property ⟨U(x), x⟩ = ⟨(x₂, −x₁), (x₁, x₂)⟩ = 0.

Remark. In a vector space with a (real or complex) scalar product, for a linear operator U(·) with ⟨U(x), x⟩ = 0, ∀x, expanding ⟨U(x + λy), x + λy⟩ = 0 for λ ∈ C gives λ̄⟨U(x), y⟩ + λ⟨U(y), x⟩ = 0; for λ = 1, ⟨U(x), y⟩ + ⟨U(y), x⟩ = 0, while for λ = i, i⟨U(x), y⟩ − i⟨U(y), x⟩ = 0 ⇒ ⟨U(x), y⟩ − ⟨U(y), x⟩ = 0, so in the complex case ⟨U(x), y⟩ = 0 for all x, y.

3.0.39. Theorem. In a vector space with a (real or complex) scalar product:
(1) (Cauchy–Buniakovski) |⟨x, y⟩| ≤ ‖x‖·‖y‖, with equality if and only if the vectors are linearly dependent;
(2) ‖x‖ := √⟨x, x⟩ is a norm: ‖x‖ ≥ 0, ∀x ∈ V; ‖x‖ = 0 ⇔ x = 0; ‖λx‖ = |λ|·‖x‖; ‖x + y‖ ≤ ‖x‖ + ‖y‖;
(3) ‖x ± y‖² = ‖x‖² + ‖y‖² ± 2 Re⟨x, y⟩, and for x ⊥ y we have ‖x + y‖² = ‖x‖² + ‖y‖² (the Pythagorean theorem).
Proof. 1. The Cauchy–Buniakovski inequality is obviously true for ⟨x, y⟩ = 0.

For a Euclidean space: ⟨x + λy, x + λy⟩ ≥ 0, ∀λ ∈ R, ∀x, y ∈ V, i.e. ⟨x, x⟩ + 2λ⟨x, y⟩ + λ²⟨y, y⟩ ≥ 0, ∀λ ∈ R; a nonnegative quadratic in λ must have nonpositive discriminant, so 4⟨x, y⟩² − 4⟨x, x⟩⟨y, y⟩ ≤ 0, ∀x, y ∈ V.

Observe that if x₀ and y₀ are such that ⟨x₀, y₀⟩² = ⟨x₀, x₀⟩⟨y₀, y₀⟩, then there is λ₀ with ⟨x₀, x₀⟩ + 2λ₀⟨x₀, y₀⟩ + λ₀²⟨y₀, y₀⟩ = 0, i.e. ⟨x₀ + λ₀y₀, x₀ + λ₀y₀⟩ = 0, i.e. x₀ + λ₀y₀ = 0, i.e. the vectors x₀ and y₀ are linearly dependent.

For a unitary space choose λ = t·⟨x, y⟩/|⟨x, y⟩| with t ∈ R arbitrary; ⟨x + λy, x + λy⟩ ≥ 0 becomes ⟨x, x⟩ + 2t|⟨x, y⟩| + t²⟨y, y⟩ ≥ 0, ∀t ∈ R, ∀x, y ∈ V, so 4|⟨x, y⟩|² − 4⟨x, x⟩⟨y, y⟩ ≤ 0, ∀x, y ∈ V. Equality is treated as in the real case: from the double root t₀ and λ₀ = t₀·⟨x₀, y₀⟩/|⟨x₀, y₀⟩| we get ⟨x₀ + λ₀y₀, x₀ + λ₀y₀⟩ = 0, whence x₀ + λ₀y₀ = 0, i.e. the vectors x₀ and y₀ are linearly dependent.

2. The quadratic functional attached to the bilinear form defining the scalar product is ‖x‖ = √⟨x, x⟩. The properties to verify are:
a. ‖x‖ ≥ 0, ∀x ∈ V; ‖x‖ = 0 ⇔ x = 0.
b. ‖λx‖ = |λ|·‖x‖, ∀x ∈ V, ∀λ scalar.
c. ‖x + y‖ ≤ ‖x‖ + ‖y‖.
The first two conditions are immediate. Using ⟨x, y⟩ + ⟨y, x⟩ = 2 Re⟨x, y⟩ ≤ 2|⟨x, y⟩| ≤ 2‖x‖·‖y‖, we get ‖x + y‖² = ‖x‖² + 2 Re⟨x, y⟩ + ‖y‖² ≤ (‖x‖ + ‖y‖)², which proves c. Moreover, by Cauchy–Buniakovski, ⟨x, y⟩/(‖x‖·‖y‖) ∈ [−1, 1], so the angle between two nonzero vectors is well defined.
Since ‖x + y‖² = ‖x‖² + 2⟨x, y⟩ + ‖y‖² and ‖x − y‖² = ⟨x − y, x − y⟩ = ‖x‖² − 2⟨x, y⟩ + ‖y‖², we get:

‖x + y‖² + ‖x − y‖² = 2‖x‖² + 2‖y‖² (the parallelogram law);

‖x + y‖ = ‖x − y‖ ⇔ ‖x + y‖² = ‖x − y‖² ⇔ 4⟨x, y⟩ = 0 ⇔ x ⊥ y.

3.0.45. Remark. For p ∈ [1, ∞), the functions ‖x‖_p = (Σₖ₌₁ⁿ |xₖ|^p)^{1/p} are norms over Rⁿ, but only ‖·‖₂ comes from a scalar product. If V is a real normed vector space whose norm comes from a scalar product, that scalar product is recovered as

⟨x, y⟩ = ¼(‖x + y‖² − ‖x − y‖²);

if V is a complex normed vector space and the norm comes from a (complex) scalar product, then the scalar product from which the norm comes is:

⟨x, y⟩ = ¼(‖x + y‖² − ‖x − y‖² + i·‖x + iy‖² − i·‖x − iy‖²).

The Cauchy–Buniakovski inequality may also be written with the Gram determinant:

det [⟨x, x⟩  ⟨x, y⟩]
    [⟨y, x⟩  ⟨y, y⟩] ≥ 0.
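The complex polarization identity can be checked numerically for the standard scalar product ⟨x, y⟩ = Σ xᵢȳᵢ (a sketch; the sample vectors are arbitrary):

```python
def inner(x, y):
    # standard complex scalar product, conjugate-linear in the second variable
    return sum(a * b.conjugate() for a, b in zip(x, y))

def norm_sq(x):
    return inner(x, x).real

def polarization(x, y):
    i = 1j
    return (norm_sq([a + b for a, b in zip(x, y)])
          - norm_sq([a - b for a, b in zip(x, y)])
          + i * norm_sq([a + i * b for a, b in zip(x, y)])
          - i * norm_sq([a - i * b for a, b in zip(x, y)])) / 4

x = [1 + 2j, -3j, 2 - 1j]
y = [2 - 1j, 1 + 1j, -1 + 4j]
print(abs(polarization(x, y) - inner(x, y)) < 1e-12)  # → True
```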
3.0.49. Definition. For any two vectors, the distance between them is the length of the difference: d(x, y) = ‖x − y‖. It satisfies d(x, y) ≥ 0 and d(x, y) = 0 ⇔ x = y.

Remark. If m ∈ V satisfies ‖x − m‖ = ‖m − y‖ = ½‖x − y‖, then m = (x + y)/2.

Proof. By the parallelogram law applied to x − m and m − y:

‖(x − m) + (m − y)‖² + ‖(x − m) − (m − y)‖² = 2‖x − m‖² + 2‖m − y‖² = ‖x − y‖²,

and ‖(x − m) + (m − y)‖² = ‖x − y‖², so ‖(x − m) − (m − y)‖² = 0 ⇒ x − m = m − y ⇒ m = (x + y)/2. □
3.1. Orthogonality

We restrict the presentation to Euclidean spaces.

3.1.1. Remark. If the vectors (xᵢ)ᵢ₌₁,ₖ are pairwise orthogonal, then the generalized Pythagorean theorem holds:

‖Σᵢ₌₁ᵏ xᵢ‖² = Σᵢ₌₁ᵏ ‖xᵢ‖².

Proof. ‖Σᵢ₌₁ᵏ xᵢ‖² = ⟨Σᵢ₌₁ᵏ xᵢ, Σⱼ₌₁ᵏ xⱼ⟩ = Σᵢ₌₁ᵏ Σⱼ₌₁ᵏ ⟨xᵢ, xⱼ⟩ = Σᵢ₌₁ᵏ ⟨xᵢ, xᵢ⟩ = Σᵢ₌₁ᵏ ‖xᵢ‖². □

Similarly, ‖Σᵢ₌₁ᵏ λᵢxᵢ‖² = Σᵢ₌₁ᵏ |λᵢ|²·‖xᵢ‖².
3.1.3. Remark. Orice familie de vectori nenuli si ortogonali doi cte doi este liniar independent
a.
3.1.4. Denition. O familie de vectori fvi gi2I dintrun spatiu euclidian se numeste familie ortogonal
a
dac
a hvi ; vj i = 0, 8i 6= j; i; j 2 I. Dac
a, n plus, kvi k = 1, 8i 2 I familia se numeste ortonormat
a. O baz
a
se numeste ortonormala dac
a este familie ortonormat
a.
148
(the coordinate of a vector x along a member v_k of an orthogonal family is the Fourier coefficient ⟨x, v_k⟩/‖v_k‖²);
dim A⊥ = dim V − dim A and V = A ⊕ A⊥;
(5) If A and B are vector subspaces of V for which dim A + dim B = dim V and A ⊥ B, then V = A ⊕ B, B = A⊥ and A = B⊥;
(6) If A is a vector subspace of V then (A⊥)⊥ = A; in general, (A⊥)⊥ = span(A);
(7) A ⊆ B ⇒ B⊥ ⊆ A⊥, and A⊥ = (span(A))⊥.

Proof.
(1) ∃x ∈ A ∩ B ⇒ ⟨x, x⟩ = 0 ⇒ x = 0.
(2) (⇐) Let x ∈ span{a_i | i ∈ I} be arbitrary. Then x = Σ_{j∈J⊆I} α_j a_j (where J is a finite set of indices) and ⟨x, y⟩ = Σ_{j∈J⊆I} α_j ⟨a_j, y⟩ = 0, ∀y ∈ B. (⇒) Is obtained by particularizing the vectors from the spans, x = Σ_{j∈J₁} α_j a_j and y = Σ_{j∈J₂} β_j b_j with J₁, J₂ finite.
(5) Using the first claim, A ∩ B = {0}, and from dim A + dim B = dim V it follows that V = A ⊕ B. Since B is a vector subspace of A⊥ with dim B = dim V − dim A = dim A⊥, we get B = A⊥.
(6) Is a consequence of the preceding items.
(7) Is a consequence of the preceding items. □
…, v_k). There exists a basis F = (f_i)_{i=1,n} with the properties:
• span(f₁, …, f_k) = V_k, ∀k = 1, n;
• f_{k+1} ⊥ V_k, ∀k ∈ {1, …, n − 1}.

Construction. Take f₁ = v₁ [a basis vector, so its norm is nonzero]. Seek f₂ of the form f₂ = αf₁ + v₂ with
⟨f₁, f₂⟩ = 0 ⇒ α⟨f₁, f₁⟩ + ⟨f₁, v₂⟩ = 0 ⇒ f₂ = v₂ − (⟨f₁, v₂⟩/‖f₁‖²)·f₁ = v₂ − (⟨v₁, v₂⟩/‖v₁‖²)·v₁
[f₂ ≠ 0, because v₁ and v₂ are linearly independent, so its norm is nonzero].
f₃ is a vector of V₃ orthogonal to f₁ and to f₂: f₃ = α₁f₁ + α₂f₂ + v₃ with
⟨f₁, f₃⟩ = 0 and ⟨f₂, f₃⟩ = 0,
that is,
⟨f₁, f₁⟩α₁ + ⟨f₁, f₂⟩α₂ + ⟨f₁, v₃⟩ = 0 and ⟨f₂, f₁⟩α₁ + ⟨f₂, f₂⟩α₂ + ⟨f₂, v₃⟩ = 0;
since ⟨f₁, f₂⟩ = ⟨f₂, f₁⟩ = 0, it follows that
α₁ = −⟨f₁, v₃⟩/‖f₁‖², α₂ = −⟨f₂, v₃⟩/‖f₂‖².
Since f_i and v_i are in the position of perpendicular and, respectively, oblique with respect to the subspace generated by v₁, …, v_{i−1}, it follows that ‖f_i‖ ≤ ‖v_i‖.

Proof. The induction is organized on the dimension of the space.
If E = (e) is a basis of the Euclidean space V, with e ≠ 0, then F = E is obviously an orthogonal system.
For arbitrary k ∈ {1, …, n − 1}, if E = (e₁, e₂, …, e_{k+1}) is a basis of the Euclidean space V, we show that there exists an orthogonal basis F = {f₁, f₂, …, f_{k+1}} of the space V.
By the induction hypothesis there exists an orthogonal system {f₁, f₂, …, f_k}, formed of nonzero vectors, for which span{e₁, e₂, …, e_k} = span{f₁, f₂, …, f_k}.
Let f_{k+1} = e_{k+1} − Σ_{j=1}^{k} ε_j f_j, where the scalars ε₁, …, ε_k are determined from the conditions ⟨f_{k+1}, f_j⟩ = 0, ∀j = 1, k; this gives
f_{k+1} = e_{k+1} − Σ_{j=1}^{k} (⟨e_{k+1}, f_j⟩/⟨f_j, f_j⟩)·f_j.
Obviously F = {f₁, f₂, …, f_{k+1}} is an orthogonal system formed of nonzero vectors. It follows that F is linearly independent. The family F is also a generating system for V (span(F) = V = span(E)).
Moreover, since for each i ∈ {2, …, n}, f_i and e_i are in the position of perpendicular and, respectively, oblique with respect to the subspace generated by e₁, …, e_{i−1}, it follows that ‖f_i‖ ≤ ‖e_i‖. □
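The construction above can be sketched in code; a minimal illustration (the function name and the test vectors are mine, not from the text):

```python
import numpy as np

def gram_schmidt(vectors):
    """f_k = v_k - sum_j (<f_j, v_k> / <f_j, f_j>) * f_j  (no normalization)."""
    fs = []
    for v in vectors:
        f = v - sum(((g @ v) / (g @ g)) * g for g in fs)
        fs.append(f)
    return fs

v = [np.array([1.0, 1.0, 0.0]),
     np.array([1.0, 0.0, 1.0]),
     np.array([0.0, 1.0, 1.0])]
f = gram_schmidt(v)

# pairwise orthogonal, and each ||f_i|| <= ||v_i||
print(all(abs(f[i] @ f[j]) < 1e-12 for i in range(3) for j in range(i + 1, 3)))  # True
```

The loop mirrors the induction step: each new vector is the old one minus its Fourier components along the already-built orthogonal system.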
…, f_k a basis, denoted B₀, of V₀, and v an arbitrary vector, in general not belonging to V₀. The projection Pr_{V₀}v is required to satisfy:
• Pr_{V₀}v ∈ V₀;
• v − Pr_{V₀}v ⊥ V₀, that is, ⟨v − Pr_{V₀}v, f_j⟩ = 0, ∀j = 1, k;
• Pr_{V₀}v = Σ_{i=1}^{k} α_i f_i.
The coefficients α_i are found from the requirement that v − Pr_{V₀}v be orthogonal to V₀, i.e. to each of the f_j:
⟨v − Σ_{i=1}^{k} α_i f_i, f_j⟩ = 0, ∀j = 1, k ⇒ Σ_{i=1}^{k} α_i ⟨f_i, f_j⟩ = ⟨v, f_j⟩, ∀j = 1, k.
The matrix of this linear system (the Gram matrix of f₁, …, f_k) is nonsingular, because it represents, in the basis f₁, …, f_k, the matrix of the restriction to V₀ of the functional that defines the inner product; so the system has a unique solution (α̂₁, …, α̂_k).
Then
[Pr_{V₀}v]_{B₀} = (α̂₁, …, α̂_k)ᵀ (∈ V₀), [Pr_{V₀}v]_E = Σ_{i=1}^{k} α̂_i [f_i]_E (∈ V₀ ⊆ V),
[v − Pr_{V₀}v]_E = [v]_E − Σ_{i=1}^{k} α̂_i [f_i]_E,
and
‖v‖² = ‖Pr_{V₀}v‖² + ‖v − Pr_{V₀}v‖²,
because the vectors Pr_{V₀}v and v − Pr_{V₀}v are perpendicular.
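The normal system Σ_i α_i⟨f_i, f_j⟩ = ⟨v, f_j⟩ can be solved numerically; a small sketch (the basis and the vector are illustrative choices of mine):

```python
import numpy as np

# the columns of F form a basis f1, f2 of a plane V0 in R^3
F = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0]])
v = np.array([1.0, 2.0, 3.0])

G = F.T @ F                      # Gram matrix <f_i, f_j> (nonsingular)
b = F.T @ v                      # right-hand side <v, f_j>
alpha = np.linalg.solve(G, b)    # coordinates of the projection in B0
proj = F @ alpha                 # Pr_V0 v

residual = v - proj              # orthogonal to every f_j
print(np.allclose(F.T @ residual, 0))                         # True
print(np.isclose(v @ v, proj @ proj + residual @ residual))   # True (Pythagoras)
```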
Let v₀ ∈ V₀ be arbitrary. Then v − v₀ = (v − Pr_{V₀}v) + (Pr_{V₀}v − v₀). Since v − Pr_{V₀}v is orthogonal to V₀, it is orthogonal to every vector of V₀, hence to Pr_{V₀}v − v₀ ∈ V₀. Then, by the Pythagorean Theorem:
‖v − v₀‖² = ‖Pr_{V₀}v − v₀‖² + ‖v − Pr_{V₀}v‖² ≥ ‖v − Pr_{V₀}v‖²,
that is, the length of any oblique is greater than the length of the perpendicular.

Proof. We evaluate min_{u∈V₀} f(u), where f(u) = ‖u − v‖. If v ∈ V₀, then the minimum is zero, it is attained exactly at v, and it is unique; for v ∉ V₀ and u ∈ V₀, u − v is an oblique from v and so it is longer than the perpendicular from v:
‖u − v‖ ≥ ‖Pr_{V₀}v − v‖,
so
∀u ∈ V₀, f(u) ≥ f(Pr_{V₀}v),
and therefore
min_{u∈V₀} f(u) = f(Pr_{V₀}v) = ‖v − Pr_{V₀}v‖;
that is, for fixed v and u ∈ V₀, the minimal value of the expression ‖u − v‖ is attained at u = Pr_{V₀}v. □
Estimates

Notation:
• Y — the "response/dependent/predicted" variable; the fitted value for observation i is Ŷ_i = X_i β̂;
• p — the number of explanatory variables [columns of the matrix X];
• X_j — a column [contains the n observations for one and the same variable];
• n — the number of observations;
• X_i — a row of the matrix X [observation i on all the predictors]; [the same letter X with an index denotes a column (index j) or a row (index i), as the context indicates]
• β̂ = (β̂₀, β̂₁, …, β̂_p)ᵀ — the vector of estimated coefficients;
• ε̂_i = y_i − X_i β̂ — the residual of observation i.

Observation  Response   Predictors
  number                intercept  X_1   X_2  …  X_p
    1           y₁          1      x₁₁   x₁₂  …  x₁p
    2           y₂          1      x₂₁   x₂₂  …  x₂p
    3           y₃          1      x₃₁   x₃₂  …  x₃p
    ⋮           ⋮           ⋮       ⋮     ⋮        ⋮
    n           y_n         1      x_n1  x_n2 …  x_np

One tries to explain the observed response in terms of the observed predictors³; for this, a matrix X [described above] containing the observations is used. Although the rows and the columns of the matrix X are both vectors, their meanings are distinct, so the same algebraic operations will mean different things for rows and for columns.

³ It should be noted that the difference between "what is observed to happen" and "what happens" is a conceptual difference which, it seems, was included in modelling for the first time within Quantum Mechanics [the difference between reality and the observation of reality].
3.3.1. Operations with Columns. The columns X_j = (x_{1j}, x_{2j}, x_{3j}, …, x_{nj})ᵀ contain, for each j, different instances of the same object (different values of the same type, measured in the same way); they are different realizations of the same variable.

Write i = (1, 1, …, 1)ᵀ ∈ Rⁿ for the column of n ones, so that iᵀ = (1, …, 1) is 1 × n and
i·iᵀ = the n × n square matrix with all entries equal to 1.

• The mean of the column X_j is the scalar X̄_j = (1/n)·Σ_{i=1}^{n} x_{ij}.
• Σ_{i=1}^{n} x_{ij} = iᵀX_j = n·X̄_j, so X̄_j = (1/n)·iᵀX_j.
• (1/n)·i·iᵀ·X_j = i·X̄_j = (X̄_j, X̄_j, …, X̄_j)ᵀ [the vector with the mean X̄_j in each component].
• X_j − i·X̄_j = (I_n − (1/n)·i·iᵀ)·X_j = M⁰X_j [the n-dimensional vector of the deviations from the mean of the vector X_j]. [It may also be written X_j − X̄_j if one uses a convention common in various programming languages, namely that adding a scalar to a vector means adding the scalar to each component of the vector.]

The matrix M⁰ = I_n − (1/n)·i·iᵀ transforms a vector into a new vector whose components are the old components minus the mean of the old components [the deviation-from-the-mean operator]. M⁰ is used mostly in computing sums of squares of deviations.

Properties of the operations with i and with M⁰:
• M⁰ is symmetric: (M⁰)ᵀ = M⁰;
• explicitly,
M⁰ = [ 1 − 1/n   −1/n    …   −1/n   ]
     [  −1/n    1 − 1/n  …   −1/n   ]
     [   ⋮         ⋮     ⋱     ⋮    ]
     [  −1/n     −1/n    …  1 − 1/n ]
[the elements on the diagonal are 1 − 1/n, and those off the diagonal are −1/n];
• det M⁰ = 0 [for instance, add all the rows to row 1; row 1 becomes zero], so rank(M⁰) < n.
• M⁰·M⁰ = M⁰ (M⁰ is idempotent).
Proof:
M⁰M⁰ = (I_n − (1/n)·i·iᵀ)(I_n − (1/n)·i·iᵀ) = I_n − (2/n)·i·iᵀ + (1/n²)·i·iᵀ·i·iᵀ = I_n − (2/n)·i·iᵀ + (1/n²)·i·(n)·iᵀ = I_n − (1/n)·i·iᵀ = M⁰,
since iᵀi = n.

• For column j:
Σ_{i=1}^{n} (x_{ij} − X̄_j)² = X_jᵀM⁰X_j.
Proof: Σ_{i=1}^{n} (x_{ij} − X̄_j)² = (X_j − i·X̄_j)ᵀ(X_j − i·X̄_j) = (M⁰X_j)ᵀ(M⁰X_j) = X_jᵀ(M⁰)ᵀM⁰X_j = X_jᵀM⁰M⁰X_j = X_jᵀM⁰X_j.

• For two distinct columns j₁ and j₂:
Σ_{i=1}^{n} (x_{ij₁} − X̄_{j₁})(x_{ij₂} − X̄_{j₂}) = X_{j₁}ᵀM⁰X_{j₂}.
Proof: Σ_{i=1}^{n} (x_{ij₁} − X̄_{j₁})(x_{ij₂} − X̄_{j₂}) = (M⁰X_{j₁})ᵀ(M⁰X_{j₂}) = X_{j₁}ᵀM⁰M⁰X_{j₂} = X_{j₁}ᵀM⁰X_{j₂}.

• For the whole matrix:
M⁰X = M⁰[X_1 X_2 … X_p] = [M⁰X_1 M⁰X_2 … M⁰X_p] = [X_1 − X̄_1  X_2 − X̄_2  …  X_p − X̄_p].
Premultiplication by M⁰ lowers the initial rank of the matrix X, since rank(M⁰X) ≤ min(rank(M⁰), rank(X)).
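The properties of M⁰ can be checked numerically; a short sketch (the data values are illustrative):

```python
import numpy as np

n = 5
i = np.ones((n, 1))                      # the column of ones
M0 = np.eye(n) - (1 / n) * (i @ i.T)     # deviation-from-the-mean operator

x = np.array([3.0, 7.0, 1.0, 4.0, 10.0])

print(np.allclose(M0 @ x, x - x.mean()))                     # True
print(np.allclose(M0 @ M0, M0))                              # True (idempotent)
print(np.isclose(x @ M0 @ x, ((x - x.mean()) ** 2).sum()))   # True
print(abs(np.linalg.det(M0)) < 1e-12)                        # True (singular)
```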
The rows of X are X_i = (1, x_{i1}, x_{i2}, …, x_{ip}) [they may be considered as the observations: row i collects observation i on all the predictors, preceded by the intercept 1]. For each row, X_iᵀX_i is the (p+1) × (p+1) matrix

[ 1       x_{i1}        x_{i2}       …  x_{ip}       ]
[ x_{i1}  x_{i1}²       x_{i1}x_{i2} …  x_{i1}x_{ip} ]
[ x_{i2}  x_{i2}x_{i1}  x_{i2}²      …  x_{i2}x_{ip} ]
[  ⋮        ⋮             ⋮          ⋱    ⋮          ]
[ x_{ip}  x_{ip}x_{i1}  x_{ip}x_{i2} …  x_{ip}²      ]

The observation matrix is

X = [ 1  x_{11}  x_{12}  …  x_{1p} ]   [ X_1 ]
    [ 1  x_{21}  x_{22}  …  x_{2p} ]   [ X_2 ]
    [ 1  x_{31}  x_{32}  …  x_{3p} ] = [ X_3 ]  (a column of rows);
    [ ⋮    ⋮       ⋮           ⋮   ]   [  ⋮  ]
    [ 1  x_{n1}  x_{n2}  …  x_{np} ]   [ X_n ]

Xᵀ = [X_1ᵀ X_2ᵀ … X_nᵀ], and

XᵀX = [X_1ᵀ X_2ᵀ … X_nᵀ]·[ X_1 ]
                         [ X_2 ]
                         [  ⋮  ] = Σ_{i=1}^{n} X_iᵀX_i.
                         [ X_n ]
dim(Span{X_j, j = 0, p}) [the column space S_c] = dim(Span{X_i, i = 1, n}) [the row space S_l];
a complete observation is the row O_i = [y_i  X_i].

With
Y = (y₁, y₂, y₃, …, y_n)ᵀ, β = (β₀, β₁, β₂, …, β_p)ᵀ, ε = (ε₁, ε₂, ε₃, …, ε_n)ᵀ,
the model is:
Y = β₀·1 + β₁X_1 + β₂X_2 + … + β_pX_p + ε = [1 X_1 X_2 … X_p]·β + ε = Xβ + ε,
where the X_j here are the columns of X; row by row,
Xβ + ε = (X₁β, X₂β, …, X_nβ)ᵀ + ε,
so that Y − Xβ = ε, which may also be written
[Y  X]·(1, −β)ᵀ = ε.
When the model is regarded from the perspective of the columns, one obtains a distinct unit of measurement for each β_j, which governs the interpretation of the coefficients β_j.
The sum of squared residuals is
E(β) = (Y − Xβ)ᵀ(Y − Xβ) = YᵀY − YᵀXβ − βᵀXᵀY + βᵀXᵀXβ = YᵀY − 2YᵀXβ + βᵀXᵀXβ
[the products YᵀXβ and βᵀXᵀY are 1 × 1 and equal to each other]. From the perspective of the rows,
Y − Xβ = [O_i·(1, −β)ᵀ]_{i=1,n}, so (Y − Xβ)ᵀ(Y − Xβ) = Σ_{i=1}^{n} (O_i·(1, −β)ᵀ)².

The minimization problem min_β E(β) = E(β̂) is solved [and β̂ is the solution of the problem]: β̂ is such that the residual e = Y − Xβ̂ is orthogonal to Xβ̂,
YᵀXβ̂ − β̂ᵀXᵀXβ̂ = 0 ⇔ (Y − Xβ̂)ᵀXβ̂ = 0,
and, more generally, e is orthogonal to every column of X:
Xᵀ(Y − Xβ̂) = 0 ⇔ XᵀY = XᵀXβ̂ (the normal equations).
A possible solution is β̂ = (XᵀX)⁻¹XᵀY.

An update formula [[20]]: when a new observation X_(n+1) (a new row) arrives, the matrix becomes XᵀX + X_(n+1)ᵀX_(n+1), and
(XᵀX + X_(n+1)ᵀX_(n+1))⁻¹ = (XᵀX)⁻¹ − [(XᵀX)⁻¹·X_(n+1)ᵀX_(n+1)·(XᵀX)⁻¹] / [1 + X_(n+1)·(XᵀX)⁻¹·X_(n+1)ᵀ].

In explicit form, Xᵀ is the (p+1) × n matrix

Xᵀ = [ 1ᵀ   ]   [  1      1      1     …   1     ]
     [ X_1ᵀ ]   [ x_{11}  x_{21}  x_{31} … x_{n1} ]
     [ X_2ᵀ ] = [ x_{12}  x_{22}  x_{32} … x_{n2} ]
     [  ⋮   ]   [  ⋮       ⋮       ⋮          ⋮   ]
     [ X_pᵀ ]   [ x_{1p}  x_{2p}  x_{3p} … x_{np} ]

(with X_jᵀ = (x_{1j}, x_{2j}, x_{3j}, …, x_{nj})), and XᵀX = Σ_{i=1}^{n} X_iᵀX_i is the (p+1) × (p+1) matrix

XᵀX = [  n       Σx_{i1}         Σx_{i2}        …  ]   [ iᵀi     iᵀX_1     …  iᵀX_p   ]
      [ Σx_{i1}  Σx_{i1}²        Σx_{i1}x_{i2}  …  ] = [ X_1ᵀi   X_1ᵀX_1   …  X_1ᵀX_p ]
      [ Σx_{i2}  Σx_{i2}x_{i1}   Σx_{i2}²       …  ]   [  ⋮        ⋮       ⋱    ⋮     ]
      [  ⋮        ⋮               ⋮             ⋱  ]   [ X_pᵀi   X_pᵀX_1   …  X_pᵀX_p ]
      [ Σx_{ip}  Σx_{ip}x_{i1}   Σx_{ip}x_{i2}  …  ]

(all sums over i = 1, n). Hence
β̂ = (XᵀX)⁻¹XᵀY (under the hypothesis that the matrix XᵀX is invertible).

Ranks: rank(AB) ≤ min(rank(A), rank(B)).

Example:
The observations [nine yearly observations on the variables Year, Consumption, GNP]:

Xᵀ = [ 1972    1973    1974    1975    1976    1977    1978    1979    1980   ]
     [ 737.1   812.0   808.1   976.4   1084.3  1204.4  1346.5  1507.2  1667.2 ]
     [ 1185.9  1326.4  1434.2  1549.2  1718.0  1918.3  2163.9  2417.8  2633.1 ]

XᵀX = [ 35141244     2.005·10⁷    3.2312·10⁷ ]
      [ 2.005·10⁷    1.2300·10⁷   1.9744·10⁷ ]
      [ 3.2312·10⁷   1.9744·10⁷   3.1715·10⁷ ]

det(XᵀX) = 4.6064·10¹⁷ ≠ 0 (invertible), and

(XᵀX)⁻¹ = [  5.8389·10⁻⁷    4.5206·10⁻⁶   −3.4091·10⁻⁶ ]
          [  4.5206·10⁻⁶    1.5291·10⁻⁴   −9.9802·10⁻⁵ ]
          [ −3.4091·10⁻⁶   −9.9802·10⁻⁵    6.5636·10⁻⁵ ]

With
Y = (4.50, 6.44, 7.83, 6.25, 5.50, 5.46, 7.46, 10.28, 11.77)ᵀ,
one obtains
β̂ = (XᵀX)⁻¹XᵀY = (−6.9004·10⁻⁴, −2.9435·10⁻², 2.2791·10⁻²)ᵀ.

Significance: the explained variable (Discount Rate) is explained by using the explanatory variables Year, Consumption, GNP. The "explanation" obtained is:
DiscountRate = (−6.9004·10⁻⁴)·Year + (−2.9435·10⁻²)·Consumption + (2.2791·10⁻²)·GNP.
[The economic adequacy, the quality of the explanation, etc. are not discussed.]
3.4. Special Types of Operators

3.4.1. Projection Operators.

3.4.1. Definition. Let V = V₁ ⊕ V₂, so that every v ∈ V has a unique decomposition v = v₁ + v₂ with v₁ ∈ V₁, v₂ ∈ V₂; the operator p(·) : V → V, p(v) = v₁, is a projection (onto V₁, along V₂), and it satisfies p²(v) = p(v).
From p²(v) = p(v) it follows that p(v − p(v)) = 0, so v − p(v) ∈ ker p(·). Hence V = p(V) + ker p(·), and the sum is direct, because v ∈ p(V) ∩ ker p(·) ⇒ 0 = p(v) and ∃u, v = p(u) ⇒ 0 = p(v) = p²(u) = p(u) = v, so
p(V) ∩ ker p(·) = {0}.

3.4.4. Proposition. Let p(·) : V → V be a projection operator. Then:
(1) x ∈ Im p(·) = p(V) ⇔ x is zero or is an eigenvector corresponding to the eigenvalue 1;
(2) x ∈ ker p(·) ⇔ x is zero or is an eigenvector corresponding to the eigenvalue 0; [the spectral structure of a projection is: the eigenvalues are only 0 and 1 (with various degrees of multiplicity), while the eigenvectors are the vectors of the image of the operator (for the eigenvalue 1) and of the kernel (for the eigenvalue 0)]

Proof. From the previous Theorem it is known that:
x = 0 ⇒ p(0) = 0 ⇒ 0 ∈ p(V);
0 ≠ x ∈ p(V) ⇔ p(x) = p(p(x)) ⇒ p(x) ∈ p(V), with either p(x) = 0 or p(x) ≠ 0. □

Moreover, ker(1_V − p)(·) = Im p(·) and Im(1_V − p)(·) = ker p(·):
x ∈ ker(1_V − p)(·) ⇒ x − p(x) = 0 ⇒ x = p(x) ∈ Im p(·);
x ∈ Im(1_V − p)(·) ⇒ ∃y, x = (1_V − p)(y) = y − p(y) ⇒ p(x) = p(y) − p(p(y)) = p(y) − p(y) = 0 ⇒ x ∈ ker p(·);
x ∈ ker p(·) ⇒ p(x) = 0 ⇒ x = x − p(x) = (1_V − p)(x) ⇒ x ∈ Im(1_V − p)(·).

Example. In R², consider p(·) with the matrix
[ 0  0 ]
[ α  1 ],
so p(x₁, x₂) = (0, αx₁ + x₂). Then
p(p(x)) = [ 0  0 ][ 0  0 ][ x₁ ] = [ 0  0 ][ x₁ ] = p(x),
          [ α  1 ][ α  1 ][ x₂ ]   [ α  1 ][ x₂ ]
so p(·) is a projection. The kernel: (0, αx₁ + x₂) = (0, 0) ⇒ αx₁ + x₂ = 0 ⇒ ker p(·) = span{(1, −α)}. The image: (0, αx₁ + x₂) = (y₁, y₂) requires y₁ = 0 (the compatibility condition) ⇒ Im p(·) = span{(0, 1)}, and p((0, λ)) = (0, λ) (the projection does not change the vectors of its image). The projection p(·) is orthogonal ⇔ α = 0.
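The 2 × 2 example can be checked directly; a small sketch with α = 2 (an arbitrary illustrative value):

```python
import numpy as np

a = 2.0
P = np.array([[0.0, 0.0],
              [a,   1.0]])

print(np.allclose(P @ P, P))                               # True: idempotent
print(np.allclose(P @ np.array([1.0, -a]), 0))             # True: (1, -a) in ker P
print(np.allclose(P @ np.array([0.0, 1.0]), [0.0, 1.0]))   # True: image is fixed
```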
Properties of the adjoint matrix A*:

3.4.10. Remark.
• (AB)* = B*·A*;
• (A*)* = A;
• If A is square, then det A* = conj(det A) [the determinant of the adjoint matrix is the complex conjugate of the determinant of the matrix];
• (A⁻¹)* = (A*)⁻¹ [if A is invertible, then A* is also invertible];
• (U ∘ U₁)*(·) = U₁*(·) ∘ U*(·); (αU)*(·) = ᾱ·U*(·); (U⁻¹)*(·) = (U*)⁻¹(·) [if U(·) is invertible, then U*(·) is also invertible].

Proof. First observe that all the sets which appear are vector subspaces of the corresponding spaces: for U(·) : X → Y, ker U(·) is a subspace of X and Im U(·) is a subspace of Y, while ker U*(·) is a subspace of Y and Im U*(·) is a subspace of X.
Observe also that, if X₀ is a subspace of X, then (X₀⊥)⊥ = X₀. □
…, x₁). Determine the adjoint of the operator U(·), if on R² the usual inner product is considered, while on R³ the inner product from point 1 is considered.

3.4.13. Solution. (1) ⟨(x₁, x₂, x₃), (y₁, y₂, y₃)⟩_{R³} = [x]ᵀ_{E₃}·G·[y]_{E₃}, where

G = [ 10  3  0 ]
    [  3  2  1 ]
    [  0  1  1 ].

The matrix defining the inner product on R³ is symmetric and, by the Jacobi method, its leading principal minors Δ₀ = 1, Δ₁ = 10, Δ₂ = 11, Δ₃ = det G = 1 are positive, so it indeed defines an inner product.
For example,
⟨(1, 1, 1), (z₁, z₂, z₃)⟩_{R³} = (1, 1, 1)·G·(z₁, z₂, z₃)ᵀ = 13z₁ + 6z₂ + 2z₃.

(2) With e₁ = (1, 0, 0), e₂ = (0, 1, 0), e₃ = (0, 0, 1), orthogonalize with respect to this inner product: f₁ = e₁ and
f₂ = e₂ − (⟨e₁, e₂⟩_{R³}/⟨e₁, e₁⟩_{R³})·e₁ = e₂ − (3/10)·e₁.
A vector z orthogonal (in this inner product) to both (1, 1, 1) and (1, 2, 1) must satisfy the system
16z₁ + 8z₂ + 3z₃ = 0, 13z₁ + 6z₂ + 2z₃ = 0,
whose solution is:
(z₁, z₂, z₃) = α·(2, −7, 8), α ∈ R.
From ‖U(x₁ − x₂)‖_Y = ‖x₁ − x₂‖_X it follows that
⟨x₁ − x₂, x₁ − x₂⟩ = ⟨U(x₁ − x₂), U(x₁ − x₂)⟩ = ⟨U(x₁) − U(x₂), U(x₁) − U(x₂)⟩,
that is,
⟨x₁, x₁⟩ − 2⟨x₁, x₂⟩ + ⟨x₂, x₂⟩ = ⟨U(x₁), U(x₁)⟩ − ⟨U(x₁), U(x₂)⟩ − ⟨U(x₂), U(x₁)⟩ + ⟨U(x₂), U(x₂)⟩,
so the inner product is preserved.

a) Additivity: from ‖x₁ − x₂‖ = ‖f(x₁) − f(x₂)‖, the midpoint (x₁ + x₂)/2 satisfies
‖(x₁ + x₂)/2 − x₁‖ = ½‖x₁ − x₂‖ = ½‖f(x₁) − f(x₂)‖,
and likewise for x₂, so f((x₁ + x₂)/2) satisfies the same relations with respect to f(x₁) and f(x₂). From the "Characterization of the midpoint" it follows that
f((x₁ + x₂)/2) = (f(x₁) + f(x₂))/2 (the Jensen functional equation).
For x₂ = 0 [and f(0) = 0]: f(x₁/2) = ½f(x₁). Then
(f(x₁) + f(x₂))/2 = f((x₁ + x₂)/2) = ½f(x₁ + x₂) ⇒ f(x₁ + x₂) = f(x₁) + f(x₂) (additivity).

b) Homogeneity: let λ ∈ R and let (λ_n)_{n∈N} be a sequence of rationals with λ_n → λ; by additivity, f(λ_n x) = λ_n f(x). The following holds:
‖f(λx) − λf(x)‖ ≤ ‖f(λx) − f(λ_n x)‖ + ‖f(λ_n x) − λf(x)‖ = ‖λx − λ_n x‖ + ‖λ_n f(x) − λf(x)‖ = |λ − λ_n|·(‖x‖ + ‖f(x)‖), ∀n ∈ N.
Since ‖x‖ + ‖f(x)‖ is a fixed number and |λ − λ_n| → 0, it follows that f(λx) = λf(x) (homogeneity).

2. From the initial relation with x₂ = 0 one obtains ‖x‖ = ‖f(x)‖, and since f(·) is a linear operator it follows that it is also an isometry.
[Thus a map with f(0) = 0 and ‖x₁ − x₂‖ = ‖f(x₁) − f(x₂)‖ is a linear isometry.] An operator U(·) is unitary (orthogonal, in the real case) when U*(·) ∘ U(·) = 1_V, i.e. U⁻¹(·) = U*(·) [in matrix form, in an orthonormal basis, A⁻¹ = A*; in the real case, A⁻¹ = Aᵀ].

3.4.25. Remark.
• ‖U(x)‖ = ‖x‖ (preserves the norm);
• ‖U(x) − U(y)‖ = ‖x − y‖ (preserves the distance).

3.4.26. Proposition. Every eigenvalue of a unitary operator is a real or complex number of unit modulus [even if the operator is orthogonal, the eigenvalues may be complex].
Proof. For an eigenvector x with eigenvalue λ, ⟨U(x), U(x)⟩ = ⟨λx, λx⟩ = λλ̄·⟨x, x⟩ = ⟨x, x⟩, so λλ̄ = 1, that is, |λ| = 1. □

[Canonical real structure] In a suitable orthonormal basis, the matrix of an orthogonal operator is block diagonal, with 2 × 2 blocks
[ cos θ₁  −sin θ₁ ]        [ cos θ_m  −sin θ_m ]
[ sin θ₁   cos θ₁ ],  …,   [ sin θ_m   cos θ_m ],
followed by diagonal entries ±1.
Geometrically, this structure means rotations (without homotheties) in the orthogonal planes corresponding to the complex eigenvalues; the directions corresponding to the real eigenvalues, combined two by two, form orthogonal planes in which rotations of 0° or of 180° are performed, to which, in the odd case, a symmetry or an identity on the last direction is added.
Example. The rotation of angle θ in R²:
U(x) = [ cos θ  −sin θ ][ x₁ ]
       [ sin θ   cos θ ][ x₂ ].
For x ≠ 0,
⟨x, U(x)⟩/(‖x‖·‖U(x)‖) = [x₁(x₁ cos θ − x₂ sin θ) + x₂(x₂ cos θ + x₁ sin θ)]/(x₁² + x₂²) = cos θ,
so the angle between x and U(x) is θ.

Conversely, consider an orthogonal matrix
A = [ a  b ]
    [ c  d ].
From A·Aᵀ = I:
[ a² + b²   ac + bd ]   [ 1  0 ]
[ ac + bd   c² + d² ] = [ 0  1 ],
with a² + b² = 1. If det A = 1, one obtains a = d, b = −c, so
A = [ a   b ]
    [ −b  a ], with a² + b² = 1;
setting cos θ = a and sin θ = −b, the general form is
A = [ cos θ  −sin θ ]
    [ sin θ   cos θ ] (a rotation).
If det A = −1, one obtains a = −d, b = c, so
A = [ a   b ]
    [ b  −a ], with a² + b² = 1 (a symmetry with respect to a straight line).
[Normal operators] An operator is normal when (U ∘ U*)(·) = (U* ∘ U)(·) (the operator commutes with its adjoint) [for the compositions to make sense, the operator must have the same space as domain and codomain].

3.4.32. Proposition. Every eigenvector of a normal operator, attached to the eigenvalue λ, is an eigenvector of the adjoint operator, attached to the eigenvalue λ̄; that is, the operator U*(·) transforms the eigenvectors of the operator U(·) corresponding to the eigenvalue λ into eigenvectors corresponding to λ̄.

[Canonical real structure] In a suitable orthonormal basis, the matrix of a normal operator is block diagonal, with 2 × 2 blocks
[ α_j  −β_j ]
[ β_j   α_j ]
for the complex eigenvalues λ_j = α_j + iβ_j, j = 1, m, followed by the real eigenvalues λ_{m+1}, … on the diagonal, the number of appearances of each eigenvalue being equal to its order of multiplicity.

3.4.34. Remark. A normal operator represents a transformation composed of rotations with homotheties in m pairwise orthogonal planes and (only) homotheties in the other r − m orthogonal directions. Indeed, each block can be written

[ α_j  −β_j ]                  [ α_j/√(α_j² + β_j²)  −β_j/√(α_j² + β_j²) ]
[ β_j   α_j ] = √(α_j² + β_j²)·[ β_j/√(α_j² + β_j²)   α_j/√(α_j² + β_j²) ],

that is, a homothety of coefficient √(α_j² + β_j²) = |λ_j| composed with a rotation.
Proof. Let x = Σ_{i=1}^{n} x_i e_i, y = Σ_{j=1}^{n} y_j e_j in an orthonormal basis, and U(e_i) = Σ_{k=1}^{n} a_{ki} e_k. Then
⟨U(x), y⟩ = ⟨U(Σ_{i=1}^{n} x_i e_i), Σ_{j=1}^{n} y_j e_j⟩ = Σ_{i=1}^{n} Σ_{j=1}^{n} x_i y_j ⟨U(e_i), e_j⟩ = Σ_{i=1}^{n} Σ_{j=1}^{n} x_i y_j ⟨Σ_{k=1}^{n} a_{ki} e_k, e_j⟩ = Σ_{i=1}^{n} Σ_{j=1}^{n} x_i y_j a_{ji}
and, analogously,
⟨x, U(y)⟩ = ⟨Σ_{i=1}^{n} x_i e_i, U(Σ_{j=1}^{n} y_j e_j)⟩ = Σ_{i=1}^{n} Σ_{j=1}^{n} x_i y_j ⟨e_i, U(e_j)⟩ = Σ_{i=1}^{n} Σ_{j=1}^{n} x_i y_j ⟨e_i, Σ_{k=1}^{n} a_{kj} e_k⟩ = Σ_{i=1}^{n} Σ_{j=1}^{n} x_i y_j a_{ij}.
From ⟨U(x), y⟩ = ⟨x, U(y)⟩ for all x, y it follows that a_{ij} = a_{ji}, that is, the matrix is symmetric (A = Aᵀ). □

3.4.38. Theorem. The eigenvalues of a real symmetric operator are real.
Proof. Av = λv; multiplying on the left by v̄ᵀ: v̄ᵀAv = λv̄ᵀv = λ‖v‖². Conjugating Av = λv gives Av̄ = λ̄v̄, and, after multiplying on the left by vᵀ: vᵀAv̄ = λ̄vᵀv̄ = λ̄‖v‖². But
v̄ᵀAv = (v̄ᵀAv)ᵀ = vᵀAᵀv̄ = vᵀAv̄,
so λ‖v‖² = λ̄‖v‖², hence λ = λ̄, i.e. λ ∈ R. □

3.4.39. Theorem. Eigenvectors associated with distinct eigenvalues of a real symmetric operator are pairwise orthogonal.
Proof.
Av₁ = λ₁v₁ ⇒ v₂ᵀAv₁ = λ₁v₂ᵀv₁;
Av₂ = λ₂v₂ ⇒ v₁ᵀAv₂ = λ₂v₁ᵀv₂;
v₂ᵀAv₁ = (v₂ᵀAv₁)ᵀ = v₁ᵀAᵀv₂ = v₁ᵀAv₂ and v₂ᵀv₁ = v₁ᵀv₂,
so λ₁v₁ᵀv₂ = λ₂v₁ᵀv₂; since λ₁ ≠ λ₂, it follows that v₁ᵀv₂ = 0. □
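A quick numeric check of the last two theorems (the matrix is an arbitrary symmetric example of mine, with eigenvalues 1, 2, 4):

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])   # real symmetric: A == A.T

w, V = np.linalg.eig(A)           # generic routine; symmetry is not assumed

print(np.isrealobj(w) or np.allclose(w.imag, 0))   # True: eigenvalues are real
print(np.allclose(V.T @ V, np.eye(3)))             # True: eigenvectors orthogonal
```

The eigenvalues here are distinct, so the second print illustrates Theorem 3.4.39 directly.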
[An operator is anti-self-adjoint when U*(·) = −U(·).]

3.4.43. Proposition. An anti-self-adjoint operator has only purely imaginary eigenvalues.
Proof. From U*(·) = −U(·) it follows that λ̄ = −λ, that is, the real part of the eigenvalue is zero. □

3.4.44. Remark. The cells of the matrix in the canonical real Jordan basis take the special form
[ 0  −β ]
[ β   0 ],
which, from the geometric point of view, means homotheties and rotations of 90° in pairwise orthogonal planes.
[Exchange prices — an application⁴] Three firms, T, E and S, work for one another and for themselves during a fixed period (10 weeks). The necessary activity is given in the table [rows: beneficiary; columns: provider; entries in weeks]:

            T    E    S
   T        2    1    6
   E        4    5    1
   S        4    4    3
 total     10   10   10

For reasons related to legislation and taxes, each firm is obliged to declare, to receive and to pay reasonable amounts corresponding to the activities performed. The value/price of one week of activity for each firm must be around 1000 Euro, and the firms agree to adjust their values so that each pays exactly as much as it receives (a barter system — exchange in kind — among the three: in fact, each pays with the service it offers for the services it receives). Denote by p_T, p_E, p_S the price received by each firm for a week of services. The equilibrium condition requires that the total received by each firm be equal to the total paid by it. One obtains the system:
2p_T + p_E + 6p_S = 10p_T;
4p_T + 5p_E + 1p_S = 10p_E;
4p_T + 4p_E + 3p_S = 10p_S.
The first equation refers to T, and its interpretation is: in the given period, the firm T is paid 10p_T for its activity (the right-hand side) and has to pay for 2 weeks of activity towards itself at the price p_T, 1 week of activity towards E at the price p_E and 6 weeks of activity towards S at the price p_S, which give a total of 2p_T + p_E + 6p_S (the left-hand side). The solution of the system is:
p_T = (31/32)·p_E; p_E = p_E; p_S = (9/8)·p_E.
Among the possible solutions we count:
p_E = 1000 ⇒ p_T = (31/32)·1000 = 968.75, p_S = (9/8)·1000 = 1125;
p_T = 1000 ⇒ p_E = 1032.3, p_S = 1161.3.
The system may be written as
[ 2/10  1/10  6/10 ][ p_T ]   [ p_T ]
[ 4/10  5/10  1/10 ][ p_E ] = [ p_E ].
[ 4/10  4/10  3/10 ][ p_S ]   [ p_S ]
The matrix E was obtained by dividing each column by its sum and is an example of an input–output matrix, or exchange matrix.

⁴ This section
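The equilibrium prices form an eigenvector of the exchange matrix for the eigenvalue 1 (which exists because the columns sum to 1); a numeric sketch:

```python
import numpy as np

E = np.array([[2, 1, 6],
              [4, 5, 1],
              [4, 4, 3]]) / 10.0     # each column of the activity table / 10

w, V = np.linalg.eig(E)
k = int(np.argmin(abs(w - 1.0)))     # locate the eigenvalue 1
p = V[:, k].real
p = p / p[1] * 1000.0                # normalize so that p_E = 1000

print(np.round(p, 2))                # p_T = 968.75, p_E = 1000.0, p_S = 1125.0
```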
[A positive matrix A is called productive when there is a vector v′ such that v′ > Av′, where "u > v" means that each coordinate of the vector u is strictly greater than the corresponding coordinate of the vector v.]

Proof. Let λ be an eigenvalue of A with eigenvector v. From Av = λv,
λv_i = Σ_{j=1}^{n} a_{ij}v_j ⇒ |λ|·|v_i| ≤ Σ_{j=1}^{n} a_{ij}|v_j|,
so, dividing and multiplying by the coordinates of v′,
|λ|·v′_i·(|v_i|/v′_i) ≤ Σ_{j=1}^{n} a_{ij}v′_j·(|v_j|/v′_j) ≤ max_j(|v_j|/v′_j)·Σ_{j=1}^{n} a_{ij}v′_j < max_j(|v_j|/v′_j)·v′_i, ∀i = 1, n;
if, in place of i, one chooses the index at which |v_i|/v′_i attains its maximum — call it r — the relation becomes
|λ|·v′_r·(|v_r|/v′_r) < (|v_r|/v′_r)·v′_r ⇒ |λ| < 1. □

Since all the eigenvalues of A have modulus < 1, the matrix I − A is invertible and Aᵐ⁺¹ → 0; the identity
(I − A)(I + A + A² + … + Aᵐ) = I − Aᵐ⁺¹
becomes, letting m → ∞,
(I − A)(I + A + A² + … + Aᵐ + …) = I,
so there exists
(I − A)⁻¹ = I + A + A² + … + Aᵐ + …,
and for (I − A)x = y, i.e. x = (I − A)⁻¹y, the following holds:
x = (I + A + A² + … + Aᵐ + …)·y = y + Ay + A²y + … + Aᵐy + …,
a relation interpretable as follows: in order to obtain a net final production y, the intermediate quantity Ay necessary for producing y must be produced; for it, the quantity A²y necessary for Ay must be produced, and so on. The total production x has been decomposed into final production y and intermediate productions Aᵐy, given by the matrices of intermediate consumptions Aᵐ.
The problem of characterizing productive matrices also admits a converse:

3.5.6. Theorem. If, for the positive square matrix A, the matrix (I − A)⁻¹ exists and is positive, then A is productive.

Proof.
(1) For an eigenvalue λ of A with eigenvector v, set p = |v| = (|v₁|, …, |v_n|)ᵀ; from
|λ|·|v_i| = |Σ_{j=1}^{n} a_{ij}v_j| ≤ Σ_{j=1}^{n} |a_{ij}|·|v_j|, ∀i = 1, n,
one obtains |λ|·p ≤ Ap componentwise, with strict inequality in at least one component k unless equality holds everywhere; comparing Ap with multiples of p and iterating (A²p, …) leads, by the positivity of the entries and of (I − A)⁻¹, to the required vector v′ with v′ > Av′ — otherwise 0 > Ap in some component, contradicting the existence of the index k.
(2) The general case is obtained by applying (1) to the matrix A + εU, with U the matrix of order n having all its elements equal to 1, after which one passes to the limit ε → 0. □
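The Neumann-series characterization (I − A)⁻¹ = I + A + A² + ⋯ can be illustrated numerically (the consumption matrix and the final-production vector are illustrative values of mine):

```python
import numpy as np

A = np.array([[0.2, 0.3],
              [0.1, 0.4]])        # intermediate-consumption matrix, rho(A) < 1
y = np.array([100.0, 200.0])      # net final production

x = np.linalg.solve(np.eye(2) - A, y)   # total production x = (I - A)^{-1} y

s, term = np.zeros(2), y.copy()
for _ in range(200):              # partial sums of y + Ay + A^2 y + ...
    s += term
    term = A @ term

print(np.allclose(s, x))                          # True
print(np.all(np.linalg.inv(np.eye(2) - A) > 0))   # True: (I - A)^{-1} is positive
```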
[Complex vectors] If z = (z₁, …, z_n) ∈ Cⁿ, then z̄ = (z̄₁, …, z̄_n).
• If u, v ∈ Cⁿ, then conj(u + v) = ū + v̄;
• If α ∈ C and z ∈ Cⁿ, then conj(αz) = ᾱ·z̄;
• ⟨u + v, w⟩ = ⟨u, w⟩ + ⟨v, w⟩;
• ⟨u, v + w⟩ = ⟨u, v⟩ + ⟨u, w⟩;
• ⟨αu, v⟩ = α·⟨u, v⟩;
• ⟨u, αv⟩ = ᾱ·⟨u, v⟩;
• ⟨u, v⟩ = conj(⟨v, u⟩);
• ‖u‖² = ⟨u, u⟩ = Σ_{i=1}^{n} u_i·ū_i ∈ R₊ and ‖u‖ = 0 ⇔ u = 0.
CHAPTER 4

Affine Spaces

[26], [38]

4.1. Definitions

4.1.1. Definition. Let (V, K) be a vector space. The set A is called an affine space over V if there is a function (denoted additively) + : A × V → A such that:
(1) P + 0 = P;
(2) P + (v + w) = (P + v) + w;
(3) ∀P, Q ∈ A, ∃! v_{P,Q} ∈ V with P + v_{P,Q} = Q [the vector v_{P,Q} is denoted →PQ].
The structure is denoted (A, V, K). An affine space is a transitive action of the additive group of the vector space on the set A. The dimension of the affine space is, by definition, the dimension of the attached vector space.

Remarks:
• The sign "+" may have different meanings: v + w — the addition of two vectors of V; P + v — the function which defines the affine structure [the addition of a point and a position vector].
• In the context of groups, the functions which satisfy 1. and 2. are called "actions", and those which also satisfy 3. are called "transitive actions".
• The map A ∋ Q ↦ →PQ ∈ V is a bijection.
• The standard affine space attached to a vector space is A = V with vector addition as the affine mapping:
— the set V has elements which, as a vector space, are vectors and which, as an affine space, are points;
— the vector 0 is the origin of the affine space;
— any point P may be seen as the "position vector" →OP, with O = 0;
— the relation P + →PQ = Q may be seen as →OP + →PQ = →OQ;
— for each point P, the set of all vectors with origin in P is a vector space.
4.1.2. Definition. [Affine subspace] [linear subvarieties] [linear varieties] If (V₀, K) is a vector subspace of (V, K) and B … The linear variety generated by the points P₁, …, P_n, denoted ⟨P₁, …, P_n⟩, is the smallest linear variety containing them.

The dimension of the linear subspace ⟨→P₁P₂, …, →P₁P_r⟩ is the dimension of the linear variety. When the dimension of ⟨→P₁P₂, …, →P₁P_r⟩ is maximal, which means r − 1, the set of points is called affinely independent.

4.1.4. Definition. [The sum of two linear varieties] L₁ + L₂ is the smallest linear variety containing both varieties.

4.1.5. Definition. Two linear varieties P + [V₁] and Q + [V₂] are called "parallel" when V₁ ⊆ V₂ or V₂ ⊆ V₁.

[Affine frame] A frame R = {P; (e₁, …, e_n)} consists of a point and a basis; the coordinates (q₁, …, q_n) of a point Q, with q₁, …, q_n ∈ K, are given by →PQ = Σ_{i=1}^{n} q_i e_i.

4.1.8. Definition. Barycenter: Given r points P₁, P₂, …, P_r, the barycenter G is: G = P₁ + (1/r)(→P₁P₁ + →P₁P₂ + … + →P₁P_r).
For two points, G is the midpoint G = P₁ + ½(→P₁P₁ + →P₁P₂).

4.1.9. Definition. Collinear points: P, Q, R are collinear when the linear variety generated by them, ⟨P, Q, R⟩, has dimension 1 (it is a straight line). The linear subspace ⟨→PQ, →PR⟩ has dimension 1.

4.1.10. Definition. Simple ratio: Consider three collinear points A, B, C ∈ A. The simple ratio (A; B; C) is the unique scalar λ such that →AB = λ·→AC.
P = (P + v) + (−v) = (Q + v) + (−v) = Q + (v − v) = Q + 0 = Q.
• →PQ = 0 ⇔ P = Q [the null vector is any vector which has the same origin and endpoint].
Proof: From 3., if P = Q then P + →PP = P, and since →PP is unique and 0 also satisfies the relation, →PP = 0.
If →PQ = 0, then P + 0 = Q and, from 1., P = Q.
• →PQ = −→QP [the negative vector means the vector with the opposed direction].
Proof: since P + →PQ = Q and Q + →QP = P, it follows that P + (→PQ + →QP) = P, which means that →PQ + →QP = 0.
• →PQ + →QR = →PR [Chasles' identity] [vector addition satisfies the parallelogram law] [Axiom [A 1] in [26], page 98].
Proof: P + →PQ = Q, Q + →QR = R, P + →PR = R ⇒ P + (→PQ + →QR) = Q + →QR = R ⇒ P + (→PQ + →QR) = R ⇒ →PQ + →QR = →PR.
• ∀P ∈ A, ∀v ∈ V, ∃! Q ∈ A, →PQ = v [surjectivity].
Proof: For Q = P + v, we have →PQ = v.
• →PQ = →PR ⇒ Q = R [injectivity].
Proof: P + →PQ = Q, P + →PR = R; if →PQ = →PR then P + →PQ = P + →PR ⇒ Q = R.
The previous surjectivity and injectivity of the mapping Q ↦ →PQ are the axiom [A 2] in [26], page 98.
• →PQ = →RS ⇒ →PR = →QS [parallels between parallels are equal].
Proof:
P + →PQ = Q,
R + →RS = S,
P + →PR = R,
Q + →QS = S;
→PR = →PQ + →QR = →QR + →RS = →QS.
• For P₁, …, P_n ∈ A, ⟨P₁, …, P_n⟩ = P₁ + ⟨→P₁P₂, …, →P₁P_r⟩.
Proof: P₁ + ⟨→P₁P₂, …, →P₁P_r⟩ is a linear variety which contains all the points P_k and which is contained in every linear variety containing them; its dimension is maximal (r − 1) exactly when the points are affinely independent.
• (P + [V₁]) + (Q + [V₂]) = P + [V₁ + V₂ + ⟨→PQ⟩].
Proof:
P: [Grassmann Formulas] Consider L₁ = P + [V₁], L₂ = Q + [V₂].
If L₁ ∩ L₂ ≠ ∅, then dim(L₁ + L₂) = dim L₁ + dim L₂ − dim(L₁ ∩ L₂);
if L₁ ∩ L₂ = ∅, then dim(L₁ + L₂) = dim V₁ + dim V₂ − dim(V₁ ∩ V₂) + 1.
Proof:
P: If L₁ ∥ L₂ and L₁ ∩ L₂ ≠ ∅, then L₁ ⊆ L₂ or L₂ ⊆ L₁.
Proof:
P: [Parallels between parallels are equal] Consider two parallel straight lines r and s cut by two parallel straight lines r′ and s′. Denote their intersections A = r′ ∩ s, B = s′ ∩ s, C = r ∩ r′, D = r ∩ s′. Then →AB = →CD and →AC = →BD.
Proof:
[Change of frame] Consider two frames R = {P; (e₁, …, e_n)} and R′ = {P′; (f₁, …, f_n)}, with
→PX = Σ_{i=1}^{n} x_i e_i, →P′X = Σ_{j=1}^{n} y_j f_j, →PP′ = Σ_{i=1}^{n} p_i e_i, f_j = Σ_{i=1}^{n} a_{ij} e_i.
Then
→P′X = Σ_{j=1}^{n} y_j f_j = Σ_{j=1}^{n} y_j·Σ_{i=1}^{n} a_{ij} e_i = Σ_{i=1}^{n} (Σ_{j=1}^{n} a_{ij} y_j)·e_i,
and →PX = →PP′ + →P′X gives
Σ_{i=1}^{n} (x_i − p_i)·e_i = Σ_{i=1}^{n} (Σ_{j=1}^{n} a_{ij} y_j)·e_i ⇒ ∀i = 1, n, x_i − p_i = Σ_{j=1}^{n} a_{ij} y_j,
that is,
[ x₁ ]   [ a₁₁ … a₁n ][ y₁ ]   [ p₁ ]
[  ⋮ ] = [  ⋮     ⋮  ][  ⋮ ] + [  ⋮ ]
[ x_n ]   [ a_n1 … a_nn ][ y_n ]   [ p_n ]
or, compactly, [X]_R = A·[X]_{R′} + [P′]_R.
[Parametric equations of a linear variety] Let L = Q + [V₀], with Q = (q₁, …, q_n) in the frame R = {P; (e₁, …, e_n)}, (v₁, …, v_r) a basis of V₀, and v_j = Σ_{i=1}^{n} a_{ij} e_i, so that
[v_j]_R = (a_{1j}, …, a_{nj})ᵀ, j = 1, r.
We have X = (x₁, …, x_n) ∈ L ⇔ ∃λ_j ∈ K s.t.
[X]_R = [Q]_R + Σ_{j=1}^{r} λ_j [v_j]_R, i.e. x_i = q_i + Σ_{j=1}^{r} λ_j a_{ij}, ∀i = 1, n.

[The straight line through two points A, B] With [A]_R = (a₁, …, a_n)ᵀ and [B]_R = (b₁, …, b_n)ᵀ: for λ = 1, Q = A and [B]_R = [A]_R + [v]_R ⇒ [v]_R = [B]_R − [A]_R. So, for the line [X]_R = [A]_R + λ([B]_R − [A]_R), we have:
{[X]_R = [A]_R + λ([B]_R − [A]_R); λ ∈ [0, 1]} — the segment;
{[X]_R = [A]_R + λ([B]_R − [A]_R); λ ∈ R} — the whole line;
{[X]_R = [A]_R + λ([B]_R − [A]_R); λ ∈ [0, 1)} — the segment without its endpoint;
{[X]_R = [A]_R + λ([B]_R − [A]_R); λ ∈ (−∞, 1]} — a ray.

Example: the line
x = 2λ + 1, y = λ + 2, z = −λ + 1
may also be written, eliminating λ, as
x = 3 − 2z, y = 3 − z, z = z.
Barycenter: Given r points P₁, P₂, …, P_r, the barycenter G is:
G = P₁ + (1/r)(→P₁P₁ + →P₁P₂ + … + →P₁P_r).
Remark: (1/r)(→P₁P₂ + … + →P₁P_r) = →P₁G ⇒ →P₁P₂ + … + →P₁P_r = r·→P₁G ⇒
⇒ →P₁G + (→P₁G − →P₁P₂) + … + (→P₁G − →P₁P_r) = 0 ⇒
⇒ →P₁G + (→P₂P₁ + →P₁G) + … + (→P_rP₁ + →P₁G) = 0 ⇒
⇒ →P₁G + →P₂G + … + →P_rG = 0, or →GP₁ + →GP₂ + … + →GP_r = 0.

P: The barycenter is the unique point X such that →XP₁ + →XP₂ + … + →XP_r = 0.
Proof:
1. G satisfies the relation:
G + →GP_j = P_j; for j = 2, r, →GP_j = →GP₁ + →P₁P_j, so
→GP₁ + →GP₂ + … + →GP_r = →GP₁ + (→GP₁ + →P₁P₂) + … + (→GP₁ + →P₁P_r) = r·→GP₁ + (→P₁P₂ + … + →P₁P_r) = r·→GP₁ + r·→P₁G = 0.
2. G is unique:
If →P₁G + →P₂G + … + →P_rG = 0 and →XP₁ + →XP₂ + … + →XP_r = 0, then, by adding them:
(→XP₁ + →P₁G) + (→XP₂ + →P₂G) + … + (→XP_r + →P_rG) = 0 ⇒ r·→XG = 0 ⇒ →XG = 0.
Remark: we also have G = P_i + (1/r)(→P_iP₁ + … + →P_iP_r).
For two points, G is the midpoint G = P₁ + ½(→P₁P₁ + →P₁P₂).
If [P_j]_R = (p_{1j}, …, p_{nj})ᵀ, then [G]_R = (g₁, …, g_n)ᵀ, with g_i = (1/r)·Σ_{j=1}^{r} p_{ij}.
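In coordinates, the barycenter is just the average of the points' coordinate columns; a short sketch (the points are illustrative):

```python
import numpy as np

# three points in the plane
P = np.array([[0.0, 0.0],
              [6.0, 0.0],
              [0.0, 3.0]])

G = P.mean(axis=0)            # g_i = (1/r) * sum_j p_ij
print(G)                      # [2. 1.]

# G is the unique point with XP1 + XP2 + ... + XPr = 0
print(np.allclose((P - G).sum(axis=0), 0))   # True
```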
Remark: Consider three collinear points A, B, C ∈ A. Then:
→AB = (A; B; C)·→AC and B = A + →AB = A + (A; B; C)·→AC.
If (A; B; C) = λ, then:
(A; C; B) = 1/λ;
(B; A; C) = λ/(λ − 1);
(B; C; A) = (λ − 1)/λ;
(C; A; B) = 1/(1 − λ);
(C; B; A) = 1 − λ.
The line segment AB = {X ∈ A; X = A + λ·→AB; λ ∈ [0, 1]}.
C ∈ AB ⇔ 0 < (A; C; B) < 1.
C ∈ AB ⇔ (C; A; B) < 0.
P: Consider three non-collinear points A, B, C ∈ A, and G their barycenter (centroid/geometric center). Then the straight line joining A and G meets BC in the midpoint A′ of the points B, C. Moreover, (A; G; A′) = 2/3. [AA′ is called a "median"]
Proof:
Thales' Theorem: Consider three parallel straight lines r, s, t that meet two concurrent straight lines in A, A′ (on r), in B, B′ (on s), in C, C′ (on t). Then (A; B; C) = (A′; B′; C′).
Proof:
Menelaus' Theorem: If a straight line meets the sides of a triangle ABC at P, Q, R, respectively, then (P; A; B)·(Q; B; C)·(R; C; A) = 1.
Proof:
Ceva's Theorem: Consider a triangle ABC and three points on its edges, P_A ∈ [BC], P_B ∈ [AC], P_C ∈ [AB]. The straight lines AP_A, BP_B, CP_C are concurrent in a point P ⇔ (P_A; B; C)·(P_B; C; A)·(P_C; A; B) = −1.
Proof:
Pappus' Theorem:
Proof:
Desargues' Theorem:
Proof:
? Routh's Theorem:
Proof:
? van Aubel's Theorem:
Proof:
Bib:
Part 2
CHAPTER 5
Geogebra
CHAPTER 6
CARMetal
The software product CaRMetal, version 3.7.5, available at http://db-maths.nuxit.net/CaRMetal/, was used to generate some of the visualizations included in the text.
Part 3
Appendices
Reviews
Binary Logic
Binary mathematical logic deals with operations on logical statements and with the evaluation of their truth value; only logical statements with one truth value out of two possible ones are considered. Although this convention is restrictive, it is not the purpose of the present exposition to include other situations, such as the logical statements "I am lying" or "This statement is false". The presentation below refers only to those statements to which a truth value can be attached. Commands (imperative statements), questions and exclamations are not included, only declarative statements.
(1) LOGICAL NEGATION (NOT):
p  ¬p
0  1
1  0
Example: p: "Ioana has a house" ⇒ ¬p: "Ioana does not have a house".
(2) LOGICAL CONJUNCTION (AND):
p  q  p ∧ q
0  0  0
0  1  0
1  0  0
1  1  1
Example: p: "Ioana has a house", q: "Ioana has house insurance" ⇒ p ∧ q: "Ioana has a house and house insurance"; p: "2 is a natural number", q: "2 is an integer" ⇒ p ∧ q: "2 is both a natural number and an integer"; in formal language: (2 ∈ N) ∧ (2 ∈ Z) ⇒ 2 ∈ N ∩ Z (2 belongs to the intersection of the sets).
(3) LOGICAL DISJUNCTION (OR) (INCLUSIVE OR):
p  q  p ∨ q
0  0  0
0  1  1
1  0  1
1  1  1
Example: p: "Ioana has a house", q: "Ioana has a car" ⇒ p ∨ q: "Ioana has a house or a car".
(4) EXCLUSIVE OR:
p  q  p ⊻ q
0  0  0
0  1  1
1  0  1
1  1  0
Example: p: "Ioana has a house", q: "Ioana has a car" ⇒ p ⊻ q: "Ioana has either a house or a car (but not both)".
(5) LOGICAL IMPLICATION (If–Then):
(a) From truth, only truth follows;
(b) From falsehood, anything follows.
From the definition, the following truth table for the logical implication results:
p  q  p → q
0  0  1
0  1  1
1  0  0
1  1  1
(6) LOGICAL EQUIVALENCE (If–and–only–if):
p  q  p ↔ q
0  0  1
0  1  0
1  0  0
1  1  1
Logical equivalence can be defined using the other connectives: p ↔ q ≡ (p → q) ∧ (q → p).
(2) p ∨ q ≡ q ∨ p [commutativity];
(3) p ∨ (q ∨ r) ≡ (p ∨ q) ∨ r [associativity];
(4) p ∨ 1 ≡ 1;
(5) p ∨ 0 ≡ p;
(6) p ∨ p ≡ p;
(7) p ∧ q ≡ q ∧ p;
(8) p ∧ p ≡ p;
(9) p ∧ (q ∧ r) ≡ (p ∧ q) ∧ r;
(10) p ∧ 0 ≡ 0;
(11) p ∧ 1 ≡ p;
(12) ¬(p ∨ q) ≡ ¬p ∧ ¬q [De Morgan]; translated: "On vacation I go neither to the sea nor to the mountains";
(13) ¬(p ∧ q) ≡ ¬p ∨ ¬q;
(14) p ∧ (q ∨ r) ≡ (p ∧ q) ∨ (p ∧ r). Proof: truth table;
(15) p ∨ (q ∧ r) ≡ (p ∨ q) ∧ (p ∨ r). Proof: truth table.
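Truth-table proofs of this kind can be carried out mechanically; the following supplementary sketch (not part of the notes) enumerates all truth assignments and checks De Morgan's laws and the two distributivity laws:

```python
# Truth-table verification of identities (12)-(15) over all 0/1 assignments.
from itertools import product

for p, q, r in product([False, True], repeat=3):
    assert (not (p or q)) == ((not p) and (not q))       # (12) De Morgan
    assert (not (p and q)) == ((not p) or (not q))       # (13) De Morgan
    assert (p and (q or r)) == ((p and q) or (p and r))  # (14) distributivity
    assert (p or (q and r)) == ((p or q) and (p or r))   # (15) distributivity
```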
Theorem. [Reduction of exclusive or to elementary operations]
p ⊻ q ≡ (p ∨ q) ∧ ¬(p ∧ q) ≡ (p ∧ ¬q) ∨ (q ∧ ¬p).
Proof. Truth table.
7.0.18. Theorem. [Reduction of equivalence to implication]
p ↔ q ≡ (p → q) ∧ (q → p).
Proof. Truth table.
7.0.19. Theorem. (Reduction of the logical implication to elementary operations)
p → q ≡ ¬p ∨ q.
Consequently, the negation of an implication is ¬(p → q) ≡ ¬(¬p ∨ q) ≡ ¬(¬p) ∧ ¬q ≡ p ∧ ¬q.
Example: the negation of "If I am hungry, I eat" is "I am hungry and I do not eat".
A frequently encountered answer to the question "How does one negate 'If I am hungry, I eat'?" is r: "If I am not hungry, I do not eat". It can be seen from the truth table
p  q  p → q
0  0  1
0  1  1
1  0  0
1  1  1
that r is not the negation. It can also be seen that r ≡ (¬p) → (¬q) ≡ ¬(¬p) ∨ (¬q) ≡ p ∨ ¬q ≡ q → p, so r says, in fact, the same thing as q → p. On the other hand, p → q ≡ ¬p ∨ q ≡ q ∨ ¬p ≡ ¬(¬q) ∨ ¬p ≡ (¬q) → (¬p): an implication is equivalent to its contrapositive (¬q) → (¬p).
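The facts above admit the same mechanical verification; a supplementary sketch (not part of the notes):

```python
# Truth-table check: the negation of p -> q is p AND (not q); the contrapositive
# (not q) -> (not p) is equivalent to p -> q; while (not p) -> (not q) is q -> p.
from itertools import product

def imp(p, q):             # p -> q written as (not p) or q (Theorem 7.0.19)
    return (not p) or q

for p, q in product([False, True], repeat=2):
    assert (not imp(p, q)) == (p and not q)   # negation of the implication
    assert imp(p, q) == imp(not q, not p)     # contrapositive
    assert imp(not p, not q) == imp(q, p)     # r says the same as q -> p
```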
¬(∃x, p(x)) ≡ ∀x, ¬p(x);
¬(∀x, p(x)) ≡ ∃x, ¬p(x).
7.1.5. Example. The negation of the logical statement "All men are mortal" is: "There exist immortal men", because the structure is ¬(∀x, p(x)) ≡ ∃x, ¬p(x).
Union: A ∪ B = {x ∈ Ω; x ∈ A ∨ x ∈ B}
Intersection: A ∩ B = {x ∈ Ω; x ∈ A ∧ x ∈ B}
Complement: C_Ω A = {x ∈ Ω; x ∉ A}
Inclusion: A ⊆ B ⟺ (∀x ∈ A, x ∈ B)
Equality: A = B ⟺ (A ⊆ B ∧ B ⊆ A)
Power set: P(Ω) = {A; A ⊆ Ω}
[Ω denotes the ambient (universe) set.]
(13) A ∪ (A ∩ B) = A;
(14) A ∩ (A ∪ B) = A;
(15) A ∪ CA = Ω;
(16) A ∩ CA = ∅;
(17) C(CA) = A;
(18) C(A ∪ B) = CA ∩ CB; C(∪_{i∈I} Ai) = ∩_{i∈I} CAi;
(19) C(A ∩ B) = CA ∪ CB; C(∩_{i∈I} Ai) = ∪_{i∈I} CAi;
(5) A × (B ∪ C) = (A × B) ∪ (A × C);
(6) A × (B ∩ C) = (A × B) ∩ (A × C);
(7) A × (B \ C) = (A × B) \ (A × C);
7.2.2. Relations. A relation is a triple R = (X, Y, G_R), with G_R ⊆ X × Y. X is called the domain of definition of the relation, Y is called the codomain of the relation, and G_R is called the graph of the relation. The set
D_R = {x ∈ X; ∃y ∈ Y, (x, y) ∈ G_R}
[is called the domain of the relation].
The inverse relation is R^{-1} = (Y, X, G_{R^{-1}}), defined by G_{R^{-1}} = {(y, x); (x, y) ∈ G_R}; the diagonal relation is Δ = (X, X, G_Δ), defined by G_Δ = {(x, x); x ∈ X}.
7.2.7. Definition. Let X, Y, Z be three sets and the relations R1 = (X, Y, G_{R1}), R2 = (Y, Z, G_{R2}). The relation R = (X, Z, G_R) defined by:
G_R = {(x, z); x ∈ X, z ∈ Z and ∃y ∈ Y such that (x, y) ∈ G_{R1} and (y, z) ∈ G_{R2}}
is called the composition of the relations R1 and R2 and is denoted by R2 ∘ R1 (R2 ∘ R1 = R).
7.2.8. Remark. The operation of composition of relations is associative but not commutative.
7.2.9. Definition. R = (X, X, G_R) is called a preorder relation if it has the properties:
(1) Reflexivity: G_Δ ⊆ G_R (∀x ∈ X, (x, x) ∈ G_R);
(2) Transitivity: (x, y) ∈ G_R and (y, z) ∈ G_R ⇒ (x, z) ∈ G_R.
[Further properties: symmetry, G_{R^{-1}} = G_R; totality, G_R ∪ G_{R^{-1}} = X × X.]
Transitivity: (x, y) ∈ G_R and (y, z) ∈ G_R ⇒ (x, z) ∈ G_R;
Antisymmetry: (x, y) ∈ G_R and (y, x) ∈ G_R ⇒ x = y.
For A ⊆ X, the set f(A) = {f(x); x ∈ A} [is called the image of A through f( )]; for B ⊆ Y, the set f^{-1}(B) = {x ∈ X; f(x) ∈ B} [is called the preimage of B through f( )].
(1) ∀A ∈ P(X), ∀B ∈ P(Y): f(A) ⊆ B ⟺ A ⊆ f^{-1}(B);
(2) ∀A ∈ P(X): A ⊆ f^{-1}(f(A)); ∀B ∈ P(Y): f(f^{-1}(B)) ⊆ B;
(3) ∀A ∈ P(X), ∀B ∈ P(Y): f(A ∩ f^{-1}(B)) = f(A) ∩ B;
(4) ∀(Bi)_{i∈I} ⊆ P(Y): f^{-1}(∪_{i∈I} Bi) = ∪_{i∈I} f^{-1}(Bi);
(5) ∀(Bi)_{i∈I} ⊆ P(Y): f^{-1}(∩_{i∈I} Bi) = ∩_{i∈I} f^{-1}(Bi);
(6) ∀(Ai)_{i∈I} ⊆ P(X): f(∪_{i∈I} Ai) = ∪_{i∈I} f(Ai);
(7) ∀(Ai)_{i∈I} ⊆ P(X): f(∩_{i∈I} Ai) ⊆ ∩_{i∈I} f(Ai);
(8) ∀B ∈ P(Y): f^{-1}(CB) = C f^{-1}(B).
Proof. Exercise.
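As a supplementary illustration (not part of the notes), several of these properties can be checked on a small concrete function; the sets X, Y and the function f below are arbitrary choices:

```python
# Image/preimage identities on a small concrete example.
X = {1, 2, 3, 4}
Y = {"a", "b", "c"}
f = {1: "a", 2: "a", 3: "b", 4: "b"}     # a non-surjective, non-injective map

image = lambda A: {f[x] for x in A}
preimage = lambda B: {x for x in X if f[x] in B}

A, B = {1, 3}, {"a", "c"}
assert (image(A) <= B) == (A <= preimage(B))    # (1)
assert A <= preimage(image(A))                  # (2)
assert image(A & preimage(B)) == image(A) & B   # (3)
assert preimage(Y - B) == X - preimage(B)       # (8)
```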
The difference (f − g)( ): D → R is defined by (f − g)(x) := f(x) − g(x), ∀x ∈ D. The quotient function h( ): D1 → R is defined by h(x) := f(x)/g(x), ∀x ∈ D1, on the set D1 = {x ∈ D | g(x) ≠ 0}.
The existence of an order structure on the common codomain of the functions allows the extension of the order relation to functions: if f( ), g( ): D → R are two functions with real codomain defined on the same set D, then one says that f( ) ≤ g( ) if f(x) ≤ g(x) holds ∀x ∈ D. An order relation between functions is obtained, which loses some of the initial characteristics of the relation between elements (the new relation is no longer total, in the sense that two functions may fail to be comparable, even though the elements of the codomain are all pairwise comparable). The notions of maximum, minimum and absolute value also extend to functions:
h( ): D → R, h(x) := max(f(x), g(x)) (the maximum of two functions); k( ): D → R, k(x) := min(f(x), g(x)) (the minimum of two functions); |f|( ): D → R, |f|(x) := |f(x)| (the absolute value of a function).
7.3. Usual Sets of Numbers
Set  Definition                                  Name
N    {1, 2, ..., n, ...}                         naturals
Z    {0, 1, −1, 2, −2, ..., n, −n, ...}          integers
Q    {a/b; a ∈ Z, b ∈ N*}                        rationals
R    will be defined later                       reals
C    {a + bi; a, b ∈ R}                          complexes
N ⊊ Z ⊊ Q ⊊ R ⊊ C. For example, −3 ∈ Z \ N, 3/2 ∈ Q \ Z, √2 ∈ R \ Q.
Proof [that √2 ∉ Q]: Suppose, by contradiction, that √2 ∈ Q. Then there exist a, b ∈ N such that √2 = a/b ⇒ 2b² = a² ⇒ a is a multiple of 2 ⇒ a = 2k ⇒ 2b² = 4k² ⇒ b² = 2k² ⇒ b is a multiple of 2 ⇒ the fraction a/b can be simplified by 2. So, if √2 were rational, then a/b would be a fraction that can be simplified by 2 any number of times, which is a contradiction.
Decimal representations:
with a finite number of decimals: 1.2;
with an infinite number of decimals: simple periodic, 0.(3); mixed periodic, 0.2(3); non-periodic.
Let a ∈ {0, 1, ..., 9}. Then the number x = 0.(a) = a/9.
Proof:
x = 0.(a) = 0.aaaa... = (1/10)·(a.(a)) = (1/10)·(a + 0.(a)) = (1/10)·(a + x) ⇒ 10x = a + x ⇒ 9x = a ⇒ x = a/9.
Analogously, 0.(a1...ak) = (a1...ak)/(10^k − 1) = (a1...ak)/(99...9) (k nines).
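These identities can be confirmed with exact rational arithmetic; a supplementary sketch (not part of the notes):

```python
# Periodic decimals as fractions: 0.(a) = a/9 and 0.(a1...ak) = a1...ak/(10^k - 1).
from fractions import Fraction

# 0.(3) = 3/9 = 1/3
assert Fraction(3, 9) == Fraction(1, 3)

# the period of 1/7 is 142857, an arbitrary example of the general rule
digits = "142857"
assert Fraction(int(digits), 10 ** len(digits) - 1) == Fraction(1, 7)

# the mixed periodic number 0.2(3) = (23 - 2)/90 = 7/30
assert Fraction(23 - 2, 90) == Fraction(7, 30)
```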
π = 3.1415926535897932384626433832795028841971693993751058209749445923078164062862089986280348...
e = 2.71828182845904523536028747135266249775724709369995957496696762772407663035354759457138217...
ln 2 = 0.69314718055994530941723212145817656807550013436025525412068000949339362196969471560586...
7.3.2. The Set of Real Numbers.
On the set of rational numbers one considers the set of Cauchy sequences. Two rational Cauchy sequences (pn)_{n∈N} and (qn)_{n∈N} are equivalent if the difference sequence (pn − qn)_{n∈N} converges to 0. A real number is an equivalence class formed by all the Cauchy sequences of rational numbers equivalent to a fixed Cauchy sequence. The details of this construction are part of the Analysis course.
The structure (R, +) is an abelian group with null element 0.
The structure (Z, +) is a subgroup of the structure (R, +);
The structure (Z, +) is a cyclic group, in the sense that it is generated by a single element, the element 1 (or −1).
The structure (Z, +) is the smallest subgroup of (R, +) which contains 1.
7.3.3. The Set of Complex Numbers. The information and the results in this subsection are summarized from [2]¹ and from [22].
Consider the set R² together with the operations:
(x1, y1) +c (x2, y2) = (x1 + x2, y1 + y2),
(x1, y1) ·c (x2, y2) = (x1·x2 − y1·y2, x1·y2 + x2·y1).
¹The work may be consulted both for details of proofs and for examples of high-school exercises (including at olympiad level).
[a distinction is made between the operations of addition and multiplication on R, denoted + and ·, and the new operations, denoted +c and ·c]
(1) Two elements (x1, y1) and (x2, y2) are equal ⟺ x1 = x2 and y1 = y2.
(2) The addition +c:
(a) is commutative,
(b) is associative,
(c) has the element (0, 0) as neutral element,
(d) every element (x, y) has an inverse with respect to addition (an opposite), which is the element (−x, −y).
(3) The multiplication ·c:
(a) is commutative,
(b) is associative,
(c) has the element (1, 0) as neutral element,
(d) every element z = (x, y) ≠ (0, 0) has an inverse with respect to multiplication, which is the element z^{−1} = (x/(x² + y²), −y/(x² + y²)).
(4) The multiplication is distributive with respect to the addition.
(5) The structure C = (R², +c, ·c) is a commutative field (it is identified as "the field of complex numbers").
(6) The set R × {0} is closed under the addition +c and under the multiplication ·c. The structure (R × {0}, +c, ·c) [is itself a field].
(7) Consider the structures (R, +, ·) (the usual structure of the real numbers) and (R × {0}, +c, ·c). The function f( ): R → R × {0} defined by f(x) = (x, 0) is bijective and is a field morphism (so f(x) ·c f(y) = f(x·y)).
²It seems that the notation i for √(−1) was first used by the Swiss mathematician Leonhard Euler in 1777.
(11) A complex number may be written (x, y) = (x, 0) +c (0, 1) ·c (y, 0) = x + i·y (the algebraic form of the complex number):
(a) z = (x, y) = x + iy;
(b) The element x is called "the real part of the complex number z" [x = Re(z)];
(c) The element y is called "the imaginary part of the complex number z" [y = Im(z)].
(12) Properties of the imaginary unit i:
(a) i⁰ = 1, i¹ = i;
(b) (0, 1) ·c (0, 1) = (0·0 − 1·1, 0·1 + 1·0) = (−1, 0) ⇒ i² = −1;
(c) i^{4n} = 1, i^{4n+1} = i, i^{4n+2} = −1, i^{4n+3} = −i, i^{−n} = (−i)^n.
(13) Properties of the integer powers of complex numbers:
(a) for z = 0 and n ∈ N*, 0^n = 0;
(b) for z ≠ 0: z⁰ = 1, z¹ = z; for n ∈ N*, z^n = z · ... · z (n times), z^{−1} = 1/z, z^{−n} = (z^{−1})^n;
(c) ∀n, m ∈ Z: z^n · z^m = z^{n+m}, z^n / z^m = z^{n−m}, (z^n)^m = z^{nm}, (z1·z2)^n = z1^n · z2^n, (z1/z2)^n = z1^n / z2^n.
The number z̄ = a − bi is called the conjugate of z = a + bi. Properties:
(1) (z̄)‾ = z;
(6) ∀z ≠ 0, (z^{−1})‾ = (z̄)^{−1};
(7) ∀z2 ≠ 0, (z1/z2)‾ = z̄1/z̄2;
(8) Re(z) = (z + z̄)/2, Im(z) = (z − z̄)/(2i).
The modulus: |z| = √(a² + b²) ≤ |Re(z)| + |Im(z)| [in the left-hand side the modulus is the complex one, while in the right-hand side the moduli are real];
(9) |z1 + z2| ≤ |z1| + |z2|;
(10) ||z1| − |z2|| ≤ |z1 − z2|.
Consider the function arctan( ): R → (−π/2, π/2); the function arg( ): C \ {0} → (−π, π] is defined as follows:
arg(a + bi) =
  arctan(b/a),       a > 0, b ∈ R;
  arctan(b/a) + π,   a < 0, b > 0;
  arctan(b/a) − π,   a < 0, b < 0;
  π/2,               a = 0, b > 0;
  −π/2,              a = 0, b < 0;
  π,                 a < 0, b = 0.
(8) The modulus r = |z| and the argument φ = arg(z) of the complex number (the pair (r, φ)) are called the polar coordinates of the complex number:
  a = r cos φ,
  b = r sin φ.
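The piecewise definition agrees with the two-argument arctangent; a supplementary sketch (not part of the notes) checking one sample point in each case:

```python
# The piecewise arg above versus math.atan2 (both use the convention (-pi, pi]).
import math

def arg(a, b):
    if a > 0:
        return math.atan(b / a)
    if a < 0 and b > 0:
        return math.atan(b / a) + math.pi
    if a < 0 and b < 0:
        return math.atan(b / a) - math.pi
    if a == 0 and b > 0:
        return math.pi / 2
    if a == 0 and b < 0:
        return -math.pi / 2
    if a < 0 and b == 0:
        return math.pi
    raise ValueError("arg(0) is undefined")

for a, b in [(1, 1), (1, -2), (-1, 1), (-1, -1), (0, 3), (0, -3), (-2, 0)]:
    assert math.isclose(arg(a, b), math.atan2(b, a))
```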
(b) [De Moivre's formula]: z = r(cos φ + i sin φ) ⇒ z^n = r^n (cos nφ + i sin nφ) [φ ∈ (−π, π]];
(c) z1 = r1(cos φ1 + i sin φ1), z2 = r2(cos φ2 + i sin φ2) ⇒ z1/z2 = (r1/r2)(cos(φ1 − φ2) + i sin(φ1 − φ2)).
(11) Complex radicals:
(a) [The complex root of order n of a complex number]: z = r(cos φ + i sin φ) ⇒ ⁿ√z = ⁿ√r (cos((φ + 2kπ)/n) + i sin((φ + 2kπ)/n)), k = 0, ..., n − 1;
(b) [The complex roots of order n of unity]: ⁿ√1 = cos(2kπ/n) + i sin(2kπ/n), k = 0, ..., n − 1;
(c) [The complex root of order 2 of a complex number]: z = a + bi ⇒ √z = ±(√((|z| + Re(z))/2) + i·sgn(b)·√((|z| − Re(z))/2)).
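De Moivre's formula and the n-th roots formula can be verified numerically; a supplementary sketch (not part of the notes), with arbitrarily chosen r, φ and n:

```python
# Numeric check of De Moivre's formula and of the n-th roots of a complex number.
import cmath, math

r, phi, n = 2.0, 0.7, 5                 # arbitrary modulus, argument and order
z = r * (math.cos(phi) + 1j * math.sin(phi))

# De Moivre: z^n = r^n (cos(n*phi) + i sin(n*phi))
assert cmath.isclose(z ** n, r ** n * (math.cos(n * phi) + 1j * math.sin(n * phi)))

# each of the n numbers below is an n-th root of z
for k in range(n):
    w = r ** (1 / n) * cmath.exp(1j * (phi + 2 * k * math.pi) / n)
    assert cmath.isclose(w ** n, z)
```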
The set Z is countable: Z = {0, 1, −1, 2, −2, ..., n, −n, ...} (the negative integers are alternated with the positive ones).
The set Q is countable: a suitable counting procedure is Cantor's:
It is clear that if the suitable counting of the positive rational numbers succeeds, the procedure can also be used for the suitable counting of the negative rational numbers and, by using the procedure for counting the integers (alternating the positive numbers with the negative ones), a suitable counting of the set Q is obtained.
Cantor's counting procedure (for the positive rational numbers):
One arranges the positive rational numbers in a table in which row i is occupied by the fractions with numerator i, and column j by the fractions with denominator j; an array is obtained with an infinite number of rows and columns, but with finite secondary diagonals.
1/1  1/2  1/3  ...  1/n  ...
2/1  2/2  2/3  ...  2/n  ...
3/1  3/2  3/3  ...  3/n  ...
...
m/1  m/2  m/3  ...  m/n  ...
The diagonals are: (1/1); (1/2, 2/1); (1/3, 2/2, 3/1); ...
Counting along these secondary diagonals reaches each term after a finite number of steps.
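The diagonal walk can be sketched in a few lines; a supplementary illustration (not part of the notes) of how every positive fraction is reached after finitely many steps:

```python
# Cantor's enumeration: walk the finite secondary diagonals i + j = s.
from fractions import Fraction

def cantor_pairs(diagonals):
    for s in range(2, diagonals + 2):    # one finite diagonal per value of i + j
        for i in range(1, s):
            yield i, s - i

pairs = list(cantor_pairs(10))
assert (3, 4) in pairs                   # the fraction 3/4 is reached
seen = {Fraction(i, j) for i, j in pairs}
assert Fraction(1, 2) in seen and Fraction(5, 3) in seen
```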
The set R is not countable:
Proof:
We show that the interval [0, 1] (⊆ R) is not countable, by contradiction: suppose it were countable. Then there would exist a sequence such that [0, 1] = {x1, x2, ..., xn, ...}.
1) We divide [0, 1] into the three closed intervals of equal length [0, 1/3], [1/3, 2/3], [2/3, 1], and among them we choose as I1 an interval which does not contain x1. So after the choice the interval I1 is obtained, with the properties:
a) I1 = [a1, b1];
b) b1 − a1 = 1/3;
c) x1 ∉ I1.
2) For each k > 1 we apply the same procedure to the interval I_{k−1}, choosing as Ik = [ak, bk] a third of it with:
a) Ik ⊆ I_{k−1};
b) bk − ak = 1/3^k;
c) xk ∉ Ik.
The sequence of intervals (Ik)_{k>0} is thus obtained, with the following properties:
a) all the elements of the sequence are closed intervals included in [0, 1], and the sequence is decreasing (in the sense that Ik ⊆ I_{k−1});
b) bk − ak = 1/3^k;
c) xj ∉ Ik, ∀j = 1, ..., k.
From these properties it can be seen that the endpoints of the intervals form two sequences, (ak)_{k>0} increasing and (bk)_{k>0} decreasing, all the elements of the sequence (ak)_{k>0} being smaller than all the elements of the sequence (bk)_{k>0}. Because the two sequences are monotone and bounded, they are convergent to the limits denoted a and b, respectively, and a ≤ b holds. It follows that the intersection ∩_{k>0} Ik = [a, b] (so, in particular, it is nonempty), and no element of this intersection can be a member of the sequence (xn)_{n>0} (by construction), which is a contradiction.
7.5.2. Remark. The sets I and J are traditionally regarded as finite index sets (they may also be considered infinite; when it is the case, the distinction will be made in context); these functions are represented as tables, but other conventions also exist. By convention, the set I indexes the rows and J indexes the columns. A matrix with m rows and n columns is also called a matrix of type (m, n). Matrices of type (m, 1) or (1, n) are also called vectors (column vectors, respectively row vectors). Each possible choice (i, j) of the row and column indices is called a place (position, cell) of the matrix; the value found at a place is called an entry (a distinction must be made between the place (i, j) and the element aij which occupies the place, that is, between the argument and the codomain value attached to the argument). The operations with matrices that will be described in what follows do not always make sense for arbitrary mathematical expressions; usually, various operations are performed only on certain types of matrices, and the difference is inferred from context.
7.5.3. Definition. A submatrix of a matrix is the restriction of the matrix to a subset of indices: if A = (aij)_{i∈I, j∈J} and I0 ⊆ I, J0 ⊆ J, then (aij)_{i∈I0, j∈J0} is a submatrix of A.
[The inverse of an invertible square matrix may be computed as A^{−1} = (1/det A) · Adjugate(A).]
7.5.8. Definition. In = (δij)_{i=1..n, j=1..n} is the identity matrix; (0)_{i=1..n, j=1..m} is called the null matrix (engl. null matrix); a square matrix (engl. square matrix) has n = m (the number of rows equals the number of columns); the main diagonal of a square matrix consists of the places (i, i), i = 1, ..., n; by extension, the main diagonal of an arbitrary matrix consists of the places (i, i), i = 1, ..., min(n, m); a symmetric matrix (engl. symmetric matrix) satisfies A = A^T (it can only be square); a diagonal matrix (engl. diagonal matrix) is of the form (di·δij)_{i=1..n, j=1..n}.
(2) If a matrix A ∈ Mn,n(R) is strictly upper (lower) triangular, then A^n = 0.
7.5.10. Remark. If the inverse matrix exists, it is unique; it is usually denoted by A^{−1}.
Proof. From AB1 = B1A = I and AB2 = B2A = I it follows that B1 and B2 have the same dimensions, and B1 = B1·I = B1·(AB2) = (B1A)·B2 = I·B2 = B2.
7.5.11. Definition. 1n is a column matrix of dimension n with all the entries equal to 1. e^n_ij is the square matrix of dimension n which has the value 1 at the place (i, j) and 0 elsewhere.
7.5.12. Remark. One observes that
e^n_ij · e^n_kl = 0 if j ≠ k, and e^n_ij · e^n_kl = e^n_il if j = k.
T^n_ij(a) = In + a·e^n_ij is called an elementary matrix (elementary transformation), for i ≠ j; the multiplication on the left of a matrix (not necessarily square, of dimension (n, m)) by the elementary matrix T^n_ij(a) results in a new matrix (also of dimension (n, m)), whose rows coincide with the rows of the old matrix, except for row i, which is replaced by the value obtained by adding to the old row i the row j multiplied by a (row_i + a·row_j → row_i). T^n_ij(a)·A is the result of the elementary (row) operation: the row j multiplied by a is added to the row i and the result is written on row i (an assignment operation).
7.5.13. Example. T^4_24(a) = I4 + a·e^4_24 is the 4×4 identity matrix with the extra entry a at the place (2, 4); for A = (aij)_{i=1..4, j=1..5}, the product T^4_24(a)·A coincides with A except for row 2, which becomes
(a21 + a·a41, a22 + a·a42, a23 + a·a43, a24 + a·a44, a25 + a·a45).
A·T^m_ij(a) is the result of the elementary (column) operation: the column i multiplied by a is added to the column j. For example, for A = (aij)_{i=1..3, j=1..4}, the product A·T^4_32(a) coincides with A except for column 2, which becomes (a12 + a·a13, a22 + a·a23, a32 + a·a33).
[The matrix Q^n_ij coincides with the identity matrix except for the rows i and j, which are interchanged, row j having its sign changed (the element below the main diagonal is −1); it may also be viewed as the matrix obtained from the identity matrix by permuting the columns i and j, with the element below the main diagonal having the value −1.]
7.5.20. Remark. Q^n_ij = T^n_ij(−1) · T^n_ji(1) · T^n_ij(−1) · In (so Q^n_ij is a transformation matrix), that is, the following operations are performed successively on the identity matrix:
(1) row j is subtracted from row i and the result is put in place of row i;
(2) row i is added to row j and the result is put in place of row j;
(3) row j is subtracted from row i and the result is put in place of row i.
7.5.21. Remark. R^n_ij = T^n_ji(−1) · T^n_ij(1) · T^n_ji(−1) (so R^n_ij is a transformation matrix), that is, the following operations are performed successively on the identity matrix:
(1) row i is subtracted from row j and the result is put in place of row j;
(2) row j is added to row i and the result is put in place of row i;
(3) row i is subtracted from row j and the result is put in place of row j.
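The composition of the three shears can be verified numerically; a supplementary numpy sketch (not part of the notes, 0-based indices):

```python
# Q_ij = T_ij(-1) T_ji(1) T_ij(-1) interchanges rows i and j, one with changed sign.
import numpy as np

def T(n, i, j, a):
    M = np.eye(n)
    M[i, j] += a
    return M

n, i, j = 4, 1, 2
Q = T(n, i, j, -1) @ T(n, j, i, 1) @ T(n, i, j, -1)
A = np.arange(16, dtype=float).reshape(4, 4)
B = Q @ A
assert np.allclose(B[i], -A[j])                  # row i becomes -row j
assert np.allclose(B[j], A[i])                   # row j becomes row i
assert abs(np.linalg.det(Q) - 1) < 1e-12         # product of determinant-1 shears
```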
[Examples: multiplying a matrix on the left by Q^n_ij or R^n_ij interchanges its rows i and j, one of them with changed sign; multiplying on the right by Q^m_ij or R^m_ij interchanges its columns i and j, one of them with changed sign.]
7.5.23. Remark. There exists a transformation matrix which transforms any nonzero column vector a = (a1, ..., an)^T into e^n_1 = (1, 0, ..., 0)^T.
with 1 ≤ j1 < j2 < ... < jr ≤ m, of the form
0 .. 0 1 b1,j1+1 .. ..      ..      ..       .. b1m
0 .. 0 0 ..      0 1 b2,j2+1 ..      ..       .. b2m
0 .. 0 0 ..      0 0 ..      0 1 b3,j3+1 ..     b3m
.. .. .. ..      .. ..       .. ..   ..      .. ..
0 .. 0 0 ..      0 0 ..      0 0 .. 0 1 br,jr+1 brm
0 .. 0 0 ..      0 0 ..      0 0 .. 0 0 ..      0
.. .. .. ..      .. ..       .. ..   ..      .. ..
0 .. 0 0 ..      0 0 ..      0 0 .. 0 0 ..      0
(the row echelon form of the initial matrix) (engl. row echelon form); it is characterized by the following: the places (1, j1), (2, j2), ..., (r, jr) form a pseudo-diagonal (staircase), and the elements to the left of and below this pseudo-diagonal are null (the number of zeroes at the beginning of each row increases strictly with the row index).
7.5.25. Remark. The matrix A and any echelon form of it have the same rank.
7.5.26. Remark. Another echelon-type form is the reduced row echelon form, which has the following properties:
(1) The number of zeroes at the beginning of each row increases with the row index.
(2) The first nonzero element of each row is equal to 1.
(3) Each column which contains the first nonzero value of a row has all its other elements null.
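The three properties translate directly into an algorithm; a minimal sketch (an illustration, not the notes' algorithm), using exact rational arithmetic:

```python
# Reduced row echelon form: leading 1 per row, cleared pivot columns.
from fractions import Fraction

def rref(rows):
    M = [[Fraction(x) for x in row] for row in rows]
    lead = 0
    for r in range(len(M)):
        while lead < len(M[0]) and all(M[k][lead] == 0 for k in range(r, len(M))):
            lead += 1                                  # property (1): skip zero columns
        if lead == len(M[0]):
            break
        pivot = next(k for k in range(r, len(M)) if M[k][lead] != 0)
        M[r], M[pivot] = M[pivot], M[r]
        M[r] = [x / M[r][lead] for x in M[r]]          # property (2): leading 1
        for k in range(len(M)):
            if k != r and M[k][lead] != 0:             # property (3): clear the column
                M[k] = [a - M[k][lead] * b for a, b in zip(M[k], M[r])]
        lead += 1
    return M

R = rref([[2, 4, -2], [4, 9, -3], [-2, -3, 7]])        # an invertible example
assert R == [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
```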
7.6. Determinants
|a11 a12; a21 a22| = a11·a22 − a12·a21;
|a11 a12 a13; a21 a22 a23; a31 a32 a33| = a11·a22·a33 + a12·a23·a31 + a13·a21·a32 − a13·a22·a31 − a11·a23·a32 − a12·a21·a33.
Scalar multiplication: A = (aij)_{i=1..n, j=1..m}, C = α·A ⟺ cij = α·aij, ∀i = 1..n, j = 1..m.
7.7.3. Definition. Matrix sum (engl. sum of matrices):
A = (aij)_{i=1..n, j=1..m}, B = (bij)_{i=1..n, j=1..m}, C = A + B ⟺ C = (cij)_{i=1..n, j=1..m}, cij = aij + bij, ∀i = 1..n, j = 1..m.
7.7.4. Definition. Transposition (engl. transpose):
A = (aij)_{i=1..n, j=1..m}, C = A′ = A^T = ᵗA ⟺ C = (cij)_{i=1..m, j=1..n}, cij = aji.
7.7.5. Definition. The trace (engl. trace) of a square matrix:
Tr(A) = Σ_{i=1}^{n} aii.
For A of type (n, m) and B of type (m, n), with C = AB and D = BA:
Tr(AB) = Σ_{l=1}^{n} cll = Σ_{l=1}^{n} Σ_{k=1}^{m} alk·bkl = Σ_{k=1}^{m} Σ_{l=1}^{n} bkl·alk = Σ_{k=1}^{m} dkk = Tr(BA).
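The identity holds even for non-square factors; a supplementary numeric sketch (not part of the notes):

```python
# Tr(AB) = Tr(BA) for A of type (3, 5) and B of type (5, 3).
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))
B = rng.standard_normal((5, 3))
assert np.isclose(np.trace(A @ B), np.trace(B @ A))
```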
det A = Σ_{σ∈Sn} ε(σ) Π_{k=1}^{n} a_{kσ(k)};
det A = Σ_{k=1}^{n} aik·γik (the expansion of the determinant along row i);
det A = Σ_{k=1}^{n} akj·γkj (the expansion of the determinant along column j), where:
7.7.9. Definition. The algebraic complement (cofactor) of the place (position) (i, k) is:
γik = (−1)^{i+k}·dik,
where dik is the corresponding minor.
[Elementwise operations: the quotient C with cij = aij/bij, ∀i = 1..n, j = 1..m (for bij ≠ 0), and the elementwise product with cij = aij·bij.]
[The Kronecker product A ⊗ B is built] as follows: each place of the matrix A is occupied by the element at the place (i, j) multiplied by the matrix B.
7.9.2. Definition. Direct sum (engl. direct sum): A ⊕ B = [A 0; 0 B].
For conformable partitions A = [A11 A12; A21 A22] and B = [B11 B12; B21 B22], we obtain
AB = [A11·B11 + A12·B21  A11·B12 + A12·B22; A21·B11 + A22·B21  A21·B12 + A22·B22].
Proof. For A = (aij)_{i=1..n, j=1..m}, B = (bjk)_{j=1..m, k=1..p}, the product is C = (cik)_{i=1..n, k=1..p}, where cik = Σ_{j=1}^{m} aij·bjk. The partition of the matrix A is:
A11 = (aij)_{i=1..n1, j=1..m1}, A12 = (aij)_{i=1..n1, j=m1+1..m}, A21 = (aij)_{i=n1+1..n, j=1..m1}, A22 = (aij)_{i=n1+1..n, j=m1+1..m}.
The partition of the matrix B is described analogously:
B11 = (bjk)_{j=1..m1, k=1..p1}, B12 = (bjk)_{j=1..m1, k=p1+1..p}, B21 = (bjk)_{j=m1+1..m, k=1..p1}, B22 = (bjk)_{j=m1+1..m, k=p1+1..p}.
From cik = Σ_{j=1}^{m} aij·bjk = Σ_{j=1}^{m1} aij·bjk + Σ_{j=m1+1}^{m} aij·bjk, considered successively for the four blocks of indices (i = 1..n1 or i = n1+1..n, combined with k = 1..p1 or k = p1+1..p), the four blocks of AB are obtained as stated: for example, for i = 1..n1 and k = 1..p1 the two sums are the entries of A11·B11 and of A12·B21, respectively.
We have thus proved that AB = [A11·B11 + A12·B21  A11·B12 + A12·B22; A21·B11 + A22·B21  A21·B12 + A22·B22].
[A11 A12; A21 A22] · [A22 −A12; −A21 A11] = [A11·A22 − A12·A21  A12·A11 − A11·A12; A21·A22 − A22·A21  A22·A11 − A21·A12];
if A11 and A12 commute, the (1, 2) block is null, and if A21 and A22 commute, the (2, 1) block is null.
det [A11 0; A21 I] = det A11.
Proof. One expands the determinant along the columns (respectively rows) corresponding to the identity matrix.
det [A11 0; 0 A22] = det A11 · det A22.
Proof. One observes that [A11 0; 0 A22] = [A11 0; 0 I] · [I 0; 0 A22], so det [A11 0; 0 A22] = det A11 · det A22.
7.9.8. Remark. det [I A12; A21 A22] = det(A22 − A21·A12).
Proof. One observes that
[I 0; −A21 I] · [I A12; A21 A22] = [I A12; 0 A22 − A21·A12],
and det [I 0; −A21 I] = 1, so one obtains det [I A12; A21 A22] = det(A22 − A21·A12).
7.9.9. Remark. If A = [A11 A12; A21 A22] and A11 is square and invertible, then det A = det A11 · det(A22 − A21·A11^{−1}·A12).
Proof. [A11 A12; A21 A22] = [A11 0; A21 I] · [I A11^{−1}·A12; 0 A22 − A21·A11^{−1}·A12]
⇒ det [A11 A12; A21 A22] = det A11 · det(A22 − A21·A11^{−1}·A12).
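The Schur-complement determinant formula is easy to confirm numerically; a supplementary sketch (not part of the notes), with an arbitrary random partitioned matrix:

```python
# det A = det(A11) * det(A22 - A21 A11^{-1} A12), A11 invertible.
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((5, 5))
A11, A12 = M[:2, :2], M[:2, 2:]
A21, A22 = M[2:, :2], M[2:, 2:]

schur = A22 - A21 @ np.linalg.inv(A11) @ A12
assert np.isclose(np.linalg.det(M), np.linalg.det(A11) * np.linalg.det(schur))
```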
Similarly, [I_{n1} 0; −C I_{n2}] · [I_{n1} B; C D] = [I_{n1} B; 0 D − C·B], so det [I_{n1} B; C D] = det(D − C·B).
7.9.11. Example. Successive multiplications by elementary transformation matrices reduce a 4×4 matrix (aij) step by step: clearing the first column using row 1 and then the second column using row 2 replaces the remaining entries by expressions such as
a33 − a13·a31 − a23·a32, a34 − a31·a14 − a32·a24, a43 − a13·a41 − a23·a42, a44 − a14·a41 − a24·a42,
each transformation matrix recording the row operation which produced it.
2
Proof. Fie B = 4
8
>
>
B11
>
>
>
>
>
< B12
>
>
B21
>
>
>
>
>
: B22
B21 B22
0
a41
A21 A22
2
5, C = A22
a44
2
a
4 31
a41
32
07 6
76
6
0 07
76
76
6
1 07
76
54
0 1
0
1
a32
0
32
a
5 4 13
a42
a23
a32
a14
a24
3 2
07 6
7 6
6
1 0 07
7 6
7==6
6
0 1 07
7 6
5 4
0 0 1
0
a31
0
A21 A111
A111 A12 C
C
5:
5 = AD. Obtinem
A21 A111
+ A12 C
1
A21 A111
+ A22 C
5:
1
a31
a32
a41
a42
3
07
7
0 07
7
7
1 07
7
5
0 1
0
a34
A21 A111
32
07 61
76
6
1 0 07
7 60
76
6
0 1 07
7 60
54
0 0 1
0
0
A11 A12
=4
B11 B12
2
a
5 = 4 33
a24 a42
a43
a32 a24
A12 C
A12 C
A22 C
A21 A = I
+ A12 C
1
=0
A21 A111 = 0
=I
228
7. REVIEWS
pentru D = 4
A111
Rezult
a B = AD = 4
A21 A111
I 0
0 I
5 si A
A111 A12 C 1
5.
= D.
A = [A11 A12; A21 A22] ⇒ A^T = [A11^T A21^T; A12^T A22^T].
[Uniqueness of the Moore–Penrose pseudoinverse:] if B and C both satisfy ABA = A, BAB = B, (AB)^T = AB, (BA)^T = BA (and the same relations with C in place of B), then
AB = (ACA)B = (AC)^T·(AB)^T = C^T·(ABA)^T = C^T·A^T = (AC)^T = AC,
and similarly BA = CA; hence B = BAB = (BA)B = (CA)B = C(AB) = C(AC) = (CA)C = CAC = C.
7.10.3. Remark. Let A ∈ M(m,k)(R); then A^T·A ∈ M(k,k)(R) is symmetric and positive semidefinite: x^T·(A^T·A)·x = (A·x)^T·(A·x) ≥ 0, so A^T·A is positive semidefinite; because (A^T·A)^T = A^T·A, the matrix A^T·A is also symmetric.
[Singular value decomposition] For A ∈ M(m,k)(R) of rank r there exist U ∈ M(m,r) and V ∈ M(k,r) with orthonormal columns (U^T·U = Ir, V^T·V = Ir) and a diagonal matrix D ∈ M(r,r) with d11 ≥ d22 ≥ ... ≥ drr > 0, such that A = U·D·V^T.
Proof. Let A ∈ M(m,k)(R); then A^T·A ∈ M(k,k)(R) is symmetric and positive semidefinite. There exists an orthogonal matrix W = [W1 W2] ∈ M(k,k)(R) (so that W1·W1^T + W2·W2^T = Ik, W1^T·W1 = Ir, W2^T·W2 = I_{k−r}) such that
[W1^T; W2^T] · A^T·A · [W1 W2] = [G1 0; 0 0],
where G1 ∈ M(r,r) is the diagonal matrix of the nonzero (strictly positive) eigenvalues of A^T·A; that is:
W1^T·A^T·A·W1 = G1, W1^T·A^T·A·W2 = 0, W2^T·A^T·A·W1 = 0, W2^T·A^T·A·W2 = 0.
In particular, W2^T·A^T·A·W2 = (A·W2)^T·(A·W2) = 0, so A·W2 = 0(m,k−r).
One defines:
D = √G1 (i.e. D = (√gii)_{i=1..r}, in decreasing order),
V = W1,
U = A·V·D^{−1}.
Then
U^T·U = (A·V·D^{−1})^T·(A·V·D^{−1}) = D^{−1}·V^T·A^T·A·V·D^{−1} = D^{−1}·W1^T·A^T·A·W1·D^{−1} = D^{−1}·G1·D^{−1} = diag(√gii·(1/gii)·√gii)_{i=1..r} = Ir,
and, because A·W2·W2^T = 0(m,k),
U·D·V^T = A·V·D^{−1}·D·V^T = A·W1·W1^T = A·(W1·W1^T + W2·W2^T) = A.
The matrix V·D^{−1}·U^T satisfies:
1) A·(V·D^{−1}·U^T)·A = U·D·V^T·V·D^{−1}·U^T·U·D·V^T = U·D·Ir·D^{−1}·Ir·D·V^T = U·D·V^T = A;
2) (V·D^{−1}·U^T)·A·(V·D^{−1}·U^T) = V·D^{−1}·Ir·D·Ir·D^{−1}·U^T = V·D^{−1}·U^T;
3) (U·D·V^T)·(V·D^{−1}·U^T) = U·D·Ir·D^{−1}·U^T = U·U^T (symmetric);
4) (V·D^{−1}·U^T)·(U·D·V^T) = V·D^{−1}·Ir·D·V^T = V·V^T (symmetric);
so V·D^{−1}·U^T is the Moore–Penrose pseudoinverse of A.
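The construction above can be replayed with numpy's SVD routine; a supplementary sketch (not part of the notes), on an arbitrary full-rank example:

```python
# Reduced SVD A = U D V^T and the four Penrose conditions for V D^{-1} U^T.
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 4)) @ rng.standard_normal((4, 4))  # rank 4 example
U, s, Vt = np.linalg.svd(A, full_matrices=False)
assert np.allclose(U @ np.diag(s) @ Vt, A)

P = Vt.T @ np.diag(1 / s) @ U.T        # candidate pseudoinverse V D^{-1} U^T
assert np.allclose(A @ P @ A, A)       # 1)
assert np.allclose(P @ A @ P, P)       # 2)
assert np.allclose((A @ P).T, A @ P)   # 3)
assert np.allclose((P @ A).T, P @ A)   # 4)
assert np.allclose(P, np.linalg.pinv(A))
```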
Since the eigenvector columns are identical up to sign, the columns of the matrices U and V are either equal (for positive roots) or with opposite sign (for negative roots). Therefore, if A is positive semidefinite, it has an SVD decomposition A = U·D·U^T, with U having orthogonal columns and D positive diagonal.
A matrix A⁻ with A·A⁻·A = A is called a generalized inverse of A (a left inverse satisfies A⁻·A = I and a right inverse satisfies A·A⁻ = I; multiplying A·A⁻·A = A on the left by a left inverse shows that these notions are compatible with the generalized inverse when they exist). The system A·x = y (x ∈ R^k, y ∈ R^m) has a solution ⟺ A·A⁻·y = y; moreover, in this case the flat set of all the solutions is the set of vectors
x = A⁻·y + [Ik − A⁻·A]·z, ∀z ∈ R^k;
indeed, A·(A⁻·y + [Ik − A⁻·A]·z) = A·A⁻·y + A·[Ik − A⁻·A]·z = y + A·z − A·A⁻·A·z = y + A·z − A·z = y.
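The solvability criterion and the solution set can be tested numerically; a supplementary sketch (not part of the notes) using the Moore–Penrose inverse as one particular generalized inverse A⁻:

```python
# Ax = y is solvable iff A A^- y = y; then x = A^- y + (I - A^- A) z solves it.
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 6))        # wide matrix: many solutions
Am = np.linalg.pinv(A)                 # satisfies A A^- A = A
y = A @ rng.standard_normal(6)         # y in the range of A, hence solvable
assert np.allclose(A @ Am @ y, y)      # the criterion A A^- y = y

z = rng.standard_normal(6)
x = Am @ y + (np.eye(6) - Am @ A) @ z  # a solution for every z
assert np.allclose(A @ x, y)
```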
7.11. Algebraic Structures
7.11.1. Definition. Given a nonempty set M, a composition law on M is any function φ( , ): M × M → M.
One says that ∗ has a neutral element on M if: ∃e ∈ M, ∀x ∈ M, x ∗ e = e ∗ x = x.
[The neutral element is unique:] suppose there were two neutral elements, e and f. Then e = e ∗ f = f, so the two coincide.
Let ∗ be an associative law with a neutral element on M. An element x ∈ M is called symmetrizable if: ∃x′ ∈ M, x ∗ x′ = x′ ∗ x = e.
[The symmetric element is unique when ∗ is associative and has a neutral element:] suppose that for x ∈ M there were two symmetrics with respect to the law ∗: x ∗ x′ = x′ ∗ x = e and x ∗ x″ = x″ ∗ x = e. Then:
x′ = x′ ∗ e = x′ ∗ (x ∗ x″) = (x′ ∗ x) ∗ x″ = e ∗ x″ = x″,
so the two symmetric elements are equal.
7.11.9. Remark. Let x ∈ M be a symmetrizable element. Then the symmetric of x, x′ ∈ M, is also symmetrizable, and its symmetric is x: (x′)′ = x.
Proof. If it exists, the element (x′)′ must satisfy the relation (x′)′ ∗ x′ = x′ ∗ (x′)′ = e [I]. From the definition of the symmetrizability of x it is known that the element x′ exists and satisfies the relation x ∗ x′ = x′ ∗ x = e [II]. Comparing [I] and [II] and using the uniqueness of the symmetric, (x′)′ = x.
[A structure (M, ∗) in which ∗ is associative, has a neutral element and every element is symmetrizable is called a group; if ∗ is also commutative, the group is called commutative (abelian).]
H ⊆ M is a subgroup if:
(1) ∀x, y ∈ H, x ∗ y ∈ H;
(2) ∀x ∈ H, x′ ∈ H.
7.11.15. Remark. If H is a subgroup of (M, ∗), then:
(1) e ∈ H;
(2) (H, ∗|H) is a group.
Proof. 1.
2.
7.11.16. Definition. Consider two groups (M, ∗) and (N, ∘). A function f( ): M → N is called:
(1) Group morphism (from (M, ∗) to (N, ∘)), if ∀x, y ∈ M, f(x ∗ y) = f(x) ∘ f(y) [the function "transports" the result of the operations from the domain into the result of the operations from the codomain].
(2) Group isomorphism, if f( ) is a morphism and is bijective [the function "transports one-to-one" the result of the operations from the domain into the result of the operations from the codomain]. If M = N and ∗ = ∘, then a morphism (isomorphism) is also called an endomorphism (automorphism).
7.11.17. Remark. If f( ) is a morphism from (M, ∗) to (N, ∘), then:
(1) f(eM) = eN [eM is the neutral element of the group (M, ∗) and eN is the neutral element of the group (N, ∘)];
(2) f(x′) = (f(x))′ [in the left-hand side the symmetric is in (M, ∗), while in the right-hand side the symmetric is in (N, ∘)];
(3) If f( ) is an isomorphism from (M, ∗) to (N, ∘), then f^{−1}( ) is an isomorphism from (N, ∘) to (M, ∗).
Proof. 1.
2.
3.
7.11.18. Denition. Se consider
a multimea A mpreun
a cu dou
a legi de compozitie abstracte, notate
aditiv si multiplicativ. Tripletul (A; +; ) este numit inel dac
a:
(1) (A; +) este grup comutativ (abelian);
(2) (A; ) este monoid;
(3) 8x; y; z 2 A, x (y + z) = x y + x z, (y + z) x = y x + z x [nmultirea este distributiv
a la
stnga si la dreapta fata de adunare] [elementul neutru fata de operatia notat
a aditiv este notat
0] [elementul neutru fata de operatia notat
a multiplicativ este notat 1] [simetricul unui element x
fata de operatia notat
a aditiv este notat
b nseamn
a a + ( b)]
7.11.19. Remark. …
(3) x · 0 = 0 · x = 0;
(4) 1 ≠ 0;
(5) (-x) · y = x · (-y) = -xy and (-x) · (-y) = xy [the rule of signs];
(6) x · (y - z) = xy - xz and (y - z) · x = yx - zx;
(7) if (A, +, ·) is an integral domain, then xy = xz and x ≠ 0 ⇒ y = z [one may cancel on the left by a nonzero element, even if it is not invertible], and yx = zx and x ≠ 0 ⇒ y = z [one may cancel on the right by a nonzero element, even if it is not invertible].
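Property (7) is easy to probe computationally. The sketch below is an added illustration (not from the notes): it checks cancellation in the residue rings Z_6 and Z_7. Z_6 has zero divisors, so cancellation by a nonzero element can fail there, while Z_7 is an integral domain (in fact a field).

```python
def cancellation_holds(n: int) -> bool:
    """Check: x*y = x*z (mod n) and x != 0  =>  y = z, for all x, y, z in Z_n."""
    return all(
        y == z
        for x in range(1, n)
        for y in range(n)
        for z in range(n)
        if (x * y) % n == (x * z) % n
    )

# Z_6 has zero divisors (2 · 3 = 0 mod 6), so cancellation fails:
assert not cancellation_holds(6)   # e.g. 2·1 = 2·4 = 2 (mod 6), but 1 != 4
# Z_7 is an integral domain, so cancellation always holds:
assert cancellation_holds(7)
```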
7.11.20. Definition. Consider two rings (A, +, ·) and (B, +′, ·′). A function f(·): A → B is called a ring morphism if:
(1) f(x + y) = f(x) +′ f(y);
(2) f(x · y) = f(x) ·′ f(y);
(3) f(1) = 1′.
If, in addition, the function f(·) is also bijective, then f(·) is called an isomorphism.
7.11.21. Remark. If f(·) is a ring morphism, then:
(1) f(0) = 0′;
(2) f(-x) = -f(x);
(3) if x is invertible in (A, +, ·), then f(x) is invertible in (B, +′, ·′) and (f(x))⁻¹ = f(x⁻¹).
… x′ ⇒ f(1) = 1′ [because x′ is not a divisor of 0].
7.11.23. Definition. A ring (K, +, ·) is called a field if 0 ≠ 1 and every nonzero element admits an inverse: ∀x ∈ K, x ≠ 0, ∃x⁻¹ ∈ K.
p1 = t^2 + 2t - 2, p2 = t^2 - 2t, p3 = -t^2 + 1, p4 = t^2 - 4t + 1, p5 = 2t^2 + 1
U ∘ U = 1V.
Show that:
(0,5p) a) X1 = {x + U(x); x ∈ V} is a vector subspace in (V, K).
(0,5p) b) X2 = {x - U(x); x ∈ V} is a vector subspace in (V, K).
(1p) c) V = X1 ⊕ X2.
(1p) VI. State the definitions of the objects involved and prove that the composition of two linear transformations is a linear transformation.
Note: The grade is 1p + the points obtained for each exercise (for the quality of the explanations).
8.1.1. Solution.
I. For the basis E = {t^2, t, 1} in (R2[t], R), the representations of the polynomials are:
[p1]E = (1, 2, -2), [p2]E = (1, -2, 0), [p3]E = (-1, 0, 1), [p4]E = (1, -4, 1), [p5]E = (2, 0, 1).
a) The determinant of the matrix [p2 p3 p4] = [1 -1 1; -2 0 -4; 0 1 1] is 0, while the minor |1 -1; -2 0| = -2 ≠ 0, so the rank is 2
⇒ A basis for X1 is {p2, p3} [X1 = span(p2, p3)] and dim X1 = 2.
b) The minor |1 2; 2 0| = -4 ≠ 0, so the matrix [p1 p5] = [1 2; 2 0; -2 1] has rank 2
⇒ A basis for X2 is {p1, p5} [X2 = span(p1, p5)] and dim X2 = 2.
c) The minor |1 -1 2; -2 0 0; 0 1 1| = -6 ≠ 0, so the matrix [p2 p3 p4 p5] = [1 -1 1 2; -2 0 -4 0; 0 1 1 1] has rank 3
⇒ A basis for X1 + X2 is {p2, p3, p5}, dim (X1 + X2) = 3, X1 + X2 = R2[t].
dim (X1 ∩ X2) = dim X1 + dim X2 - dim (X1 + X2) = 2 + 2 - 3 = 1.
f) p1 ∈ X1 ∩ X2; [p1]_{p1} = [1]; [p1]_{p2,p3} = (-1, -2); [p1]_{p1,p5} = (1, 0); [p1]E = (1, 2, -2).
g) The sum is not direct, because the intersection has dimension 1.
Because p1 ∈ X1 ∩ X2, p1 ∈ X1 and p1 ∈ X2; X2 being a subspace, -p1 ∈ X2 as well.
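The ranks and dimensions computed above can be double-checked numerically. The sketch below is an added illustration (not part of the notes): it stacks the coordinate columns of p1, ..., p5 in the basis E and asks numpy for the ranks.

```python
import numpy as np

# coordinates of p1..p5 in the basis E = {t^2, t, 1}
p1 = np.array([1, 2, -2]); p2 = np.array([1, -2, 0]); p3 = np.array([-1, 0, 1])
p4 = np.array([1, -4, 1]); p5 = np.array([2, 0, 1])

def rank(*cols):
    return np.linalg.matrix_rank(np.column_stack(cols))

assert rank(p2, p3, p4) == 2             # dim X1 = 2, basis {p2, p3}
assert rank(p1, p5) == 2                 # dim X2 = 2, basis {p1, p5}
assert rank(p2, p3, p4, p1, p5) == 3     # dim (X1 + X2) = 3, so X1 + X2 = R_2[t]
assert np.array_equal(p1, -p2 - 2 * p3)  # p1 in X1, with [p1]_{p2,p3} = (-1, -2)
```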
II. (U ∘ U)(·) = 1V(·) ⟺ U(U(x)) = x, ∀x ∈ V.
Injectivity: consider x1, x2 ∈ V such that U(x1) = U(x2) ⇒ U(U(x1)) = U(U(x2)) ⇒ x1 = x2 (unique).
Surjectivity: consider y ∈ V; then U(U(y)) = y and, with xy := U(y), we get:
∀y ∈ V, ∃xy ∈ V [xy = U(y)] such that U(xy) = y.
III.
a) y1, y2 ∈ X1 = {x + U(x); x ∈ V} ⇒ ∃x1, x2 ∈ V such that y1 = x1 + U(x1) and y2 = x2 + U(x2).
Then y1 + y2 = x1 + U(x1) + x2 + U(x2) = (x1 + x2) + U(x1 + x2) ∈ X1.
For α ∈ R, αy1 = α(x1 + U(x1)) = (αx1) + U(αx1) ∈ X1.
b) y1, y2 ∈ X2 = {x - U(x); x ∈ V} ⇒ ∃x1, x2 ∈ V such that y1 = x1 - U(x1) and y2 = x2 - U(x2).
Then y1 + y2 = x1 - U(x1) + x2 - U(x2) = (x1 + x2) - U(x1 + x2) ∈ X2.
For α ∈ R, αy1 = α(x1 - U(x1)) = (αx1) - U(αx1) ∈ X2.
c) For x ∈ V, x = (x + U(x))/2 + (x - U(x))/2 ∈ X1 + X2, because:
x + U(x) ∈ X1 (subspace) ⇒ (x + U(x))/2 ∈ X1;
x - U(x) ∈ X2 (subspace) ⇒ (x - U(x))/2 ∈ X2.
So X1 + X2 = V.
Consider y ∈ X1 ∩ X2 ⇒ ∃x1, x2 ∈ V such that y = x1 + U(x1) = x2 - U(x2) ⇒
U(y) = U(x2 - U(x2)) = U(x2) - U(U(x2)) = U(x2) - x2 = -y, while U(y) = U(x1 + U(x1)) = U(x1) + x1 = y, so y = -y ⇒ y = 0.
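The decomposition in part c) can be illustrated with a concrete involution. The sketch below is an added illustration (not from the notes): the reflection U(x1, x2) = (x2, x1) on R² satisfies U ∘ U = 1, and every x splits as (x + U(x))/2 + (x - U(x))/2.

```python
import numpy as np

# Reflection U(x1, x2) = (x2, x1): an involution, U ∘ U = identity.
U = np.array([[0, 1],
              [1, 0]])
assert np.array_equal(U @ U, np.eye(2, dtype=int))

rng = np.random.default_rng(0)
for _ in range(100):
    x = rng.standard_normal(2)
    a = (x + U @ x) / 2          # component in X1 = {x + U(x); x in V}
    b = (x - U @ x) / 2          # component in X2 = {x - U(x); x in V}
    assert np.allclose(a + b, x)      # X1 + X2 = V
    assert np.allclose(U @ a, a)      # a is fixed by U, so a lies in X1
    assert np.allclose(U @ b, -b)     # b is negated by U, so b lies in X2
```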
Since the sum covers the space and their intersection is null, X1 ⊕ X2 = V.
a) The system
x1 + x2 = 0
x1 + x3 = 0
x1 + x4 = 0
has the solutions x1 = α, x2 = -α, x3 = -α, x4 = -α (α ∈ R), so {(1, -1, -1, -1)} is a basis and the dimension is 1.
The matrix form of the map is:
(x1 + x2, x1 + x3, x1 + x4) = [1 1 0 0; 1 0 1 0; 1 0 0 1] · (x1, x2, x3, x4).
b) The matrix [1 1 0 0; 1 0 1 0; 1 0 0 1] has rank 3, so the system
x1 + x2 = y1
x1 + x3 = y2
x1 + x4 = y3
is compatible for each y1, y2, y3.
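Parts a) and b) can be confirmed numerically. In the sketch below (an added illustration, not from the notes) M is the matrix of the map x ↦ (x1 + x2, x1 + x3, x1 + x4).

```python
import numpy as np

# matrix of U(x) = (x1 + x2, x1 + x3, x1 + x4) in the canonical bases
M = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [1, 0, 0, 1]])

assert np.linalg.matrix_rank(M) == 3              # b) compatible for every (y1, y2, y3)
assert 4 - np.linalg.matrix_rank(M) == 1          # a) the solution space has dimension 1
assert not np.any(M @ np.array([1, -1, -1, -1]))  # (1, -1, -1, -1) solves the homogeneous system
```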
4x1x2, 2x2x3, x1^2 + x3^2, 2x1x2, 4x2x3
(1p) a) Determine an orthogonal basis of X.
(1p) b) Determine the projection of the vector v = (-2, 3, 1) onto X.
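The subspace X itself is not recoverable from this fragment, so the sketch below (an added illustration with a hypothetical spanning set x1, x2) only demonstrates the two requested computations: Gram-Schmidt orthogonalization and orthogonal projection.

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthogonalize a list of linearly independent vectors (classical Gram-Schmidt)."""
    basis = []
    for v in vectors:
        w = v - sum((v @ b) / (b @ b) * b for b in basis)
        basis.append(w)
    return basis

def project(v, basis):
    """Orthogonal projection of v onto span(basis); basis must be orthogonal."""
    return sum((v @ b) / (b @ b) * b for b in basis)

# hypothetical spanning set for X (the actual X is not given in this fragment)
x1 = np.array([1.0, 1.0, 0.0])
x2 = np.array([1.0, 0.0, 1.0])
b1, b2 = gram_schmidt([x1, x2])
assert abs(b1 @ b2) < 1e-12            # the computed basis is orthogonal

v = np.array([-2.0, 3.0, 1.0])
p = project(v, [b1, b2])
assert np.allclose(p @ (v - p), 0.0)   # residual is orthogonal to the projection
```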
… × 0,3 = 3 points)
… of the linear operator U: X → X.
… dim Y + dim (X ∩ Y); … dim Y …
(0,3p) 3) Let X, Y be linear subspaces of the space (R², R). Then:
(a) dim (X ∩ Y) = dim (X + Y) … dim Y; … dim X … 2 dim Y.
… 3 - dim ker U; … dim Im U … 3 - dim Im U; … x;
(c) ∀α ∈ R, U(2αx + x) = 2αx;
(d) ∀α ∈ R, U(αx + x) = αx;
(e) ∃α ∈ R such that U(2x) ≠ 2αx.
(0,3p) 6) Let U: R³ → R³ be a linear operator whose matrix in the canonical basis of (R³, R) is A. Then:
(a) if A = […], then U has the Jordan canonical form;
(b) if A = […], then U has the Jordan canonical form;
(c) if A = […], then U has the diagonal form;
(d) if A = […], then U has the Jordan canonical form;
(e) if A = […], then U is not diagonalizable.
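Options of this kind can be tested numerically: U is diagonalizable exactly when, for every eigenvalue λ, the geometric multiplicity n - rank(A - λI) equals the algebraic multiplicity. The sketch below is an added illustration (the matrix is a stand-in, not necessarily one from the exam): a triangular matrix with the single eigenvalue 2.

```python
import numpy as np

# stand-in (hypothetical) matrix: single eigenvalue 2, algebraic multiplicity 3
A = np.array([[2, 1, 0],
              [0, 2, 0],
              [0, 0, 2]])
lam, n = 2, 3

N = A - lam * np.eye(n)
geometric_mult = n - np.linalg.matrix_rank(N)
assert geometric_mult == 2   # two Jordan blocks for the eigenvalue 2
# geometric multiplicity < algebraic multiplicity, so this A is NOT diagonalizable
assert geometric_mult < 3
# number of Jordan blocks of size >= 2: rank(N) - rank(N^2) = 1
assert np.linalg.matrix_rank(N) - np.linalg.matrix_rank(N @ N) == 1
```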
From [10]:
From [17]:
From [30]:
finding the Jordan canonical form and the Jordan basis of an operator;
discussing the nature of a quadratic functional depending on a parameter;
studying a linear functional;
other topics.
Bibliography
[1] Allen, Roy George Douglas: "Mathematical Analysis for Economists", MacMillan and Co., London, 1938.
[2] Andreescu, Titu; Andrica, Dorin: "Complex Numbers from A to...Z", Birkhäuser, Boston, 2006.
[3] Anton, Howard; Rorres, Chris: "Elementary Linear Algebra – Applications version", Tenth edition, Wiley, 2010.
[4] Bădin, Luiza; Cărpușcă, Mihaela; Ciurea, Grigore; Șerban, Radu: "Algebră Liniară – Culegere de Probleme", Editura ASE, 1999.
[5] Bellman, Richard: Introducere în analiza matricială (translated from English), Editura Tehnică, București, 1969. (Original title: Introduction to Matrix Analysis, McGraw-Hill Book Company, Inc., 1960)
[6] Benz, Walter: "Classical Geometries in Modern Contexts – Geometry of Real Inner Product Spaces", Second Edition, Springer, 2007.
[7] Blair, Peter D.; Miller, Ronald E.: "Input–Output Analysis: Foundations and Extensions", Cambridge University Press, 2009.
[8] Blume, Lawrence; Simon, Carl P.: "Mathematics for Economists", W. W. Norton & Company Inc., 1994.
[9] Bourbaki, N.: Éléments de mathématique, Acta Sci. Ind., Hermann et Cie, Paris, 1953.
[10] Bușneag, Dumitru; Chirteș, Florentina; Piciu, Dana: "Probleme de Algebră Liniară", Craiova, 2002.
[11] Burlacu, V.; Cenușă, Gh.; Săcuiu, I.; Toma, M.: Curs de Matematici, Academia de Studii Economice, Facultatea de Planificare și Cibernetică Economică (internal-use reprint), București, 1982.
[12] Chiriță, Stan: "Probleme de Matematici superioare", Editura Didactică și Pedagogică, București, 1989.
[13] Chițescu, I.: Spații de funcții, Editura Științifică și Enciclopedică, București, 1983.
[14] Colojoară, I.: Analiză matematică, Editura Didactică și Pedagogică, București, 1983.
[15] Crăciun, V. C.: Exerciții și probleme de analiză matematică, Tipografia Universității București, 1984.
[16] Cristescu, R.: Analiză funcțională, Editura Științifică și Enciclopedică, București, 1983.
[17] Drăgușin, C.; Drăgușin, L.; Radu, C.: Aplicații de algebră, geometrie și matematici speciale, Editura Didactică și Pedagogică, București, 1991.
[18] Glazman, I. M.; Liubici, I. U.: Analiză liniară pe spații finit dimensionale, Editura Științifică și Enciclopedică, București, 1980.
[19] Golan, Jonathan S.: The Linear Algebra a Beginning Graduate Student Ought to Know, Third edition, Springer, 2012.
[20] Greene, William H.: "Econometric Analysis", sixth edition, Prentice Hall, 2003.
[21] Guerrien, B.: Algèbre linéaire pour économistes, Economica, Paris, 1991.
[22] Halanay, Aristide; Olaru, Valter Vasile; Turbatu, Stelian: "Analiză Matematică", Editura Didactică și Pedagogică, București, 1983.
[23] Holmes, Richard B.: Geometric Functional Analysis and its Applications.
[24] Ion D. Ion; Radu, N.: Algebră, Editura Didactică și Pedagogică, București, 1991.
[25] Kurosh, A.: Cours d'algèbre supérieure, Éditions MIR, Moscou, 1980.
[26] Leung, Kam-Tim: "Linear Algebra and Geometry", Hong Kong University Press, 1974.
[27] Ling, San; Xing, Chaoping: "Coding Theory – A First Course", Cambridge University Press, 2004.
[28] McFadden, Daniel: Economics 240B (Econometrics) course notes, Second Half, 2001 (class website, PDF).
[29] Monk, J. D.: Mathematical Logic, Springer-Verlag, 1976.
[30] Pavel, Matei: "Algebră liniară și Geometrie analitică – culegere de probleme", UTCB, 2007.
[31] Rădulescu, M.; Rădulescu, S.: Teoreme și probleme de Analiză Matematică, Editura Didactică și Pedagogică, București, 1982.
[32] Rockafellar, R. Tyrrell: Convex Analysis, Princeton University Press, Princeton, New Jersey, 1970.
[33] Roman, Steven: "Advanced Linear Algebra", Third Edition, Springer, 2008.
[34] Saporta, G.; Ștefănescu, M. V.: Analiza datelor și informatică – cu aplicații la studii de piață și sondaje de