
Mathematics for Econometrics,
Fourth Edition¹

Phoebus J. Dhrymes

May 2012

¹ © Phoebus J. Dhrymes, 2012. Preliminary material; not to be cited or disseminated without the author's permission. Comments and constructive suggestions are welcomed. Email: pjd1@columbia.edu
Contents

1 Vectors and Vector Spaces
  1.1 Complex Numbers and Vectors
    1.1.1 Polar Form of Complex Numbers
  1.2 Vectors
  1.3 Vector Spaces
    1.3.1 Basis of a Vector Space
  1.4 Subspaces of a Vector Space
Chapter 1

Vectors and Vector Spaces

In nearly all of the discussion in this volume, we deal with the set of real numbers. Occasionally,
however, we deal with complex numbers as well. In order to avoid cumbersome repetition, we shall
denote the set we are dealing with by F and let the context elucidate whether we are speaking of
real or complex numbers, or both.

1.1 Complex Numbers and Vectors


For the sake of completeness, we begin with a brief review of complex numbers, although it is
assumed that the reader is at least vaguely familiar with the subject.

A complex number, say z, is denoted by

z = x + iy,

where x and y are real numbers and the symbol i is defined by

i² = −1. (1.1)

All other properties of the entity denoted by i are derivable from the basic definition in Eq. (1.1).
For example,
i⁴ = (i²)(i²) = (−1)(−1) = 1.
Similarly,
i³ = (i²)(i) = (−1)i = −i,
and so on.
It is important for the reader to grasp, and bear in mind, that a complex number is describable
in terms of an ordered pair of real numbers.
Let
zj = xj + iyj , j = 1, 2,


be two complex numbers. We say


z1 = z2

if and only if
x1 = x2 and y1 = y2 .

Operations with complex numbers are as follows.


Addition:
z1 + z2 = (x1 + x2 ) + i(y1 + y2 ).

Multiplication by a real scalar:

cz1 = (cx1 ) + i(cy1 ).

Multiplication of two complex numbers:

z1 z2 = (x1 x2 − y1 y2 ) + i(x1 y2 + x2 y1 ).

Addition and multiplication are, evidently, associative and commutative; i.e. for complex zj,
j = 1, 2, 3,
z1 + z2 + z3 = (z1 + z2 ) + z3 and z1 z2 z3 = (z1 z2 )z3 ,

z1 + z2 = z2 + z1 and z1 z2 = z2 z1 .
and so on.
The conjugate of a complex number z is denoted by z̄ and is defined by

z̄ = x − iy.

Associated with each complex number is its modulus or length or absolute value, which is a
real number often denoted by |z| and defined by

|z| = (z z̄)^{1/2} = (x² + y²)^{1/2}.
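As a quick numerical check of these rules, the sketch below uses Python's built-in complex type (an external tool, not part of the text) to verify the multiplication and modulus formulas against the real-pair definitions above.

```python
# A numerical check of the multiplication and modulus formulas for
# complex numbers, using Python's built-in complex type.

def mult(z1, z2):
    """Multiply via the real-pair formula (x1 x2 - y1 y2) + i(x1 y2 + x2 y1)."""
    x1, y1 = z1.real, z1.imag
    x2, y2 = z2.real, z2.imag
    return complex(x1 * x2 - y1 * y2, x1 * y2 + x2 * y1)

def modulus(z):
    """|z| = (z z̄)^{1/2} = (x² + y²)^{1/2}."""
    return (z * z.conjugate()).real ** 0.5

z1, z2 = complex(3, 4), complex(1, -2)
assert mult(z1, z2) == z1 * z2          # agrees with the language's own product
assert abs(modulus(z1) - 5.0) < 1e-12   # |3 + 4i| = 5
```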

For the purpose of carrying out multiplication and division (an operation which we have not, as yet,
defined) of complex numbers, it is convenient to express them in polar form.

1.1.1 Polar Form of Complex Numbers


Let z1, a complex number, be represented in Figure 1.1 by the point (x1, y1), its coordinates.
It is easily verified that the length of the line from the origin to the point (x1, y1) represents the
modulus of z1, which for convenience we denote by r1. Let the angle described by this line and
the abscissa be denoted by θ1. As is well known from elementary trigonometry, we have
cos θ1 = x1/r1,  sin θ1 = y1/r1. (1.2)

Figure 1.1: [figure: the point (x1, y1) in the plane, with modulus r1 and argument θ1]

We may write the complex number as

z1 = x1 + iy1 = r1 cos θ1 + ir1 sin θ1 = r1 (cos θ1 + i sin θ1 ).

Further, we may define the quantity

e^{iθ1} = cos θ1 + i sin θ1, (1.3)

and thus write the complex number in the standard polar form

z1 = r1 e^{iθ1}. (1.4)

In the representation above, r1 is the modulus and θ1 the argument of the complex number
z1. It may be shown that the quantity e^{iθ1} as defined in Eq. (1.3) has all the properties of real
exponentials insofar as the operations of multiplication and division are concerned. If we confine the
argument of a complex number to the range [0, 2π), we have a unique correspondence between
the (x, y) coordinates of a complex number and the modulus and argument needed to specify its
polar form. Thus, for any complex number z, the representations

z = x + iy,  z = r e^{iθ},

where
r = (x² + y²)^{1/2},  cos θ = x/r,  sin θ = y/r,
are completely equivalent.
In polar form, multiplication and division of complex numbers are extremely simple operations.
Thus,

z1 z2 = (r1 r2) e^{i(θ1+θ2)},

z1/z2 = (r1/r2) e^{i(θ1−θ2)},

provided z2 ≠ 0.
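The polar-form rules for multiplication and division can be checked numerically. The sketch below assumes Python's cmath module (not part of the text): cmath.polar returns the pair (r, θ) and cmath.rect rebuilds x + iy from it.

```python
# Multiplication and division in polar form, r e^{iθ}, checked with cmath.
import cmath

z1, z2 = complex(1, 1), complex(0, 2)
r1, t1 = cmath.polar(z1)   # modulus and argument of z1
r2, t2 = cmath.polar(z2)

prod = cmath.rect(r1 * r2, t1 + t2)   # z1 z2 = (r1 r2) e^{i(θ1+θ2)}
quot = cmath.rect(r1 / r2, t1 - t2)   # z1/z2 = (r1/r2) e^{i(θ1−θ2)}

assert abs(prod - z1 * z2) < 1e-12
assert abs(quot - z1 / z2) < 1e-12
```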
We may extend our discussion to complex vectors, i.e. ordered n -tuples of complex numbers.
Thus
z = x + iy

is a complex vector, where x and y are n -element (real) vectors (a concept to be defined imme-
diately below). As in the scalar case, two complex vectors z1 , z2 are equal if and only if

x1 = x2,  y1 = y2,

where now xi , yi , i = 1, 2, are n -element (column) vectors. The complex conjugate of the vector
z is given by
z̄ = x − iy,

and the modulus of the complex vector is defined by

(z′z̄)^{1/2} = [(x + iy)′(x − iy)]^{1/2} = (x′x + y′y)^{1/2},

the quantities x′x, y′y being ordinary scalar products of two vectors. Addition and multiplication
of complex vectors are defined by

z1 + z2 = (x1 + x2) + i(y1 + y2),
z1′ z2 = (x1′ x2 − y1′ y2) + i(y1′ x2 + x1′ y2),
z1 z2′ = (x1 x2′ − y1 y2′) + i(y1 x2′ + x1 y2′),

where xi, yi, i = 1, 2, are real n-element column vectors. The notation, for example, x1′ or y2′
means that the vectors are written in row form, rather than the customary column form. Thus,
x1 x2′ is a matrix, while x1′ x2 is a scalar. These concepts (vector, matrix) will be elucidated below.
It is somewhat awkward to introduce them now; still, it is best to set forth at the beginning what
we need regarding complex numbers.
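The vector formulas above can be verified numerically. The following sketch assumes NumPy (an external tool, not part of the text), with the 1-D array product x @ y standing in for the inner product x′y.

```python
# Numerical check of the complex-vector modulus and inner-product formulas.
import numpy as np

x1, y1 = np.array([1.0, 2.0]), np.array([0.0, 1.0])
x2, y2 = np.array([3.0, -1.0]), np.array([2.0, 2.0])
z1, z2 = x1 + 1j * y1, x2 + 1j * y2

# modulus: (z' z̄)^{1/2} = (x'x + y'y)^{1/2}
mod = np.sqrt(z1 @ np.conj(z1)).real
assert abs(mod - np.sqrt(x1 @ x1 + y1 @ y1)) < 1e-12

# inner product: z1' z2 = (x1'x2 - y1'y2) + i(y1'x2 + x1'y2)
lhs = z1 @ z2
rhs = (x1 @ x2 - y1 @ y2) + 1j * (y1 @ x2 + x1 @ y2)
assert abs(lhs - rhs) < 1e-12
```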

1.2 Vectors
Definition 1.1. Let¹ ai ∈ F, i = 1, 2, . . . , n; then the ordered n-tuple

a = (a1, a2, . . . , an)′

(written as a column) is said to be an n-dimensional vector. If F is the field of real numbers, it is termed an n-dimensional real vector.
¹ The symbol F is, in this discussion, a primitive and simply denotes the collection of objects we are dealing with.

Remark 1.1. Notice that a scalar is a trivial case of a vector whose dimension is n = 1 .
Customarily we write vectors as columns, so strictly speaking we should use the term column
vectors. But this is cumbersome and will not be used unless required for clarity.

If the elements of a vector, ai , i = 1, 2, . . . , n, belong to F, we denote this by writing

a ∈ F.

Definition 1.2. If a ∈ F is an n-dimensional column vector, its transpose is the n-dimensional row vector denoted by

a′ = (a1, a2, a3, . . . , an).

If a, b are two n-dimensional vectors and a, b ∈ F, we define their sum by

a + b = (a1 + b1, a2 + b2, . . . , an + bn)′.
If c is a scalar and c ∈ F, we define

ca = (ca1, ca2, . . . , can)′.
If a, b are two n-dimensional vectors with elements in F, their inner product (which is a scalar) is defined by

a′b = a1 b1 + a2 b2 + · · · + an bn.

The inner product of two vectors is also called their scalar product; the square root of the inner product of a vector with itself, (a′a)^{1/2}, is often referred to as the length or the modulus of the vector.

Definition 1.3. If a, b ∈ F are n-dimensional column vectors, they are said to be orthogonal if
and only if a′b = 0. If, in addition, a′a = b′b = 1, they are said to be orthonormal.
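Definition 1.3 can be illustrated with a small numerical sketch (assuming NumPy, an external tool).

```python
# Orthogonality and orthonormality of two vectors in R^3.
import numpy as np

a = np.array([1.0, 0.0, 1.0]) / np.sqrt(2.0)
b = np.array([1.0, 0.0, -1.0]) / np.sqrt(2.0)

assert abs(a @ b) < 1e-12        # a'b = 0: orthogonal
assert abs(a @ a - 1.0) < 1e-12  # a'a = 1
assert abs(b @ b - 1.0) < 1e-12  # b'b = 1: hence orthonormal
```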

Definition 1.4. Let a(i) , i = 1, 2, . . . , k, be n -dimensional vectors whose elements belong to F.


Let ci , i = 1, 2, . . . , k, be scalars such that ci ∈ F. If
c1 a(1) + c2 a(2) + · · · + ck a(k) = 0

implies that
ci = 0, i = 1, 2, . . . , k,
the vectors {a(i) : i = 1, 2, . . . , k} are said to be linearly independent or to constitute a lin-
early independent set. If there exist scalars ci , i = 1, 2, . . . , k, not all of which are zero, such

that c1 a(1) + c2 a(2) + · · · + ck a(k) = 0, the vectors {a(i) : i = 1, 2, . . . , k} are said to be linearly dependent or to
constitute a linearly dependent set.

Remark 1.2. Notice that if a set of vectors is linearly dependent, one or more of them can be expressed as a linear combination of the remaining vectors. If the set is linearly independent, this is not possible.

Remark 1.3. Notice, further, that if a set of n-dimensional (nonnull) vectors a(i) ∈ F, i =
1, 2, . . . , k, are mutually orthogonal, i.e. for any i ≠ j, a′(i) a(j) = 0, then they are linearly
independent. The proof is quite straightforward. Suppose not; then there exist constants ci ∈ F,
not all of which are zero, such that

0 = c1 a(1) + c2 a(2) + · · · + ck a(k).

Premultiplying sequentially by a′(s) and using orthogonality, we obtain

0 = cs a′(s) a(s),  s = 1, 2, . . . , k.

Since a′(s) a(s) > 0 for all s, this forces cs = 0 for every s, a contradiction.
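Remark 1.3 can be confirmed numerically. The sketch below (assuming NumPy; matrix rank is used as a numerical stand-in test for linear independence) checks that three mutually orthogonal nonnull vectors have full rank.

```python
# Mutually orthogonal nonnull vectors are linearly independent:
# verify pairwise orthogonality, then check full rank.
import numpy as np

a1 = np.array([1.0, 1.0, 0.0])
a2 = np.array([1.0, -1.0, 0.0])
a3 = np.array([0.0, 0.0, 2.0])
vecs = [a1, a2, a3]

# pairwise orthogonality: a'(i) a(j) = 0 for i != j
for i in range(3):
    for j in range(i + 1, 3):
        assert abs(vecs[i] @ vecs[j]) < 1e-12

# full column rank <=> linear independence
assert np.linalg.matrix_rank(np.column_stack(vecs)) == 3
```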

1.3 Vector Spaces


First we give a formal definition and then apply it to the preceding discussion.

Definition 1.5. A nonempty collection of elements V is said to be a linear space (or a vector space,
or a linear vector space) over the set (of real or complex numbers) F if and only if there exist two
functions, +, called vector addition, and ·, called scalar multiplication, such that the following
conditions hold for all x, y, z ∈ V and c, d ∈ F:

i. x + y = y + x, x + y ∈ V;

ii. (x + y) + z = x + (y + z);

iii. there exists a unique zero element in V denoted by 0, and termed the zero vector, such that
for all x ∈ V,
x + 0 = x;

iv. scalar multiplication is distributive over vector addition, i.e. for all x, y ∈ V and c, d ∈ F,

c · (x + y) = c · x + c · y, (c + d) · x = c · x + d · x, and c · x ∈ V;

v. scalar multiplication is associative, i.e. for all c, d ∈ F and x ∈ V,

(cd) · x = c · (d · x);

vi. for the zero and unit elements of F, we have, for all x ∈ V,

0 · x = 0 (the zero vector of iii), 1 · x = x.

The elements of V are often referred to as vectors.

Remark 1.4. The notation ·, indicating scalar multiplication, is often suppressed, and one simply
writes c(x + y) = cx + cy, the context making clear that c is a scalar and x, y are vectors.

Example 1.1. Let V be the collection of ordered n -tuplets with elements in F considered above.
The reader may readily verify that over the set F such n -tuplets satisfy conditions i through
vi of Definition 1.5. Hence, they constitute a linear vector space. If F = R, where R is the
collection of real numbers, the resulting n -dimensional vector space is denoted by Rn . Thus, if
a = (a1, a2, a3, . . . , an)′,

we may use the notation a ∈ Rn to denote the fact that a is an element of the n-dimensional
Euclidean (vector) space. The concept, however, is much wider than is indicated by this simple
representation.

1.3.1 Basis of a Vector Space


Definition 1.6. (Span of a vector space) Let Vn denote a generic n -dimensional vector space over
F, and suppose
a(i) ∈ Vn , i = 1, 2, . . . , m, m ≥ n.

If any vector in Vn , say b, can be written as


b = c1 a(1) + c2 a(2) + · · · + cm a(m),  ci ∈ F,

we say that the set {a(i) : i = 1, 2, . . . , m} spans the vector space Vn.

Definition 1.7. A basis for a vector space Vn is a minimal spanning set, i.e. a linearly independent set of vectors that spans Vn.

Example 1.2. For the vector space Vn = Rn above, it is evident that the set

{e·i : i = 1, 2, . . . , n}

forms a basis, where e·i is an n -dimensional (column) vector all of whose elements are zero save
the i th, which is unity. Such vectors are typically called unit vectors. Notice further that this
is an orthonormal set in the sense that such vectors are mutually orthogonal and their length is
unity.
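Example 1.2 can be sketched numerically (assuming NumPy, an external tool; np.eye supplies the unit vectors as columns of the identity matrix).

```python
# The unit vectors e_i of Example 1.2: any b in R^n is b = sum_i b_i e_i,
# and the e_i form an orthonormal set.
import numpy as np

n = 4
E = np.eye(n)                       # column i is the unit vector e_i
b = np.array([2.0, -1.0, 0.5, 3.0])

# representation in this basis: the coefficients are just the entries of b
assert np.allclose(sum(b[i] * E[:, i] for i in range(n)), b)

# orthonormality: E'E = I
assert np.allclose(E.T @ E, np.eye(n))
```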

Remark 1.5. It is clear that if Vn is a vector space and

A = {a(i) : a(i) ∈ Vn, i = 1, 2, . . . , m, m ≥ n}

is a subset that spans Vn, then there exists a subset of A that forms a basis for Vn. Moreover,
if {a(i) : i = 1, 2, . . . , k}, k < m, is a linearly independent subset of A, we can choose a basis that
contains it. To see this, note that if A is itself linearly independent it is a basis, and we have the
result. If it is not, we eliminate, one at a time, vectors (other than the k given ones) that can be
expressed as linear combinations of the remaining vectors; this preserves the span. The surviving
subset is linearly independent, spans Vn, and contains the given vectors, so it is the desired basis.
A basis is not unique, but all bases for a given vector space contain the same number of vectors.
This number is called the dimension of the vector space Vn and is denoted by

dim(Vn ).

Suppose dim(Vn) = n. Then it may be shown that any n + i vectors in Vn, i ≥ 1, are linearly
dependent, and that no set containing fewer than n vectors can span Vn.
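The basis-extraction procedure of Remark 1.5 can be sketched as follows (assuming NumPy; a rank comparison serves as a numerical stand-in for the "can be expressed as a linear combination" test).

```python
# Extract a basis from a spanning set by keeping each vector only if it is
# independent of those already kept.
import numpy as np

def extract_basis(vectors):
    kept = []
    for v in vectors:
        candidate = kept + [v]
        # full rank <=> the candidate set is linearly independent
        if np.linalg.matrix_rank(np.column_stack(candidate)) == len(candidate):
            kept = candidate
    return kept

spanning = [np.array([1.0, 0.0]),
            np.array([2.0, 0.0]),     # dependent on the first: dropped
            np.array([0.0, 1.0])]
basis = extract_basis(spanning)
assert len(basis) == 2                # dim(R^2) = 2
```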

1.4 Subspaces of a Vector Space


Let Vn be a vector space and Pn a subset of Vn in the sense that b ∈ Pn implies that b ∈ Vn .
If Pn is also a vector space, then it is said to be a subspace of Vn , and all discussion regarding
spanning, basis sets, and dimension applies to Pn as well.
Finally, notice that if {a(i) : i = 1, 2, . . . , m} is a basis for a vector space Vn , every vector in Vn ,
say b, is uniquely expressible in terms of this basis. Thus, suppose we have two representations, say
b = b1^{(1)} a(1) + · · · + bm^{(1)} a(m) = b1^{(2)} a(1) + · · · + bm^{(2)} a(m),

where bi^{(1)}, bi^{(2)}, i = 1, 2, . . . , m, are appropriate sets of scalars. This implies

0 = (b1^{(1)} − b1^{(2)}) a(1) + · · · + (bm^{(1)} − bm^{(2)}) a(m).

But a basis is a linearly independent set; hence, we conclude

bi^{(1)} = bi^{(2)},  i = 1, 2, . . . , m,

which shows uniqueness of representation. Of course, for any set of vectors to serve as a basis for
Vn we must have m = n .
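Uniqueness of the representation can be illustrated numerically: with the basis vectors stacked as columns of a matrix, the coefficients of b are the unique solution of a linear system (a sketch assuming NumPy's np.linalg.solve).

```python
# Coefficients of b in a basis are the unique solution of a linear system.
import numpy as np

basis = np.column_stack([np.array([1.0, 1.0]),
                         np.array([1.0, -1.0])])   # a basis for R^2
b = np.array([3.0, 1.0])

coeffs = np.linalg.solve(basis, b)   # unique, since the basis is independent
assert np.allclose(basis @ coeffs, b)
assert np.allclose(coeffs, [2.0, 1.0])   # b = 2*(1,1)' + 1*(1,-1)'
```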

Example 1.3. In the next chapter, we introduce matrices more formally. For the moment, let us
deal with the rectangular array

A = | a11  a12 |
    | a21  a22 |,

with elements aij ∈ F, which we shall call a matrix. If we agree to look upon this matrix as the vector²

a = (a11, a21, a12, a22)′,
we may consider the matrix A to be an element of the vector space R4 , for the case where F = R.
Evidently, the collection of unit vectors
e·1 = (1, 0, 0, 0)′,  e·2 = (0, 1, 0, 0)′,  e·3 = (0, 0, 1, 0)′,  e·4 = (0, 0, 0, 1)′

is a basis for this space because for arbitrary aij we can always write

a = a11 e·1 + a21 e·2 + a12 e·3 + a22 e·4 ,

which is equivalent to the display of A above.


Now, what if we were to specify that, in the matrix above, we must always have a12 = a21 ? Any
such matrix is still representable by the 4-dimensional vector a, except that now the elements of a
have to satisfy the condition a12 = a21 , i.e. the second and third elements must be the same.
Thus, a satisfies a ∈ R4 , with the additional restriction that its third and second elements are
identical, and it is clear that this must be a subset of R4. Is this subset a subspace? Clearly, if a, b
satisfy the condition that their second and third elements are the same, the same is true of a + b
as well as c · a, for any c ∈ R; hence the subset is indeed a subspace.
What is the basis of this subspace? A little reflection will show that it is
e·1 = (1, 0, 0, 0)′,  e·4 = (0, 0, 0, 1)′,  e∗ = (0, 1, 1, 0)′.

These three vectors are mutually orthogonal, but not orthonormal; moreover, if A is the special
matrix

A = | a11  α  |
    | α   a22 |,

the corresponding vector is a = (a11, α, α, a22)′ and we have the unique representation

a = a11 e·1 + αe∗ + a22 e·4 .

Because the basis for this vector space has three elements, the dimension of the space is three. Thus,
these special matrices constitute a 3-dimensional subspace of R4 .
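Example 1.3 can be checked numerically; the sketch below (assuming NumPy) vectorizes a symmetric 2×2 matrix column by column and recovers the three-term representation in the basis e·1, e∗, e·4.

```python
# Vectorize a symmetric 2x2 matrix and recover a = a11*e1 + α*e* + a22*e4.
import numpy as np

a11, alpha, a22 = 2.0, -3.0, 5.0
A = np.array([[a11, alpha],
              [alpha, a22]])
a = A.flatten(order="F")      # column-major vectorization: (a11, α, α, a22)'

e1 = np.array([1.0, 0.0, 0.0, 0.0])
e4 = np.array([0.0, 0.0, 0.0, 1.0])
e_star = np.array([0.0, 1.0, 1.0, 0.0])

assert np.allclose(a, a11 * e1 + alpha * e_star + a22 * e4)
# the basis vectors are mutually orthogonal, but e* has squared length 2,
# so the set is not orthonormal
assert abs(e_star @ e_star - 2.0) < 1e-12
```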

² This is an instance of the vectorization of a matrix, a topic we shall discuss at length in a later chapter.

PROBLEMS

1. Let z be a complex number in standard form, i.e. z = x + iy; find its inverse in standard form, i.e.
1/z = x∗ + iy∗, and verify that this is exactly what you would obtain using the polar form representation.

2. Let zs = xs + iys, s = 1, 2, 3. Express z3 in standard form, where z3 = z1/z2, and verify that this
is exactly what you would obtain using the polar form representation.

3. Let

   A = | a11  a12 |      B = | b11  b12 |
       | a21  a22 |,         | b21  b22 |.

   Show that there exists a matrix

   C = | c11  c12 |
       | c21  c22 |

   such that B = CA.
