Fourth Edition

Phoebus J. Dhrymes

May 2012

© Phoebus J. Dhrymes, 2012. Preliminary material; not to be cited or disseminated without the author's permission. Comments and constructive suggestions are welcomed. email: pjd1@columbia.edu
Chapter 1

Vectors and Vector Spaces

1.1 Complex Numbers
In nearly all of the discussion in this volume, we deal with the set of real numbers. Occasionally,
however, we deal with complex numbers as well. In order to avoid cumbersome repetition, we shall
denote the set we are dealing with by F and let the context elucidate whether we are speaking of
real or complex numbers, or both.
A complex number z is written as

z = x + iy,

where x and y are real numbers and the imaginary unit i is defined by

i^2 = −1.    (1.1)
All other properties of the entity denoted by i are derivable from the basic definition in Eq. (1.1).
For example,
i^4 = (i^2)(i^2) = (−1)(−1) = 1.
Similarly,
i^3 = (i^2)(i) = (−1)i = −i,
and so on.
It is important for the reader to grasp, and bear in mind, that a complex number is describable
in terms of an ordered pair of real numbers.
Let

zj = xj + iyj,    j = 1, 2.

Then z1 = z2 if and only if

x1 = x2 and y1 = y2.

Addition and multiplication of complex numbers are defined by

z1 + z2 = (x1 + x2) + i(y1 + y2),
z1 z2 = (x1 x2 − y1 y2) + i(x1 y2 + x2 y1).
Addition and multiplication are, evidently, associative and commutative; i.e., for complex zj, j = 1, 2, 3,

z1 + z2 + z3 = (z1 + z2) + z3 and z1 z2 z3 = (z1 z2)z3,
z1 + z2 = z2 + z1 and z1 z2 = z2 z1,

and so on.
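As a quick numerical sanity check on the coordinate formulas above (an illustration we add here, not part of the original text), one can compare them against a language's built-in complex arithmetic; the helper name `cmul` is our own:

```python
def cmul(z1, z2):
    """Multiply two complex numbers given as ordered pairs (x, y),
    using z1*z2 = (x1*x2 - y1*y2) + i*(x1*y2 + x2*y1)."""
    x1, y1 = z1
    x2, y2 = z2
    return (x1 * x2 - y1 * y2, x1 * y2 + x2 * y1)

# Compare with Python's native complex type: (3+4i)(1-2i) = 11 - 2i.
z1, z2 = (3.0, 4.0), (1.0, -2.0)
w = complex(*z1) * complex(*z2)
assert cmul(z1, z2) == (w.real, w.imag)
```

The point of the check is that a complex number really is just an ordered pair of reals together with these two composition rules.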
The conjugate of a complex number z is denoted by z̄ and is defined by
z̄ = x − iy.
Associated with each complex number is its modulus or length or absolute value, which is a real number often denoted by |z| and defined by

|z| = (x^2 + y^2)^{1/2}.
For the purpose of carrying out multiplication and division (an operation which we have not, as yet,
defined) of complex numbers, it is convenient to express them in polar form.
Figure 1.1: [the complex number z = x + iy represented in the plane, with modulus r and argument θ; figure not reproduced]
Referring to Figure 1.1, we may put

x1 = r1 cos θ1,    y1 = r1 sin θ1,    (1.2)

define

e^{iθ1} = cos θ1 + i sin θ1,    (1.3)

and thus write the complex number in the standard polar form

z1 = r1 e^{iθ1}.    (1.4)
In the representation above, r1 is the modulus and θ1 the argument of the complex number
z1. It may be shown that the quantity e^{iθ1}, as defined in Eq. (1.3), has all the properties of real
exponentials insofar as the operations of multiplication and division are concerned. If we confine the
argument of a complex number to the range [0, 2π), we have a unique correspondence between
the (x, y) coordinates of a complex number and the modulus and argument needed to specify its
polar form. Thus, for any complex number z, the representations
z = x + iy,    z = re^{iθ},
where
r = (x^2 + y^2)^{1/2},    cos θ = x/r,    sin θ = y/r,
are completely equivalent.
In polar form, multiplication and division of complex numbers are extremely simple operations. Thus, for z1 = r1 e^{iθ1} and z2 = r2 e^{iθ2},

z1 z2 = r1 r2 e^{i(θ1 + θ2)},    z1/z2 = (r1/r2) e^{i(θ1 − θ2)},

provided z2 ≠ 0.
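These polar-form rules are easy to verify numerically. The following sketch (an illustration we add, using Python's standard `cmath` module) converts two complex numbers to polar form and checks that moduli multiply and arguments add under multiplication, and divide and subtract under division:

```python
import cmath

z1, z2 = 3 + 4j, 1 - 1j

# Convert to polar form: r is the modulus, theta the argument.
r1, th1 = cmath.polar(z1)
r2, th2 = cmath.polar(z2)

# Multiplication: moduli multiply, arguments add.
prod = (r1 * r2) * cmath.exp(1j * (th1 + th2))
assert cmath.isclose(prod, z1 * z2)

# Division (z2 != 0): moduli divide, arguments subtract.
quot = (r1 / r2) * cmath.exp(1j * (th1 - th2))
assert cmath.isclose(quot, z1 / z2)
```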
We may extend our discussion to complex vectors, i.e. ordered n -tuples of complex numbers.
Thus
z = x + iy
is a complex vector, where x and y are n-element (real) vectors (a concept to be defined immediately below). As in the scalar case, two complex vectors z1, z2 are equal if and only if
x1 = x2,    y1 = y2,
where now xi, yi, i = 1, 2, are n-element (column) vectors. The complex conjugate of the vector z is given by

z̄ = x − iy,

and its modulus is given by

|z| = (z̄′z)^{1/2} = (x′x + y′y)^{1/2},

the quantities x′x, y′y being ordinary scalar products of two vectors. Addition and multiplication
of complex vectors are defined by
z1 + z2 = (x1 + x2) + i(y1 + y2),
z1′z2 = (x1′x2 − y1′y2) + i(y1′x2 + x1′y2),
z1 z2′ = (x1 x2′ − y1 y2′) + i(y1 x2′ + x1 y2′),
where xi, yi, i = 1, 2, are real n-element column vectors. The notation x1′ or y2′, for example, means that the vector is written in row form, rather than the customary column form. Thus, x1 x2′ is a matrix, while x1′x2 is a scalar. These concepts (vector, matrix) will be elucidated below.
It is somewhat awkward to introduce them now; still, it is best to set forth at the beginning what
we need regarding complex numbers.
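The vector formulas above can likewise be checked numerically. In the sketch below (our illustration, assuming NumPy is available) note that the prime denotes a plain transpose, with no conjugation, so we deliberately compute the inner product without conjugating:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
x1, y1, x2, y2 = (rng.standard_normal(n) for _ in range(4))
z1, z2 = x1 + 1j * y1, x2 + 1j * y2

# Inner product z1'z2 (plain transpose, no conjugation):
inner = z1 @ z2
expected = (x1 @ x2 - y1 @ y2) + 1j * (y1 @ x2 + x1 @ y2)
assert np.isclose(inner, expected)

# Outer product z1 z2' is an n x n matrix, not a scalar:
outer = np.outer(z1, z2)
assert outer.shape == (n, n)
```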
1.2 Vectors
Definition 1.1. Let ai ∈ F, i = 1, 2, . . . , n; then the ordered n-tuple

a = (a1, a2, . . . , an)′

is said to be an n-dimensional vector.
Remark 1.1. Notice that a scalar is a trivial case of a vector whose dimension is n = 1 .
Customarily we write vectors as columns, so strictly speaking we should use the term column
vectors. But this is cumbersome and will not be used unless required for clarity.
If a is a scalar, we simply write a ∈ F.
Definition 1.3. If a, b ∈ F are n-dimensional column vectors, they are said to be orthogonal if and only if a′b = 0. If, in addition, a′a = b′b = 1, they are said to be orthonormal.
Definition 1.4. If, for scalars ci ∈ F, the relation

Σ_{i=1}^{k} ci a(i) = 0

implies that

ci = 0, i = 1, 2, . . . , k,

the vectors {a(i) : i = 1, 2, . . . , k} are said to be linearly independent or to constitute a linearly independent set. If there exist scalars ci, i = 1, 2, . . . , k, not all of which are zero, such that Σ_{i=1}^{k} ci a(i) = 0, the vectors {a(i) : i = 1, 2, . . . , k} are said to be linearly dependent or to constitute a linearly dependent set.
Remark 1.2. Notice that if a set of vectors is linearly dependent, one or more of them can be expressed as a linear combination of the remaining vectors. On the other hand, if the set is linearly independent, this is not possible.
Remark 1.3. Notice, further, that if a set of n-dimensional (nonnull) vectors a(i) ∈ F, i = 1, 2, . . . , k, are mutually orthogonal, i.e. a(i)′a(j) = 0 for any i ≠ j, then they are linearly independent. The proof of this is quite straightforward. Suppose not; then there exist constants ci ∈ F, not all of which are zero, such that

0 = Σ_{i=1}^{k} ci a(i).

Premultiplying by a(j)′ gives 0 = cj a(j)′a(j); since a(j) is nonnull, a(j)′a(j) > 0, whence cj = 0 for every j, a contradiction.
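The claim of Remark 1.3 can be illustrated numerically (a sketch we add, assuming NumPy): stack mutually orthogonal nonnull vectors as the columns of a matrix, confirm orthogonality through the Gram matrix, and confirm independence through the rank:

```python
import numpy as np

# Three mutually orthogonal (nonnull) vectors in R^4.
a1 = np.array([1.0, 1.0, 0.0, 0.0])
a2 = np.array([1.0, -1.0, 0.0, 0.0])
a3 = np.array([0.0, 0.0, 2.0, 0.0])

A = np.column_stack([a1, a2, a3])

# Mutual orthogonality: off-diagonal entries of A'A vanish.
gram = A.T @ A
assert np.allclose(gram - np.diag(np.diag(gram)), 0)

# Linear independence: the rank equals the number of vectors.
assert np.linalg.matrix_rank(A) == 3
```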
1.3 Vector Spaces

Definition 1.5. A nonempty collection of elements V is said to be a linear space (or a vector space,
or a linear vector space) over the set (of real or complex numbers) F if and only if there exist two
functions, +, called vector addition, and ·, called scalar multiplication, such that the following
conditions hold for all x, y, z ∈ V and c, d ∈ F:
i. x + y = y + x, x + y ∈ V;
ii. (x + y) + z = x + (y + z);
iii. there exists a unique zero element in V denoted by 0, and termed the zero vector, such that
for all x ∈ V,
x + 0 = x;
iv. scalar multiplication is distributive over vector addition, i.e. for all x, y ∈ V and c, d ∈ F,

c · (x + y) = c · x + c · y, (c + d) · x = c · x + d · x, and c · x ∈ V;

v. scalar multiplication is associative, i.e. for all x ∈ V and c, d ∈ F,

(cd) · x = c · (d · x);

vi. for the zero and unit elements of F, we have, for all x ∈ V,

0 · x = 0 and 1 · x = x.
Remark 1.4. The notation ·, indicating scalar multiplication, is often suppressed, and one simply
writes c(x + y) = cx + cy, the context making clear that c is a scalar and x, y are vectors.
Example 1.1. Let V be the collection of ordered n -tuplets with elements in F considered above.
The reader may readily verify that over the set F such n -tuplets satisfy conditions i through
vi of Definition 1.5. Hence, they constitute a linear vector space. If F = R, where R is the
collection of real numbers, the resulting n -dimensional vector space is denoted by Rn . Thus, if
a = (a1, a2, a3, . . . , an)′,
we may use the notation a ∈ Rn , to denote the fact that a is an element of the n -dimensional
Euclidean (vector) space. The concept, however, is much wider than is indicated by this simple
representation.
Definition 1.7. A basis for a vector space Vn is a minimal spanning set, i.e. a minimal set of linearly independent vectors that span Vn.
Example 1.2. For the vector space Vn = Rn above, it is evident that the set
{e·i : i = 1, 2, . . . , n}
forms a basis, where e·i is an n -dimensional (column) vector all of whose elements are zero save
the i th, which is unity. Such vectors are typically called unit vectors. Notice further that this
is an orthonormal set in the sense that such vectors are mutually orthogonal and their length is
unity.
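A small numerical illustration of this basis (ours, assuming NumPy): the columns of the identity matrix are the unit vectors, any vector is trivially their linear combination, and the set is orthonormal:

```python
import numpy as np

n = 4
E = np.eye(n)          # columns are the unit vectors e.1, ..., e.n
a = np.array([3.0, -1.0, 0.5, 2.0])

# Any a in R^n is the combination sum_i a_i e.i, so {e.i} spans R^n.
assert np.allclose(E @ a, a)

# Orthonormality: e.i'e.j = 1 if i == j, and 0 otherwise.
assert np.allclose(E.T @ E, np.eye(n))
```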
It may further be shown that if

A = {a(i) : a(i) ∈ Vn, i = 1, 2, . . . , m, m ≥ n}

is a subset that spans Vn, then there exists a subset of A that forms a basis for Vn. Moreover,
if {a(i) : i = 1, 2, . . . , k, k < m} is a linearly independent subset of A we can choose a basis that
contains it. This is done by noting that, since A spans Vn, if A is also linearly independent it is a basis and we have the result. If it is not, we simply eliminate, one at a time, vectors that can be expressed as linear combinations of the remaining ones, taking care not to eliminate any member of the given linearly independent subset. The subset that remains spans Vn and is linearly independent; hence it is a basis, and it contains the given vectors.
A basis is not unique, but all bases for a given vector space contain the same number of vectors.
This number is called the dimension of the vector space Vn and is denoted by
dim(Vn ).
Suppose dim(Vn) = n. Then it may be shown that any n + i vectors in Vn, i ≥ 1, are linearly dependent, and that no set containing fewer than n vectors can span Vn.
The representation of a vector in terms of a basis is unique. For suppose that, relative to the basis {a(i) : i = 1, 2, . . . , m}, a vector b has the two representations

b = Σ_{i=1}^{m} bi^{(1)} a(i) = Σ_{i=1}^{m} bi^{(2)} a(i),

where bi^{(1)}, bi^{(2)}, i = 1, 2, . . . , m, are appropriate sets of scalars. This implies

0 = Σ_{i=1}^{m} (bi^{(1)} − bi^{(2)}) a(i).

Since the a(i) are linearly independent, bi^{(1)} = bi^{(2)} for all i, which shows uniqueness of representation. Of course, for any set of vectors to serve as a basis for Vn we must have m = n.
1.4 Subspaces of a Vector Space

Example 1.3. In the next chapter, we introduce matrices more formally. For the moment, let us
deal with the rectangular array
A = [ a11  a12
      a21  a22 ],
with elements aij ∈ F, which we shall call a matrix. If we agree to look upon this matrix as the
vector^2

a = (a11, a21, a12, a22)′,
we may consider the matrix A to be an element of the vector space R4 , for the case where F = R.
Evidently, the collection of unit vectors
e·1 = (1, 0, 0, 0)′, e·2 = (0, 1, 0, 0)′, e·3 = (0, 0, 1, 0)′, e·4 = (0, 0, 0, 1)′
is a basis for this space because for arbitrary aij we can always write

a = a11 e·1 + a21 e·2 + a12 e·3 + a22 e·4.

Consider now the vectors

b(1) = (1, 0, 0, 0)′, b(2) = (0, 1, 1, 0)′, b(3) = (0, 0, 0, 1)′.

These three vectors are mutually orthogonal, but not orthonormal; moreover, if A is the special (symmetric) matrix

A = [ a11  α
      α    a22 ],

the corresponding vector is a = (a11, α, α, a22)′ and we have the unique representation

a = a11 b(1) + α b(2) + a22 b(3).

Because the basis for this vector space has three elements, the dimension of the space is three. Thus, these special matrices constitute a 3-dimensional subspace of R4.
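The vectorization and the dimension count in Example 1.3 can be illustrated as follows (a sketch we add, assuming NumPy; a column-major reshape plays the role of stacking the columns of the matrix):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# Column-major (column-stacking) vectorization, as in the text:
# (a11, a21, a12, a22)'.
a = A.reshape(-1, order="F")
assert np.allclose(a, [1.0, 3.0, 2.0, 4.0])

# Symmetric 2x2 matrices vectorize to (a11, alpha, alpha, a22)';
# they are spanned by three independent vectors, so the subspace
# has dimension 3 inside R^4.
basis = np.array([[1.0, 0.0, 0.0, 0.0],
                  [0.0, 1.0, 1.0, 0.0],
                  [0.0, 0.0, 0.0, 1.0]]).T
assert np.linalg.matrix_rank(basis) == 3
```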
^2 This is an instance of the vectorization of a matrix, a topic we shall discuss at length in a later chapter.
PROBLEMS
1. Let z be a complex number in standard form, i.e. z = x + iy; find its inverse in standard form, i.e. 1/z = x* + iy*, and verify that this is exactly what you would obtain using the polar form representation.

2. Let zs = xs + iys, s = 1, 2, 3. Express z3 = z1/z2 in standard form, and verify that this is exactly what you would obtain using the polar form representation.
3. Let

A = [ a11  a12      B = [ b11  b12
      a21  a22 ],         b21  b22 ].

Show that there exists a matrix

C = [ c11  c12
      c21  c22 ],

such that B = CA.