

Applied Linear Algebra

by S. K. Cho and P. D. Ion

Panglossian Publisher, 4776 Carter Place, Ypsilanti, Michigan 48197

(734) 434-8225

This book presents linear algebra as applied to engineering and physics problems. Since tensors, exterior forms, and differential and integral equations are increasingly employed in modern engineering and physical problems, and since they invariably reduce to linear algebraic ones at the stage of numerical computation (with few exceptions), we treat them as part of linear algebra. There is also the matter of physical dimension, which cannot be avoided. While mathematics is founded on numbers, and the notion of physical dimension may be a nemesis to mathematicians, it is natural to physicists and engineers, and we observe physical dimensions throughout this book.

I will focus on two algebraic structures built on equations: (1) finite-dimensional algebraic equations, and (2) infinite-dimensional Fourier algebraic equations, which are traditionally, and in my view incorrectly, interpreted as Fourier series expansions. The former (the finite-dimensional, ill-posed case) is concerned with the change of bases, specifically to the principal-axis bases, while the latter (the infinite-dimensional, well-posed case) relies on the biorthonormal relationship of an infinite set of pure sinusoidal functions, which perform what the principal-axis bases do, thereby dispensing with the need for a change of bases.

(1). I discovered, as a fundamental rule, that the solution of a finite-dimensional system of equations requires its dual system; the two then come entangled, forming an irreducible whole. Hilbert's algebraic alternative relies, significantly, on the dual system for establishing the compatibility of the equation, but not for the solvability and solution of the equation. I assert that the solution of the given system of equations cannot be found without the systemic participation of the dual system, and that the compatibility condition is then automatically met. Thus, for each given system of equations, its dual must be induced. The following consequences flow from this entanglement:

The matrix in general does not map directly from the domain to its codomain (which is possible only in a specialized case in which an exacting condition is met), but instead via the fundamental roundabout route;

furthermore, there is no room in the entangled systems for the eigenequation, but only for a pseudo eigenequation: the familiar eigenequation for a matrix is fundamentally subsumed under the pseudo eigenequation of the entangled systems.

The algebraic structure I will build here is for constructing a pseudo principal-axis basis in the system and a dual pseudo principal-axis basis in the dual system.

Any given $m \times n$-dimensional ill-posed system of equations $Ax = b$ must induce its $n \times m$-dimensional dual system of equations $\tilde{A}\tilde{b} = \tilde{x}$ for its solution; they are entangled in the sense that one cannot exist without the other. These entangled systems are linked together by a linear functional and a dual linear functional, resulting in isomorphic twins. The entanglement leads automatically to a pseudo and a dual pseudo eigenequation.

As shown in Figure 3.5, the domain space $V$ in the given system is mapped to the dual codomain space $\tilde{V}$ by $\tilde{\tau}\tau$, a selfadjoint operator. That is, this operator maps a vector in $V$ to itself in $\tilde{V}$, which are isomorphic twins. In other words, this operator yields a selfadjoint eigenequation! Similarly, the mapping from the dual domain space $\tilde{W}$ over to the codomain space $W$ by another selfadjoint operator $\tau\tilde{\tau}$ leads to another selfadjoint eigenequation. Thus, the entangled systems in Figure 3.5 give rise to a pair of selfadjoint eigenequations.
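One can see this pair concretely in a few lines of numpy; here the dual map is realized as the conjugate transpose, which is an interpretive assumption on our part (the text develops it abstractly through linear functionals):

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((5, 4))    # a rectangular map tau from V to W
    At = A.conj().T                    # its dual tau~, taken as the conjugate transpose

    S1, S2 = At @ A, A @ At            # the two composite operators

    # Both composites are selfadjoint ...
    assert np.allclose(S1, S1.conj().T) and np.allclose(S2, S2.conj().T)
    # ... and they share their non-zero eigenvalues (four of them here)
    assert np.allclose(np.linalg.eigvalsh(S1)[-4:], np.linalg.eigvalsh(S2)[-4:])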

In the eigenspaces of these selfadjoint matrices, the rectangular matrix $A$ ($A = \tau$) maps eigenvectors of the selfadjoint matrix $\tilde{A}A$ residing in the domain space $V$ to the codomain space $W$, where the dual eigenvectors of the selfadjoint matrix $A\tilde{A}$ reside. These pseudo eigenvectors $u_i$ are born, however, in the $r$-dimensional pseudo eigenspace $\Gamma_r \subset V$ of the matrix $A$. Similarly, the rectangular matrix $\tilde{A}$ maps the dual eigenvectors of $A\tilde{A}$ in the dual domain space $\tilde{W}$ to the dual codomain space $\tilde{V}$, where the eigenvectors of $\tilde{A}A$ reside; these dual pseudo eigenvectors $\tilde{v}_i$ are born in the $r$-dimensional proxy dual pseudo eigenspace $\tilde{\Phi}_r \subset \tilde{W}$ of $\tilde{A}$. We call such mappings a pseudo eigenequation of the rectangular matrix $A$ and a dual pseudo eigenequation of the rectangular matrix $\tilde{A}$, respectively.

\[
\tilde{A}A : u_i \in V \longrightarrow u_i \in \tilde{V}, \qquad
A\tilde{A} : \tilde{v}_i \in \tilde{W} \longrightarrow \tilde{v}_i \in W,
\]
\[
A : u_i \in V \longrightarrow \tilde{v}_i \in W, \qquad
\tilde{A} : \tilde{v}_i \in \tilde{W} \longrightarrow u_i \in \tilde{V}.
\]

We see that the pseudo eigenvectors $\{u_i\}$ of $A$ and the dual pseudo eigenvectors $\{\tilde{v}_i\}$ of $\tilde{A}$ are, simultaneously, the eigenvectors and dual eigenvectors of the following selfadjoint matrices $\tilde{A}A$ and $A\tilde{A}$:

\[
A u_i = \lambda_i \tilde{v}_i, \quad \tilde{A}\tilde{v}_i = \tilde{\mu}_i u_i
\qquad \Longleftrightarrow \qquad
\tilde{A}A u_i = \nu_i u_i, \quad A\tilde{A}\tilde{v}_i = \nu_i \tilde{v}_i,
\qquad i, j \le r < (n, m).
\]

Here $r$ denotes the rank of $A$ (hence of $\tilde{A}$, $\tilde{A}A$ and $A\tilde{A}$), i.e., the number of non-zero eigenvalues of the selfadjoint matrices. These pseudo and dual pseudo eigenequations show that the given system and its dual system cannot exist without each other; namely, they are ineluctably united in an irreducible single structure. This irreducibility is fundamental in the sense that it holds in every system of linear algebraic equations. David Hilbert was the first to detect a piece of this iceberg peeking out in his algebraic alternative, and Cornelius Lanczos explored it further, finding more bits and pieces, but did not find the whole structure. The equivalence mentioned above can be seen by multiplying the pseudo eigenequation by $\tilde{A}$ and the dual pseudo eigenequation by $A$, and likewise multiplying the first and the second of the selfadjoint eigenequations by $A$ and $\tilde{A}$ to return to the pseudo and the dual pseudo eigenequations. We point out that selfadjoint matrices such as $\tilde{A}A$ and $A\tilde{A}$, which we often see in linear algebra, can actually exist only in the entangled system of equations and its dual system, and such acrobatic pseudo and dual pseudo eigenequations, crossing over between the system and its dual, do not exist anywhere else in linear algebra.
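Read in standard matrix language, this entangled structure is what the singular value decomposition computes. A minimal numpy sketch, assuming the dual matrix is realized as the conjugate transpose and the usual normalization $\lambda_i = \tilde{\mu}_i = \sigma_i$ (so that $\nu_i = \sigma_i^2$); these identifications are ours, not spelled out in the text:

    import numpy as np

    rng = np.random.default_rng(0)
    m, n, r = 5, 4, 3
    A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # a rank-r matrix

    W, s, Uh = np.linalg.svd(A)       # numpy's names: A = W @ diag(s) @ Uh
    u = Uh.conj().T[:, :r]            # pseudo eigenvectors u_i as columns (domain side)
    v = W[:, :r]                      # dual pseudo eigenvectors v~_i as columns (codomain side)
    lam = s[:r]                       # the r non-zero pseudo eigenvalues

    assert np.allclose(A @ u, v * lam)                  # A u_i    = lambda_i v~_i
    assert np.allclose(A.conj().T @ v, u * lam)         # A~ v~_i  = mu~_i u_i
    assert np.allclose(A.conj().T @ A @ u, u * lam**2)  # A~A u_i  = nu_i u_i
    assert np.allclose(A @ A.conj().T @ v, v * lam**2)  # AA~ v~_i = nu_i v~_i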

The pseudo eigenvectors $\{u_i\}$ and the dual pseudo eigenvectors $\{\tilde{v}_i\}$ form, respectively, $r \times n$- and $r \times m$-dimensional semi-unitary matrices $U_r$ and $\tilde{V}_r$, with
\[
U_r \tilde{U}_r = I_n, \qquad \tilde{V}_r V_r = I_m, \qquad
\tilde{U}_r U_r = I_r, \qquad V_r \tilde{V}_r = I_r.
\]
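Numerically, the two $I_r$ relations hold exactly, while the $n$- and $m$-dimensional products behave as identities on the pseudo eigenspaces $\Gamma_r$ and $\tilde{\Phi}_r$ (in standard terminology they are orthogonal projectors onto those subspaces); a quick check under the same SVD realization as above:

    import numpy as np

    rng = np.random.default_rng(0)
    r = 3
    A = rng.standard_normal((5, r)) @ rng.standard_normal((r, 4))  # rank r
    W, s, Uh = np.linalg.svd(A)
    u, v = Uh.conj().T[:, :r], W[:, :r]             # U_r and V~_r, stored by columns

    assert np.allclose(u.conj().T @ u, np.eye(r))   # the I_r relations are exact
    assert np.allclose(v.conj().T @ v, np.eye(r))
    # The larger products act as the identity on the pseudo eigenspaces only:
    assert np.allclose((u @ u.conj().T) @ u, u)
    assert np.allclose((v @ v.conj().T) @ v, v)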

Now, the pseudo and dual pseudo eigenequations in matrix form are

\[
A U_r = \tilde{V}_r \Lambda_r, \qquad \tilde{A}\tilde{V}_r = U_r \tilde{M}_r,
\]
where $\Lambda_r$ and $\tilde{M}_r$ are $r$-dimensional diagonal matrices with $\lambda_{ii}$ and $\tilde{\mu}_{ii}$ on the main diagonals (the double indices appear because they are elements of matrices). Notice that the composite selfadjoint eigenequations of $\tilde{A}A$ and $A\tilde{A}$ are not possible without the pseudo and dual pseudo eigenequations of the rectangular matrices $A$ and $\tilde{A}$. Multiplying them with $V_r$ and $\tilde{U}_r$ and using the semi-unitarity, there result
\[
V_r A U_r = \Lambda_r, \qquad \tilde{U}_r \tilde{A} \tilde{V}_r = \tilde{M}_r.
\]

The rectangular matrices $A$ and $\tilde{A}$ are thus diagonalized in the $r$-dimensional subspaces. We call these the fundamental spectral decompositions of $A$ and $\tilde{A}$ because they hold for every matrix, ill-posed as well as well-posed (the full-rank case). These diagonalizations reveal for the first time that the matrix $A$ and its dual $\tilde{A}$ are activated from the beginning only in the $r$-dimensional subspaces (the $n$-dimensional spaces in the full-rank case), not the whole $n$- and $m$-dimensional spaces. Backtracking these decompositions leads to the natural inverse of $A$ (and the dual natural inverse of $\tilde{A}$), the terminology coined by Cornelius Lanczos, a Hungarian mathematician (cf. C. Lanczos, Linear Differential Operators, Dover).
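A numerical check of this diagonalization, together with the natural inverse obtained by backtracking it; in standard terminology the natural inverse coincides with the Moore-Penrose pseudoinverse, so numpy's pinv serves as the point of comparison (again under the SVD realization assumed above):

    import numpy as np

    rng = np.random.default_rng(2)
    r = 2
    A = rng.standard_normal((4, r)) @ rng.standard_normal((r, 3))  # rank-r
    W, s, Uh = np.linalg.svd(A)
    u, v, lam = Uh.conj().T[:, :r], W[:, :r], s[:r]

    # V_r A U_r = Lambda_r: A is diagonal in the r-dimensional subspaces
    assert np.allclose(v.conj().T @ A @ u, np.diag(lam))
    # Backtracking the decomposition yields the natural inverse of A
    assert np.allclose(u @ np.diag(1 / lam) @ v.conj().T, np.linalg.pinv(A))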

Multiplying the pseudo eigenequation with $\tilde{U}_r$ and the dual pseudo eigenequation with $V_r$ from the right, we obtain the fundamental decompositions of $A$ and $\tilde{A}$ as
\[
A = \tilde{V}_r \Lambda_r \tilde{U}_r, \qquad \tilde{A} = U_r \tilde{M}_r V_r.
\]

The route of the fundamental decomposition (and of the dual fundamental decomposition) is not a direct one from the domain to the codomain (or from the dual domain to the dual codomain), as traditionally interpreted, but a roundabout "taxi-driver's" route, as the fundamental rule. It is revealed that both the fundamental decomposition of the matrix $A$ and its diagonalization are not possible without the systemic participation of the dual system. Hence the entanglement with the dual system.
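The decomposition can be verified directly; written out, it is a sum of $r$ rank-one legs, each routed through a pseudo eigenvector and a dual pseudo eigenvector, which is one way to see the "taxi-driver's" route in matrix form (same SVD realization as before):

    import numpy as np

    rng = np.random.default_rng(3)
    r = 2
    A = rng.standard_normal((4, r)) @ rng.standard_normal((r, 3))  # rank-r
    W, s, Uh = np.linalg.svd(A)
    u, v, lam = Uh.conj().T[:, :r], W[:, :r], s[:r]

    # A = V~_r Lambda_r U~_r, equivalently a sum of r rank-one terms
    assert np.allclose(A, v @ np.diag(lam) @ u.conj().T)
    assert np.allclose(A, sum(lam[i] * np.outer(v[:, i], u[:, i].conj()) for i in range(r)))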

In the $n$-dimensional domain $V$ of $A$ resides an $r$-dimensional pseudo eigenspace $\Gamma_r$ of $A$, and likewise in the $m$-dimensional codomain $W$ of $A$ resides an $r$-dimensional dual pseudo eigenspace $\tilde{\Phi}_r$. The $n$- and $m$-dimensional vectors $x \in V$ and $b \in W$ are then related to $x_r \in \Gamma_r$ and $b_r \in \tilde{\Phi}_r$ as $x = U_r x_r$ and $b = \tilde{V}_r b_r$. Substituting these into the equation $Ax = b$, we obtain the solution we sought:
\[
x_i = \frac{b_i}{\lambda_i}, \qquad i = 1, 2, \cdots, r.
\]

This corresponds to the least squares solution, but it is an $r$-dimensional vector solution. Outside the matrix $\Lambda_r$, the double indexing on $\lambda_{ii}$ is dropped. The solution can be expressed in the $V$ and $\tilde{W}$ spaces if we so wish.
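In numpy the whole solution route reads as follows; it agrees with the minimum-norm least squares solution, for which np.linalg.lstsq is the standard-library point of comparison (the random rank-r system is, of course, only an illustration):

    import numpy as np

    rng = np.random.default_rng(4)
    r = 2
    A = rng.standard_normal((5, r)) @ rng.standard_normal((r, 4))  # rank-r, ill-posed
    b = rng.standard_normal(5)
    W, s, Uh = np.linalg.svd(A)
    u, v, lam = Uh.conj().T[:, :r], W[:, :r], s[:r]

    b_r = v.conj().T @ b     # b carried into the dual pseudo eigenspace
    x_r = b_r / lam          # the r-dimensional solution x_i = b_i / lambda_i
    x = u @ x_r              # the same solution expressed in the V space

    assert np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0])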

(2). The Fourier series expansion of continuous, or even piecewise continuous, functions has been with us since 1807. But I discovered that mathematically it cannot be a series expansion; it is instead a well-posed system of infinite-dimensional linear algebraic equations in which the "Fourier coefficients" are the components of an unknown vector in the domain space and the function to be "expanded in series" is a denumerably continuous, pre-assigned, non-homogeneous vector in the codomain space; the two spaces are in fact dual to each other, and the Fourier matrix maps the domain one-to-one onto its codomain. The algebraic structure leading to the solution here is distinct and unique: the inversion of this infinite-dimensional algebraic system of equations does not rely on the


principal-axis bases, but instead on the biorthonormality of the entries of the Fourier matrix, namely an infinite set of sinusoidal scalar kernel functions, with the corresponding entries of the adjoint Fourier matrix. The biorthonormality of these sinusoidal functions produces destructive interference in all off-diagonal entries and leaves only the diagonal entries to interfere constructively, leading to Dirichlet kernels, which have delta functions as their limit values. As a result, the inner product of the adjoint Fourier matrix with the Fourier matrix yields a diagonal matrix whose diagonal entries are Dirichlet kernels. After integration of the resulting delta functions, the infinite-dimensional matrix turns into an identity matrix, thereby ensuring the unique solution of the infinite-dimensional algebraic equations. In this sense, this Fourier algebraic problem is unique, dispensing with the need for a change of bases. The domain and the codomain spaces are dual to each other. We repeat that no change of bases is involved in inverting the infinite-dimensional system of equations.
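The concentration of the diagonal entries into Dirichlet kernels is easy to exhibit numerically; in the sketch below the grid and the truncation orders are arbitrary illustrative choices:

    import numpy as np

    l = 2 * np.pi
    x = np.linspace(-l / 2, l / 2, 20001)
    dx = x[1] - x[0]
    for n in (5, 50, 500):
        D = np.zeros_like(x)
        for m in range(-n, n + 1):
            D += np.cos(m * x)        # sum of e^{imx}; imaginary parts cancel in pairs
        # The total mass stays fixed at l (the delta-function limit) while the
        # kernel concentrates ever more sharply at x = 0 as n grows:
        print(n, D.sum() * dx / l)    # -> approximately 1.0 for every n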

Let us consider an n-dimensional equation for illustration. To begin with, the finite intervals in the domain and the codomain are made of denumerable rational numbers. In contrast, the real numbers are non-denumerable, as in the case of the Fourier integral transformation. Thus, Fourier's works comprise two different kinds of maps defined on two different sets of numbers: one is on a finite interval of denumerable rational numbers, for the linear algebraic map of continuous functions; the other is on an infinite interval of non-denumerable real numbers, for the integral map of square-integrable functions.

First, consider a given continuous function over a finite interval $[0, l]$. The unknown vector $c(k)$ is defined over a denumerable finite interval $[k_1, k_n]$, $k_m = 2\pi m/l$, as $\{c_j(k_j)\}$ in the domain space. The problem is to find this unknown vector in the domain space; hence it is an inverse problem, i.e., an infinite-dimensional system of algebraic equations. Let the variable $x$ in the codomain be a distance vector (with the physical dimension of length). Then the variable $k$ in the domain must represent a wavenumber vector (with the physical dimension of inverse length) if $xk$ is to serve as the dimensionless independent variable of the sinusoidal functions in the matrix, as Fourier proposed. In other words, $xk$ must be a linear functional.

At each point $x_i$ in the codomain, the $i$th equation is given by
\[
f_i(x_i) = \frac{1}{l} \sum_{j=1}^{n} \sigma_{i,j}(x_i\,|\,k_j)\, c_j(k_j), \qquad \sigma_{i,j} = e^{\,i(x_i k_j)}.
\]

Here $(\sigma_{i,j}(x_i|k_j))$ is the $n \times n$-dimensional Fourier matrix, which maps the domain one-to-one onto the codomain on the strength of the biorthonormality of the scalar kernels of the Fourier matrix with those of its adjoint Fourier matrix, to be defined shortly. This biorthonormality was proved by Lejeune Dirichlet, a German mathematician. The matrix form of this algebraic equation is

\[
\begin{pmatrix} f_1(x_1) \\ f_2(x_2) \\ \vdots \\ f_n(x_n) \end{pmatrix}
= \frac{1}{l}
\begin{pmatrix}
e^{ix_1k_1} & e^{ix_1k_2} & \cdots & e^{ix_1k_n} \\
e^{ix_2k_1} & e^{ix_2k_2} & \cdots & e^{ix_2k_n} \\
\vdots & & & \vdots \\
e^{ix_nk_1} & e^{ix_nk_2} & \cdots & e^{ix_nk_n}
\end{pmatrix}
\begin{pmatrix} c_1(k_1) \\ c_2(k_2) \\ \vdots \\ c_n(k_n) \end{pmatrix}.
\]
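A finite-dimensional sketch of this system in numpy; the interval, the grid, and the two excited modes are our illustrative choices, not data from the text:

    import numpy as np

    l, n = 2 * np.pi, 64
    x = np.arange(n) * l / n                  # sample points x_i in the codomain
    k = 2 * np.pi * np.arange(1, n + 1) / l   # wavenumbers k_m = 2*pi*m / l
    F = np.exp(1j * np.outer(x, k)) / l       # the n x n Fourier matrix (1/l) e^{i x_i k_j}

    c = np.zeros(n, dtype=complex)            # the unknown vector c_j(k_j) ...
    c[0], c[2] = 4.0, 2.0j                    # ... here with two modes set by hand
    f = F @ c                                 # the map from the domain to the codomain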

This is essentially what Fourier did some two hundred years ago; he was merely misled into declaring it a series expansion, and everyone accepted Fourier's declaration blindly until now. The so-called Fourier coefficients of the expansion theory are the solution of the above equation (the vector in the column matrix on the right side), which takes the form

\[
c_j(k_j) = \int_0^l dx\, f(x)\, e^{\,i\langle k_j | x \rangle}, \qquad j = 1, 2, \cdots, n,
\]

where $f(x)$ is defined at denumerably continuous points so that the summation sign may be replaced with an integral sign, and the $e^{\,i\langle k_j|x_i\rangle}$ are the elements of the adjoint Fourier matrix, which is obtained by interchanging the rows and columns of the Fourier matrix and complex-conjugating the entries. This comes about by multiplying both sides of the given equation with the adjoint Fourier matrix and replacing the summation sign with an integral sign, relying on the continuous denumerability of the interval $[0, l]$ on which $f(x)$ is defined. As Dirichlet established, each sinusoidal function $e^{\,i(x_i k_j)}$ and its adjoint $e^{\,i\langle k_j|x_i\rangle}$ are biorthonormal, yielding a Dirac delta function, which upon integration turns into unity. Thus, the right side becomes $c(k)$, the solution we seek. The $f(x)$ under the integral sign is a denumerably continuous function given by an $n$-dimensional system of algebraic equations in which the Fourier matrix maps the unknown vector $\{c_j(k_j)\}$ to it.
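Continuing the finite-dimensional sketch above, the adjoint Fourier matrix inverts the system exactly on the sample grid, the weighted sum over the $x_i$ standing in for the integral over $[0, l]$:

    import numpy as np

    l, n = 2 * np.pi, 64
    x = np.arange(n) * l / n
    k = 2 * np.pi * np.arange(1, n + 1) / l
    F = np.exp(1j * np.outer(x, k)) / l             # the Fourier matrix, as above
    c_true = np.random.default_rng(5).standard_normal(n) + 0j
    f = F @ c_true                                  # the pre-assigned vector f(x_i)

    # Adjoint Fourier matrix: rows and columns interchanged, entries conjugated;
    # the factor l/n is the integration weight dx replacing the summation sign.
    c_rec = np.exp(-1j * np.outer(k, x)) @ f * (l / n)
    assert np.allclose(c_rec, c_true)               # biorthonormality inverts the system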

We now see that Fourier proposed, without realizing it, a system of linear algebraic equations built on biorthonormal sinusoidal functions, and offered its solution at the same time! The objection which Fourier's contemporary mathematicians raised in 1807 melts away once the procedure is seen as a mapping from the domain space to its codomain space. While Fourier could not explain to his contemporaries how his proposed method of expansion worked, Lejeune Dirichlet supplied the answer more than ten years later.

Incidentally, the entries of the Fourier matrix, which is unique in the realm of the mathematical sciences, may be regarded as an infinite number of eigensolutions of the Sturm-Liouville homogeneous boundary value problem, or of an interior resonance problem. We are awed that the mathematical structure Fourier discovered in 1807 describes countless natural phenomena, as well as man-made information technology, in one fell swoop via the single notion of the constructive-destructive interference of pure sinusoidal waves, which even extends deep into the mysterious microscopic quantum world, where elementary particles are at the same time waves, thereby bringing in the unavoidable notions of Heisenberg's uncertainty principle and Schrödinger's entanglement (Verschränkung).