Lecture Notes in Economics and Mathematical Systems 166

Raymond Cuninghame-Green

Minimax Algebra

Springer-Verlag
Berlin Heidelberg New York 1979
Editorial Board
H. Albach · A. V. Balakrishnan · M. Beckmann (Managing Editor)
P. Dhrymes · J. Green · W. Hildenbrand · W. Krelle
H. P. Künzi (Managing Editor) · K. Ritter · R. Sato · H. Schelbert
P. Schönfeld
Author
Prof. R. Cuninghame-Green
Dept. of Mathematical Statistics
University of Birmingham
P.O. Box 363
Birmingham B15 2TT/England
This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically those of translation, reprinting, re-use of illustrations, broadcasting, reproduction by photocopying machine or similar means, and storage in data banks. Under § 54 of the German Copyright Law where copies are made for other than private use, a fee is payable to the publisher, the amount of the fee to be determined by agreement with the publisher.
© by Springer-Verlag Berlin Heidelberg 1979
FOREWORD
Chapter 4 defines a class of structures called blogs, which model the system (R, ⊕, ⊗, ⊕′, ⊗′) more closely, but also include a three-element system which allows much of the theory of Boolean matrices to be included in our general theory.
Spaces and matrices are introduced in Chapter 5, and their duals in Chapter 6, together with principles of isotonicity of matrix algebra.

In Chapter 7 the foundations are laid for a theory of conjugacy: for each matrix A we define its conjugate A*, and it turns out in Chapter 8 that A and A* induce transformations which are each other's inverse, or more accurately, each other's residual. Chapter 9 applies these ideas to scheduling theory, and Chapter 10 exhibits the mappings induced by A and A* as residuated linear transformations of spaces of n-tuples, or as we shall say: residuomorphisms.
The twenty-eight chapters are each organised into titled sections, numbered within chapters. Thus, e.g.: 13-2. Compatible Trisections, being the second section of Chapter 13. Algebraic expressions within the text are indexed e.g.: (2-7), denoting the seventh indexed expression in Chapter 2.
Several dualities run through the material simultaneously, so that a given result
may have as many as eight dual forms. It is a common practice in mathematics to
present formal statements of only one of a set of dual forms of a given result.
However, because of the proliferation of dualities and the fact that in a later argument it may be necessary to use a result in a dual form other than that in which it was stated and proved, we present a few of our theorems in all relevant dual forms simultaneously, by use of tables. Of course we give a proof of only one form.
A list of symbols and notations is given at the end, after the references. References are cited within the text by use of square brackets, thus e.g.: [27].
The preparation of this typescript has been the despair of more than one secretary, but I should like to thank Ms Vivienne Newbigging and Ms Elaine Haworth for their repeated triumphs over my daunting untidiness.
Chapter 1 MOTIVATION
1-1 Introduction 1
1-2 Miscellaneous Examples 1
1-2.1 Schedule Algebra 1
1-2.2 Shortest Path Problems 3
1-2.3 The Conjugate 5
1-2.4 Activity Networks 6
1-2.5 The Assignment Problem 7
1-2.6 The Dual Transportation Problem 7
1-2.7 Boolean Matrices 8
1-2.8 The Stochastic Case 9
1-3 Conclusion: Our Present Aim 10
4-1 Blogs 33
4-2 The Principal Interpretation 34
4-3 The 3-element Blog 36
4-4 Further Properties of Blogs 38
Chapter 5 THE SPACES E_n AND V_mn

5-1 Band-Spaces 40
5-2 Two-Sided Spaces 41
5-3 Function Spaces 42
5-4 Matrix Algebra 45
Chapter 7 CONJUGACY

Chapter 8
8-1 Pre-residuation 62
8-2 Alternating AA* Products 63
8-3 Modified AA* Products 65
8-4 Some Bijections 68
8-5 A Worked Example 69
Chapter 11 TRISECTIONS
Chapter 13 EXISTENCE
Chapter 18 SEMINORMS ON E_n

18-1 Column-Spaces 147
18-2 Seminorms 148
18-3 Spaces of Bounded Seminorm 150
Chapter 21 PROJECTIONS
1-1. Introduction
This expresses the fact that machine 3 must wait to begin its (r+1)st cycle until machines 1 and 2 have both finished their r-th cycle, the symbol x_i(r) denoting the starting time of the r-th cycle of machine i, and the symbol t_i(r) denoting the corresponding activity duration. This mode of analysis gives rise to formidable-looking systems of recurrence relations:

    x_i(r+1) = max(x_1(r) + a_i1(r), ..., x_n(r) + a_in(r)),   (i = 1, ..., n)   (1-2)
where, for notational uniformity, terms a_ij(r) and x_j(r) are made to occur for all i = 1, ..., n and all j = 1, ..., n by introducing where necessary quantities a_ij(r) equal to "minus infinity" for each combination (i, j) which has no physical significance; the operator max will then "ignore" these terms.
    x ⊕ y instead of max(x, y)
    x ⊗ y instead of x + y        } (1-3)

    A(r) = [a_ij(r)],   x(r) = [x_i(r)]
Formalisms of this kind were developed by Giffler [5] and Cuninghame-Green [4].
Expression (1-5) is a very intuitive "change-of-state" equation. By iteration we have:

    x(r+1) = A(r) ⊗ A(r−1) ⊗ ... ⊗ A(1) ⊗ x(1)

showing how the state x(r) of the system evolves with time, from a given initial state x(1), under the action of the operator represented by the matrix A.

For simplicity of exposition assume for the moment that the quantities a_ij(r) are independent of r. Define A^r = A ⊗ A ⊗ ... ⊗ A (r times; associativity holds!). The "orbit" of the system is then:

    x(1), A ⊗ x(1), A² ⊗ x(1), ...

and the sequence of powers A, A², A³, ... will determine the long-term behaviour of the system: does it oscillate? Does it, in some suitable sense, achieve a stable state?
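The orbit is easy to compute mechanically. The following Python sketch is ours, not part of the original text; the matrix A and the starting vector are invented purely for illustration. It implements the max-algebra matrix-vector product and lists a few successive states:

```python
NEG_INF = float("-inf")  # plays the role of "minus infinity" in the recurrence

def maxplus_matvec(A, x):
    # i-th component is max_j (a_ij + x_j): the max-algebra inner product
    return [max(a + xj for a, xj in zip(row, x)) for row in A]

# Hypothetical two-machine system: a_ij couples the cycle of machine j to machine i
A = [[2.0, 5.0],
     [3.0, NEG_INF]]   # -inf marks a coupling with no physical significance

x = [0.0, 0.0]          # x(1): both machines start at time 0
orbit = [x]
for _ in range(3):
    x = maxplus_matvec(A, x)
    orbit.append(x)

print(orbit)  # successive start-time vectors x(1), x(2), x(3), x(4)
```

Each step is one application of the change-of-state operator; the −∞ entries are "ignored" by max exactly as described above.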
What then is the shortest distance from i to j using exactly two links? Clearly it is:

    min_{k=1,...,n} (d_ik + d_kj)    (1-7)
Now we make the following change of notation:

    x ⊕′ y instead of min(x, y)
    x ⊗′ y instead of x + y       } (1-8)

Notation (1-8) is thus a sort of dual to notation (1-3), and we may call it "min algebra". We shall call the operation ⊕′ dual addition.
Expression (1-7) now becomes:

    Σ⊕′_{k=1,...,n} (d_ik ⊗′ d_kj)    (1-9)

which is exactly the inner product of row i and column j of the matrix D in this notation. Formalisms of this kind occur in several of the references, including for example [1], [11] and [19].
Clearly then the matrix D² gives the shortest intercity distances using exactly two links of the network per path, and by an immediate induction the matrix D^p, for integer p, gives the shortest intercity distances using exactly p links of the network per path. Suppose that the system has no circuits with negative total distance (i.e. that D is a "definite" matrix in min algebra). A moment's reflection shows that no path with more than n links can then be shorter than a path with n or fewer links, since a path with more than n links must contain a circuit, which can then be deleted to give a path between the same endpoints as before but with fewer links and no greater length. Hence for each pair (i, j), the shortest path from i to j is the (i, j)th element of one of the matrices D, D², ..., D^n.
Define the dual addition of matrices using the operation ⊕′ in the obvious componentwise fashion: [x_ij] ⊕′ [y_ij] = [x_ij ⊕′ y_ij], and consider the matrix Γ′(D) defined by:

    Γ′(D) = D ⊕′ D² ⊕′ ... ⊕′ D^n    (1-10)
Since the shortest path from i to j is the shortest of the paths from i to j using one link, or two links, ..., or n links, and bearing in mind the meaning of the symbol ⊕′, we see that Γ′(D) is the shortest path matrix for the network for which D gives the direct distances.
Actually, expression (1-10) certainly does not suggest the best way of calculating Γ′(D), for which one would use one of the established shortest-path algorithms given for example in [39]. However, expression (1-10) is important in the algebraic theory, and is discussed (for definite matrices) in a number of the references. In Chapter 23 below we show that all the fundamental eigenvectors of D occur as columns of Γ′(D). In [17], Carré shows how the elements of Γ′(D) may be computed using an analogue of a Gauss-Seidel iteration.
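Although inefficient, expression (1-10) can be evaluated directly by repeated min-algebra matrix products, which is a convenient way to check small examples. The following Python sketch is ours, not from the original text; the distance matrix is an invented three-city example, with d_ii = 0:

```python
INF = float("inf")  # "plus infinity": no direct link, ignored by min

def minplus_matmul(A, B):
    # (i, j) entry is min_k (a_ik + b_kj): the dual inner product
    n = len(A)
    return [[min(A[i][k] + B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def gamma_prime(D):
    # Gamma'(D) = D min-combined with D^2, ..., D^n, componentwise
    n = len(D)
    power, result = D, [row[:] for row in D]
    for _ in range(n - 1):
        power = minplus_matmul(power, D)
        result = [[min(r, p) for r, p in zip(rr, pr)]
                  for rr, pr in zip(result, power)]
    return result

D = [[0, 4, INF],
     [INF, 0, 1],
     [2, INF, 0]]
print(gamma_prime(D))  # the shortest-path matrix of the network
```

For serious use one would of course prefer an established shortest-path algorithm, as the text notes; this sketch merely makes the algebra concrete.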
Proposition 1-2. Problem (1-12) has the solution x = A* ⊗′ b; moreover this is the greatest such solution.
No proofs of these facts are given in [25], and it is in fact one aim of the present memorandum to furnish these proofs. Our immediate purpose, however, is to illustrate how the investigation of the operational problems formulated in (1-11) and (1-12) gives rise to a fully-fledged theory of conjugacy analogous to that of conventional linear algebra.
But if we define the matrix A = [a_ij] and the vector t = [t_i], we see that (1-14) is just x = A ⊗′ t in the notation of (1-8), i.e. we are led again to the min algebra.
The quantity

    a = max_σ Σ_{i=1}^{n} a_{iσ(i)}

can be written:

    a = Σ⊕_σ { ⊗_{i=1}^{n} a_{iσ(i)} }    (1-16)
    Minimise Σ_{j=1}^{n} d_j y_j − Σ_{i=1}^{m} p_i x_i
But (1-18) is just a set of linear relations when rewritten in the notation (1-8):

    x_i = Σ⊕′_{j=1}^{n} (d_ij ⊗′ y_j)    (i = 1, ..., m)

which we can write in vector-matrix notation:

    x = D ⊗′ y    (1-19)
The dual transportation problem may thus be reformulated as a problem in min algebra, with linear constraints. It can be shown that the set of feasible solutions for this dual problem forms a space.
Instead of considering the entire system of real numbers, we could take some additive subsemigroup of the real numbers such as the rational numbers, or the integers, since the operations ⊕ and ⊗ then retain their properties and meanings. In particular it is natural in many situations to restrict ourselves to the non-negative real numbers when we are formulating problems of distance, time or cost.
A more fundamental step is to consider the subsemigroup consisting of the two elements 0 and −∞, for which the operations ⊕ and ⊗, interpreted as usual as max and +, have the following operation tables:

    x     y     x ⊕ y    x ⊗ y
    −∞    −∞    −∞       −∞
    −∞    0     0        −∞
    0     −∞    0        −∞
    0     0     0        0

If we identify −∞ with the Boolean 0 and the element 0 with the Boolean 1, we may interpret the operations ⊕ and ⊗ as Boolean sum and product respectively. With a little care, therefore, we can develop a formalism which covers useful portions of the theory of Boolean matrices also.
For example, let there be given an abstract directed graph with n nodes. Let the (n × n) matrix D = [d_ij] have d_ij = 0 if there is a directed arc from node i to node j, otherwise d_ij = −∞. Clearly D plays the role of the familiar adjacency matrix of the graph (see e.g. [30] or [34]). Form the successive powers D², D³, ..., in max algebra (notation (1-3)) and define:

    Γ(D) = D ⊕ D² ⊕ ... ⊕ D^n    (1-20)
(Obviously, expressions (1-10) and (1-20) are formal duals of one another. We are using min algebra in (1-10) and max algebra in (1-20).) It is not difficult to see that element (i, j) of the matrix D^p is now the greatest arithmetical sum of the form:

    d_ik₁ + d_k₁k₂ + ... + d_k₍p₋₁₎j

i.e. that element (i, j) of the matrix D^p is zero if and only if there exists a directed path having exactly p arcs from node i to node j. Hence the matrix Γ(D) in expression (1-20) has element (i, j) zero if and only if the graph contains a directed path from node i to node j.
These ideas are extensively discussed in e.g. [30] and [34], using the conventional notation of Boolean algebra; our present aim is merely to illustrate that certain aspects of Boolean matrix theory may be subsumed under the formalism of max algebra.
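The identification just described can be tried out mechanically. In the following Python sketch (ours; the three-node graph is an invented example) we form the max-algebra powers of the adjacency-style matrix and read off reachability as in (1-20):

```python
NEG_INF = float("-inf")  # Boolean 0; the number 0 plays the role of Boolean 1

def maxplus_matmul(A, B):
    # (i, j) entry is max_k (a_ik + b_kj): the max-algebra product
    n = len(A)
    return [[max(A[i][k] + B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def gamma(D):
    # Gamma(D) = D max-combined with D^2, ..., D^n, componentwise
    n = len(D)
    power, result = D, [row[:] for row in D]
    for _ in range(n - 1):
        power = maxplus_matmul(power, D)
        result = [[max(r, p) for r, p in zip(rr, pr)]
                  for rr, pr in zip(result, power)]
    return result

# Arcs 1 -> 2 and 2 -> 3 (0 = arc present, -inf = no arc)
D = [[NEG_INF, 0, NEG_INF],
     [NEG_INF, NEG_INF, 0],
     [NEG_INF, NEG_INF, NEG_INF]]

G = gamma(D)
reachable = [[x == 0 for x in row] for row in G]
print(reachable)  # entry (i, j) is True iff a directed path from i to j exists
```

Here the node 1 reaches both 2 and 3 (the latter via the two-arc path), exactly as the zero entries of Γ(D) predict.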
and analogous expansions hold for "definite" matrices ([6], [11]). Some analysis of the equation A ⊗ x = b and of the eigenproblem A ⊗ x = λ ⊗ x is also possible ([4], [1]).
We begin, therefore, with the axioms of Fig. 2-1 in order to cover the basic properties of scalars and operators with the same set of propositions. For most of the theory we do not need to assume that the multiplication of scalars is commutative.

There are, of course, many different ways of deducing the elementary properties of systems satisfying axioms such as those of Fig. 2-1. In order to give cohesion to the material, and lay the foundations for a theory of operators, we shall make extensive use of the idea of homomorphism.
Let S be a set which is linearly ordered under a binary relation ≥, i.e. for which the following axioms Y₀, Y₁, Y₂ hold:

    Y₀: ∀ x, y ∈ S, either x ≥ y or y ≥ x
    Y₁: ∀ x, y ∈ S, if x ≥ y and y ≥ x then x = y
    Y₂: ∀ x, y, z ∈ S, if x ≥ y and y ≥ z then x ≥ z

From Y₀ we infer:

    Y₃: ∀ x ∈ S, x ≥ x.

    ∀ x, y ∈ S, x ⊕ y = x if x ≥ y
                      = y otherwise.
Trivially, we have:

    X₀: ∀ x, y ∈ S, x ⊕ y = x or x ⊕ y = y

and it is well known (and very easy to prove) that the system (S, ⊕) is an abelian semigroup in which every element is idempotent, i.e.:

    ∀ x, y, z ∈ S, x ⊕ (y ⊕ z) = (x ⊕ y) ⊕ z
    ∀ x, y ∈ S, x ⊕ y = y ⊕ x
    and ∀ x ∈ S, x ⊕ x = x
Proposition 2-1. If (S, ⊕) is a commutative band, then the binary relation ≥ defined on S by means of the definition:

    x ≥ y if and only if x ⊕ y = x    (2-1)

is a partial ordering of S in the sense that axioms Y₁, Y₂ and Y₃ are satisfied. If, moreover, axiom X₀ holds, then axiom Y₀ is satisfied so that S becomes a linearly ordered set under the relation ≥. •
We shall call a commutative band linear if axiom X₀ holds. Clearly any subset of a linear commutative band is a linear commutative band under the same operation ⊕.

The terms linearly ordered set and linear commutative band are thus equivalent in the sense made clear above. Given the relation ≥ we shall speak of the corresponding operation ⊕, and vice versa. For any one given structure, we shall employ both notations indifferently and without further comment.
Let F consist of the finite real numbers, under the usual ordering ≥. The corresponding operation is max, where:

    max(x, y) = x if x ≥ y
              = y otherwise.

The system (F, max) is a commutative band, and so is the system (F, min), where:

    min(x, y) = x if y ≥ x
              = y otherwise.

Both the systems (F, max) and (F, min) give examples of linear commutative bands, as also do the systems (R, max) and (R, min) where R = F ∪ {−∞} ∪ {+∞} denotes the extended real numbers. These latter two systems will be called the principal interpretation and the dual principal interpretation respectively (of axioms X₁, X₂, X₃).
The set of all subsets of a given set Ω provides an example of a commutative band which is not linear. The operation ⊕ is set-theoretic union, and the relation ≥ is set-inclusion.

If a subset S of a commutative band T is itself a commutative band under the same operation ⊕, we shall say that (S, ⊕) is a commutative sub-band of (T, ⊕).
If S, U are given sets then the notation S^U will denote the set of all functions from U to S. If U = {1, 2, ..., n} is the set of the first n natural numbers, then we shall write S^n as an alternative to S^U. If U is an arbitrary set and S is a commutative band, then the set S^U also becomes a commutative band when, for each pair f, g of such functions we define:

    ∀ x ∈ U, (f ⊕ g)(x) = f(x) ⊕ g(x)    (2-2)

provided as usual that two functions which take identical values in S for every x ∈ U are regarded as equal. Corresponding to the operation ⊕ introduced by (2-2), the
    ∀ x ∈ S, i_S(x) = x

For each given element a of a commutative band S, we may define the translation of S induced by a as the function f_a ∈ S^S such that:

    ∀ x ∈ S, f_a(x) = a ⊕ x    (2-5)

We may also regard the operation ⊕ as defining a function from S² to S, namely:

    ⊕: (x, y) → x ⊕ y
    If y ≥ z then x ⊕ y ≥ x ⊕ z
    If w ≥ x and y ≥ z then w ⊕ y ≥ x ⊕ z    } (2-6)
Now let S be a given set, and let K be a collection of functions, each of which is from S^m to S, for some integer m ≥ 1 (not assumed to be the same for each function). By K̂, the composition algebra generated by K, we understand the smallest class of functions having the following properties:

    (i) K ⊆ K̂
    (ii) If f ∈ K and f₁, ..., f_r ∈ K̂ are such that f ∈ S^(S^r) and f_i ∈ S^(S^m_i) (i = 1, ..., r), then f(f₁, ..., f_r) ∈ K̂, where f(f₁, ..., f_r) is by definition a function from S^(m₁+...+m_r) to S such that for all x_i ∈ S^m_i (i = 1, ..., r) we have:

    [f(f₁, ..., f_r)](x₁, ..., x_r) = f(f₁(x₁), ..., f_r(x_r))

In other words, K̂ is precisely the collection of functions each of which can be defined as a fixed program of applications of the given elements of K. Each element of K̂ is a function from S^m to S for some integer m ≥ 1.

A straightforward induction on the number of function applications necessary to create a given element of K̂ yields the following result:
Proposition 2-3. Let S be a commutative band and let K be a collection of functions, each of which is from S^m to S for some integer m ≥ 1. If all elements of K are homomorphisms, then so are all elements of K̂. If all elements of K are isotone, then so are all elements of K̂. •
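Proposition 2-3 can be illustrated numerically in the principal interpretation: every "fixed program" built from max and translations is isotone. The following Python sketch is ours; the particular composite function is an invented example:

```python
import itertools

# Generators: the addition max, and translations x -> max(a, x);
# both are isotone on the reals.
def translate(a):
    return lambda x: max(a, x)

# A "fixed program" of applications of the generators, i.e. an element of K-hat:
def program(x, y, z):
    return max(translate(3)(x), max(y, translate(-1)(z)))

# Isotonicity: raising any one argument never lowers the value.
pts = [-2, 0, 1, 4]
for (x, y, z) in itertools.product(pts, repeat=3):
    assert program(x + 1, y, z) >= program(x, y, z)
    assert program(x, y, z + 1) >= program(x, y, z)
print("isotone on all sample points")
```

The spot check is of course no proof; the induction in the text is what establishes the result for every element of the composition algebra.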
    X₄: ∀ x, y, z ∈ V, x ⊗ (y ⊗ z) = (x ⊗ y) ⊗ z
    X₅: ∀ x, y, z ∈ V, x ⊗ (y ⊕ z) = (x ⊗ y) ⊕ (x ⊗ z)
    X₆: ∀ x, y, z ∈ V, (y ⊕ z) ⊗ x = (y ⊗ x) ⊕ (z ⊗ x)
Yoeli [13] uses the term Q-semirings for certain algebraic structures of this type. Such structures are undoubtedly semirings as defined in [6], although the idempotency of all elements under the operation ⊕ gives them a very special character. Backhouse and Carré speak of regular algebras, with reference to the relevance of these structures to the theory of regular expressions.
Since X₄ defines (V, ⊗) to be a semigroup, with distributive laws X₅ and X₆, it would also be appropriate to call (V, ⊕, ⊗) an associative ⊕-semilattice, or a semilattice-ordered semigroup (see [27], [32]).

The terms semiring, regular and algebra are, however, already so overloaded with disparate connotations in mathematics, and terms such as semilattice-ordered semigroup are so unhandy, that it seems appropriate to coin a new term for structures satisfying X₁ to X₆.
We shall call them belts, evoking thereby a little of the flavours of the words ring and band. The term sub-belt will be used in the obvious way.

If, moreover, (V, ⊕) is a linear band, we shall say that (V, ⊕, ⊗) is a linear belt. Obviously, if (V, ⊕, ⊗) is a linear belt and (U, ⊗) is a subsemigroup of (V, ⊗), then (U, ⊕, ⊗) is a linear sub-belt of (V, ⊕, ⊗).
It is easy to verify that every commutative band is already a belt, if we take the operation ⊗ to be identical with the operation ⊕. We may then speak of a degenerate belt.
As an example of a non-degenerate non-linear belt, consider the set of all subsets of some multiplicative semigroup (S, ⊗), with ⊕ interpreted as set-theoretic union, and ⊗ as the set-product:

    ∀ P, Q ⊆ S, P ⊗ Q = {p ⊗ q | p ∈ P, q ∈ Q}
The principal interpretation of axioms X₁ to X₆ will be the system (R, max, +), i.e. the extended real numbers, under the operations max and arithmetical addition. The dual principal interpretation of axioms X₁ to X₆ will be the system (R, min, +). These interpretations give examples of a linear belt. Other examples are given in [17] and [21].
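In the principal interpretation the belt axioms X₄ to X₆ are elementary arithmetic identities, which can be spot-checked numerically. A Python sketch of ours, with a few sample values including −∞:

```python
import itertools

NEG_INF = float("-inf")
oplus = max                      # addition: maximum
otimes = lambda x, y: x + y      # multiplication: arithmetical addition

samples = [NEG_INF, -3.0, 0.0, 2.5, 7.0]
for x, y, z in itertools.product(samples, repeat=3):
    # X4: associativity of the multiplication
    assert otimes(x, otimes(y, z)) == otimes(otimes(x, y), z)
    # X5: left distributivity of the multiplication over the addition
    assert otimes(x, oplus(y, z)) == oplus(otimes(x, y), otimes(x, z))
    # X6: right distributivity
    assert otimes(oplus(y, z), x) == oplus(otimes(y, x), otimes(z, x))
print("X4, X5, X6 hold on all sample triples")
```

IEEE floating-point conveniently gives −∞ + x = −∞, so the extended reals behave here exactly as the algebra requires (we avoid +∞, which would produce an undefined −∞ + ∞).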
2-5. Belt Homomorphisms
If S is a commutative band, let W be the set of all endomorphisms of S. Then W is a belt if for all f, g ∈ W we define f ⊕ g ∈ W as in (2-2) and the product f · g ∈ W as the composition of f and g, i.e.:

    (f · g)(x) = f(g(x))    (∀ x ∈ S)

In particular, for each given element a of a belt (V, ⊕, ⊗), we may define the left multiplication induced by a as the function g_a ∈ V^V such that:

    ∀ x ∈ V, g_a(x) = a ⊗ x    (2-8)

    T: a → g_a •
of a linearly ordered set to itself, the system of functions being closed under composition.

But this is ⊕(w, y) ≥ ⊕(x, z). We now apply Proposition 2-3 to derive the following result.

Proposition 2-5. Let (V, ⊕, ⊗) be a belt, and let K consist of ⊕, together with the identity mapping and all left multiplications, right multiplications and translations. Then K̂, the composition algebra generated by K, consists entirely of band-homomorphisms, and hence of isotone functions. If ⊗ is now adjoined to K, then K̂ still consists of isotone functions. •
    If w ≥ x then y ⊗ w ≥ y ⊗ x
                and w ⊗ y ≥ x ⊗ y    } (2-9)
    If w ≥ x and y ≥ z then w ⊗ y ≥ x ⊗ z

    X₇: ∀ x, y ∈ V, x ⊗ y = y ⊗ x
A belt which satisfies axiom X₇ will be called a commutative belt. The principal interpretation (R, max, +) defines a commutative belt. On the other hand, the system of monotone non-decreasing real-valued functions of one real variable, closed under composition, is clearly a non-commutative belt since in general f(g(x)) ≠ g(f(x)).
    X₈: ∃ φ ∈ V such that the products φ ⊗ x and x ⊗ φ both equal x, for all x ∈ V.

A belt satisfying axiom X₈ will be called a belt with identity. The symbol φ is used for the identity element because it is suggestive of the arithmetical zero which is the identity element in the principal interpretation (R, max, +).
    X₉: ∀ x ∈ V, ∃ x′ ∈ V such that x ⊗ x′ equals φ.

A belt satisfying axioms X₈ and X₉ will be called a division belt (by analogy with division ring). We infer as usual that the multiplicative inverse x′ of x is two-sided and unique, that (x′)′ = x, that φ′ = φ, and that (x ⊗ y)′ = y′ ⊗ x′, etc.

The system (F, max, +) supplies an example of a commutative division belt. The monotone strictly increasing real-valued functions of one real variable form a non-commutative division belt under max and composition.

From now on we use the standard notation x⁻¹ to denote the unique two-sided multiplicative inverse of x.
For a division belt V, we note:

    ∀ x, y ∈ V, x ≥ y if and only if y⁻¹ ≥ x⁻¹    (2-10)

(From (x ⊕ y) = x, on left-multiplying by x⁻¹ and right-multiplying by y⁻¹, we obtain y⁻¹ ⊕ x⁻¹ = y⁻¹, and similarly for the converse.)
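In the commutative division belt (F, max, +) the inverse of x is −x and φ = 0, so (2-10) becomes the familiar fact that negation reverses order. A quick numeric sketch of ours:

```python
# In (F, max, +): multiplication is +, identity is 0, inverse of x is -x.
def inv(x):
    return -x

samples = [-5.0, -1.0, 0.0, 2.0, 9.5]
for x in samples:
    assert x + inv(x) == 0.0      # X9: x combined with its inverse gives the identity
    assert inv(inv(x)) == x       # the inverse of the inverse is x
for x in samples:
    for y in samples:
        # (2-10): x >= y if and only if inv(y) >= inv(x)
        assert (x >= y) == (inv(y) >= inv(x))
print("division-belt identities verified")
```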
    X₁₀: ∃ θ ∈ V such that for all x ∈ V, x ⊕ θ = x, and x ⊗ θ, θ ⊗ x both equal θ.

An element (necessarily unique) which satisfies axiom X₁₀ is called a zero or a null. It is easily verified that if S = F ∪ {−∞} consists of the real numbers extended by −∞, then the system (S, max, +) has the element −∞ as a null.
Of course, L₃ presents the lattice absorption laws which, taken in conjunction with the fact that S is a semilattice under ⊕′ and a semilattice under ⊕, give necessary and sufficient conditions that (S, ⊕, ⊕′) be a lattice [32]. We call the operation ⊕′ a dual addition. By analogy with (2-1), we can define a binary relation ≤ corresponding to ⊕′.

As is well known, the force of the lattice absorption laws is that the partial order under the relation ≥ and the partial order under the relation ≤ are consistent. Hence a mapping which is isotone with respect to either of ≥ and ≤ is isotone with respect to the other, and we shall merely say that it is isotone.

We shall use whichever of the symbols ≥ or ≤ is typographically convenient in a given context. We note en passant that from L₃ and (2-11) there holds for all x, y ∈ S:

    x ⊕ y ≥ x ≥ x ⊕′ y    (2-12)
    f′_a: x → a ⊕′ x

and dual addition:

Proposition 2-6. Let (S, ⊕, ⊕′) satisfy axioms L₁, L₂, L₃, and let K consist of ⊕, ⊕′, together with the identity mapping and all translations and dual translations. Then K̂, the composition algebra generated by K, consists entirely of isotone functions. •
    If w ≥ x then y ⊕′ w ≥ y ⊕′ x
    If w ≥ x and y ≥ z then w ⊕′ y ≥ x ⊕′ z    } (2-13)
    a ⊕ (x ⊕′ y) = (a ⊕ x) ⊕′ (a ⊕ y)
    a ⊕′ (x ⊕ y) = (a ⊕′ x) ⊕ (a ⊕′ y)    (∀ a, x, y ∈ S)    } (2-14)

These are, of course, the lattice distributive laws. It is known that either of (2-14), in the presence of L₁, L₂, L₃, implies the other, and (S, ⊕, ⊕′) is then a distributive lattice. We shall not assume in general that our systems (S, ⊕, ⊕′) form distributive lattices. However, the following distributive inequalities always hold for all a, x, y ∈ (S, ⊕, ⊕′):
    a ⊕ (x ⊕′ y) ≤ (a ⊕ x) ⊕′ (a ⊕ y)
    a ⊕′ (x ⊕ y) ≥ (a ⊕′ x) ⊕ (a ⊕′ y)    } (2-15)
These inequalities are standard lattice-theoretic results, but it is useful from our point of view to see them as consequences of the first two clauses of the following lemma:
Lemma 2-7. Let (S, ⊕, ⊕′) and (T, ⊕, ⊕′) both satisfy L₁, L₂, L₃. If f: (S, ⊕) → (T, ⊕) is a homomorphism then for all x, y ∈ S there holds:

    (since f is a homomorphism)
    = x ⊕ y
    = f⁻¹(f(x)) ⊕ f⁻¹(f(y))
    = x ⊕′ y

Applying the isotone function f to this inequality, we obtain f(x ⊕′ y) = f(x) ⊕′ f(y) in this case. The argument for g is dual. •
If, for a given commutative band (S, ⊕), it is possible to find a binary combining operation ⊕′ such that L₁, L₂, L₃ are satisfied, we shall say that S has a duality, or that a dual addition is defined. It is not difficult to show that this can be done in at most one way. If (S, ⊕) is linear, then it is easy to verify that a dual addition is defined in the obvious way by:

    x ⊕′ y = y if x ⊕ y = x
           = x otherwise    } (2-16)

The resulting system is a distributive lattice.
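In the principal interpretation, definition (2-16) simply recovers min from max, and the absorption laws L₃ can then be spot-checked. A Python sketch of ours:

```python
def oplus(x, y):        # addition of the linear band (F, max)
    return max(x, y)

def oplus_dual(x, y):   # dual addition, defined exactly as in (2-16)
    return y if oplus(x, y) == x else x

# The absorption laws then hold, and the dual addition is min:
for x in [-4, 0, 3]:
    for y in [-1, 2, 5]:
        assert oplus(x, oplus_dual(x, y)) == x       # absorption
        assert oplus_dual(x, oplus(x, y)) == x       # dual absorption
        assert oplus_dual(x, y) == min(x, y)         # (2-16) yields min
print("absorption laws verified")
```

The point of the sketch is that the dual addition is not an extra assumption for a linear band: it is determined, in at most one way, by the addition itself.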
We shall call the operations ⊕′ and ⊗′ a dual addition and dual multiplication respectively. As before, any axiom or expression or line of reasoning will be called the dual of another if the one can be formally derived from the other by systematically interchanging the symbols ⊕ and ⊕′, the symbols ⊗ and ⊗′, and the symbols ≥ and ≤. Obviously the dual of any statement deducible from M₁, M₂ and L₃ is also so deducible.
Since left multiplications are endomorphisms of (V, ⊕), Lemma 2-7 gives for a system satisfying M₁, M₂ and L₃:

    ∀ x, y, z ∈ V: x ⊗ (y ⊕′ z) ≤ (x ⊗ y) ⊕′ (x ⊗ z)
    and dually     x ⊗′ (y ⊕ z) ≥ (x ⊗′ y) ⊕ (x ⊗′ z)    } (2-17)

Proposition 2-8. Let (V, ⊕, ⊗, ⊕′, ⊗′) satisfy axioms M₁, M₂ and L₃, and let K consist of ⊕, ⊕′, ⊗, ⊗′ together with the identity mapping and all translations, dual translations, left and right multiplications and dual left and right multiplications. Then K̂, the composition algebra generated by K, consists entirely of isotone functions. •
A number of trivial interpretations of M₁, M₂, L₃ are possible. If (V, ⊕, ⊕′)

    a ⊗ b ≠ a ⊕′ b
    c ⊗′ d ≠ c ⊕ d    (2-18)

Relations (2-18) also exclude the possibility of (V, ⊕, ⊕′) being a trivial lattice, i.e. a lattice in which ⊕ and ⊕′ coincide, since for such a lattice the relations L₃ immediately yield:

    (2-19)
defines a semilattice operation, such that (V, ⊕, ⊕′) becomes a (distributive) lattice. The relations (2-17) are all satisfied as equalities when ⊗′ is taken as ⊗. In our terms, V acquires a duality with a self-dual multiplication.

Conversely, if (V, ⊕, ⊗) is a non-degenerate division belt having a duality with self-dual multiplication, then V is a lattice-ordered group and, from the theory of such groups, it is known that the relation (2-19) holds between the two semilattice operations. Summarising:
    T = {x | x ∈ S, x ≤ b}
    U = {x | x ∈ S, a ≤ x}
    V = {x | x ∈ S, a ≤ x ≤ b} = T ∩ U

Further, if S is a linear commutative band, so are T, U, V. And if a < b then W, X, Y are also linear commutative bands, where:

    W = {x | x ∈ S, x < b}
    X = {x | x ∈ S, a < x}
    Y = {x | x ∈ S, a < x < b} = W ∩ X.

In general, if S is a belt, we cannot immediately infer that T, U etc. are also belts. However, if S is a belt with an identity element φ, let us say that a ∈ S is
In this memorandum we shall use the symbols Σ and Π, precisely as in elementary algebra, to denote iterated sums and products using the operations ⊕, ⊕′, ⊗ and ⊗′. For example, if u₁, u₂, ..., u_n are n given (not necessarily distinct) elements of a commutative band, we shall write Σ⊕_{i=1}^{n} u_i to denote the sum u₁ ⊕ u₂ ⊕ ... ⊕ u_n, and Σ⊕′_{i=1}^{n} u_i to denote the sum u₁ ⊕′ u₂ ⊕′ ... ⊕′ u_n.
n 1 n
I 9) x.)
L $' x.
J
1
(31)
j=l J j=l
Suppose now that x₁, ..., x_n and y₁, ..., y_n are (not necessarily distinct)
elements of a commutative band, such that:
   x_i ≥ y_i     (i=1,...,n)     (3-4)
And if a dual multiplication is defined, we infer from the dual of (2-9) that:
   Π_{i=1}^{n}⊗' x_i ≥ Π_{i=1}^{n}⊗' y_i     (3-7)
Suppose S and T are finite subsets of a commutative band, say S = {x₁, ..., x_m}
and T = {y₁, ..., y_n}, such that x ≥ y for all x ∈ S, y ∈ T. If m ≤ n, then we may
"fill" S with (n−m) further copies of x₁; if m ≥ n then we may "fill" T with (m−n)
further copies of y₁. Applying (3-4) and (3-5), we have:
   Σ_{x∈S}⊕ x ≥ Σ_{y∈T}⊕ y     (3-8)
and if the dual addition is defined, then also Σ_{x∈S}⊕' x ≥ Σ_{y∈T}⊕' y.     (3-9)
   ∀ x ∈ S,  x ≥ Σ_{y∈T}⊕ y     (3-12)
Relations (3-4) to (3-13) are of course all trivial, and (3-8) and (3-9) are
weak as well as trivial. For the principal interpretation, (3-8) says, for example,
that if all x's are greater than all y's then the maximum x is greater than the
maximum y. Obviously, we can derive the stronger result that the minimum x is greater
than the maximum y, and it is instructive to do this using the principle embodied in
the following proposition, which is a verbal restatement of the inequalities (3-3)
to (3-7) inclusive.
•
governing each side of the original inequality, with the index in both cases running
over the same finite subset of V as for the universal quantifier.
We shall call this manipulation principle closing with respect to the index in
question. It will of course be obvious from the context which sigma or pi we use to
do the closing.
In this connection, we may note that (3-2) implies the following: in closing an
inequality with respect to a given index, using either of the operators Σ⊕ or Σ⊕',
it is only necessary to apply the operator to a given side of the inequality if the
given index actually occurs therein (other than as a dummy variable).
We consider first some very simple examples designed to illustrate these
manipulation principles. Some much less trivial applications will then be given.
Accordingly, let S, T be finite subsets of a commutative band in which a dual
addition is defined, and suppose:
   ∀ x ∈ S, y ∈ T,  x ≥ y
For each fixed x ∈ S, we may close with respect to y, to obtain:
   ∀ x ∈ S,  x ≥ Σ_{y∈T}⊕ y
Since Σ_{y∈T}⊕ y is some fixed element, we may now close with respect to x to
obtain the intuitive result:
   Σ_{x∈S}⊕' x ≥ Σ_{y∈T}⊕ y     (3-14)
As a further example, let S be any finite set. We obtain from (2-1), using the
definition of Σ⊕:
   ∀ x ∈ S,  Σ_{x∈S}⊕ x ≥ x     (3-15)
Closing with respect to x:
   Σ_{x∈S}⊕ x ≥ Σ_{x∈S}⊕' x     (3-16)
   ∀ x ∈ S,  x ≥ Σ_{x∈S}⊕' x     (3-17)
   ∀ x ∈ S,  Σ_{x∈S}⊕ x ≥ x ≥ Σ_{x∈S}⊕' x     (3-18)
which expresses the intuitive fact that S is bounded by its maximum and minimum.
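In the principal interpretation, Σ⊕ and Σ⊕' over a finite set are simply its maximum and minimum, so relations (3-15) to (3-18) can be checked mechanically. The following Python sketch is an illustration added here, not part of the original notation; `max` and `min` stand for Σ⊕ and Σ⊕':

```python
# Principal interpretation: x ⊕ y = max(x, y), so the iterated sum
# Σ⊕ over a finite set S is max(S), and the dual sum Σ⊕' is min(S).
S = [3.0, -1.5, 7.0, 2.0]

sum_plus = max(S)       # Σ⊕  x over x ∈ S
sum_dual = min(S)       # Σ⊕' x over x ∈ S

# (3-15)/(3-17): every element is dominated by Σ⊕ and dominates Σ⊕'.
for x in S:
    assert sum_plus >= x >= sum_dual     # (3-18)

# (3-16): Σ⊕ x ≥ Σ⊕' x, obtained by closing (3-15) w.r.t. x.
assert sum_plus >= sum_dual
```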
The following theorems illustrate the use of the principle of closing. They also
illustrate the use of a double suffix notation.
Theorem 3-2. Let w_ij (i=1,...,m; j=1,...,n) be mn arbitrary elements of a belt.
Then P ≥ Q, where P = Π_{i=1}^{m}⊗ ( Σ_{j=1}^{n}⊕ w_ij ) and
Q = Σ_{j=1}^{n}⊕ ( Π_{i=1}^{m}⊗ w_ij ). Moreover, if a duality is defined, we have:
   P ≥ Q ≥ S ≥ T
where S = Σ_{j=1}^{n}⊕' ( Π_{i=1}^{m}⊗ w_ij ) and T = Π_{i=1}^{m}⊗ ( Σ_{j=1}^{n}⊕' w_ij ),
and where P', Q', S', T' are derived from P, Q, S, T respectively by replacing ⊗ by ⊗'.
Proof. By (3-15):
   ∀ i=1,...,m; j=1,...,n:  Σ_{j=1}^{n}⊕ w_ij ≥ w_ij
Closing w.r.t. i:
   ∀ j=1,...,n:  Π_{i=1}^{m}⊗ ( Σ_{j=1}^{n}⊕ w_ij ) ≥ Π_{i=1}^{m}⊗ w_ij
Closing w.r.t. j:
   Π_{i=1}^{m}⊗ ( Σ_{j=1}^{n}⊕ w_ij ) ≥ Σ_{j=1}^{n}⊕ ( Π_{i=1}^{m}⊗ w_ij )
Now in (3-16) take S to be the set of products Π_{i=1}^{m}⊗ w_ij (j=1,...,n), giving:
   Σ_{j=1}^{n}⊕ ( Π_{i=1}^{m}⊗ w_ij ) ≥ Σ_{j=1}^{n}⊕' ( Π_{i=1}^{m}⊗ w_ij )
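In the principal interpretation (⊕ = max, ⊕' = min, Π⊗ = ordinary sum), the chain in Theorem 3-2 says: the sum of row maxima dominates the largest column sum, which dominates the smallest column sum, which dominates the sum of row minima. A quick numerical check in Python (an illustration only, with randomly chosen finite elements):

```python
import random

random.seed(1)
m, n = 4, 5
w = [[random.uniform(-10, 10) for _ in range(n)] for _ in range(m)]

# Principal interpretation: Σ⊕ = max, Σ⊕' = min, Π⊗ = ordinary sum.
P = sum(max(w[i]) for i in range(m))                       # Π⊗ (Σ⊕  w_ij)
Q = max(sum(w[i][j] for i in range(m)) for j in range(n))  # Σ⊕ (Π⊗  w_ij)
S = min(sum(w[i][j] for i in range(m)) for j in range(n))  # Σ⊕'(Π⊗  w_ij)
T = sum(min(w[i]) for i in range(m))                       # Π⊗ (Σ⊕' w_ij)

eps = 1e-9   # guard against floating-point rounding
assert P >= Q - eps and Q >= S - eps and S >= T - eps
```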
and similarly P' ≥ Q' ≥ S' ≥ T'.  •
If m=2, we can in particular cases interpolate a term between Q and S (or Q' and S'),
as we now show.
Theorem 3-3. Suppose m=2 in Theorem 3-2. Then Q ≥ R and R' ≥ S', where:
   either R = ( Σ_{j=1}^{n}⊕ w_1j ) ⊗ ( Σ_{j=1}^{n}⊕' w_2j )
   or     R = ( Σ_{j=1}^{n}⊕' w_1j ) ⊗ ( Σ_{j=1}^{n}⊕ w_2j )
and
   either R' = ( Σ_{j=1}^{n}⊕ w_1j ) ⊗' ( Σ_{j=1}^{n}⊕' w_2j )
   or     R' = ( Σ_{j=1}^{n}⊕' w_1j ) ⊗' ( Σ_{j=1}^{n}⊕ w_2j )
Furthermore, if the elements w_ij are drawn from a belt with duality which either is
linear or has a self-dual multiplication, then Q ≥ R ≥ S and Q' ≥ R' ≥ S' for either
interpretation.
Proof. There are two interpretations for R, and two for R'. Let us first consider
the case:
   R = ( Σ_{j=1}^{n}⊕ w_1j ) ⊗ ( Σ_{j=1}^{n}⊕' w_2j )
     = Σ_{j=1}^{n}⊕ ( w_1j ⊗ Σ_{j=1}^{n}⊕' w_2j )     (using induction on X6)     (3-19)
Now by (3-17):
   ∀ j=1,...,n:  w_1j ⊗ w_2j ≥ w_1j ⊗ Σ_{j=1}^{n}⊕' w_2j
Closing w.r.t. j, we obtain Q ≥ R. We must now prove R ≥ S, under the given extra
assumptions.
If multiplication is self-dual, then a dual form of the foregoing argument will
suffice. So let us assume now that the belt is linear. Then there are indices
J (1 ≤ J ≤ n) and K (1 ≤ K ≤ n) such that:
   w_1J = Σ_{j=1}^{n}⊕ w_1j  and  w_2K = Σ_{j=1}^{n}⊕' w_2j
Then R = w_1J ⊗ w_2K ≥ w_1K ⊗ w_2K ≥ Σ_{j=1}^{n}⊕' ( w_1j ⊗ w_2j )     (by (3-17))
But this is: R ≥ S. Hence Q ≥ R ≥ S. The other case for R is handled similarly,
and similarly Q' ≥ R' ≥ S' for both interpretations of R'.
Recall now from Proposition 2-9 that a division belt can always be given a
duality with self-dual multiplication.
Theorem 3-4. In a division belt, we have for all a_j, b_j (j=1,...,n):
   Σ_{j=1}^{n}⊕' ( a_j ⊗ b_j⁻¹ ) ≤ ( Σ_{j=1}^{n}⊕ a_j ) ⊗ ( Σ_{j=1}^{n}⊕ b_j )⁻¹ ≤ Σ_{j=1}^{n}⊕ ( a_j ⊗ b_j⁻¹ )
and
   Σ_{j=1}^{n}⊕' ( a_j⁻¹ ⊗ b_j ) ≤ ( Σ_{j=1}^{n}⊕ a_j )⁻¹ ⊗ ( Σ_{j=1}^{n}⊕ b_j ) ≤ Σ_{j=1}^{n}⊕ ( a_j⁻¹ ⊗ b_j )  •
Proof. The first pair of relations follow from Theorem 3-3 on writing w_1j = a_j and
w_2j = b_j⁻¹ and using (3-1). The second pair are proved similarly.
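In the principal interpretation Theorem 3-4 says, for example, that max_j a_j − max_j b_j lies between min_j(a_j − b_j) and max_j(a_j − b_j). A quick numerical check (illustrative Python; ⊕ rendered as max, ⊗ as +, inversion as negation):

```python
import random

random.seed(2)
n = 6
a = [random.uniform(-5, 5) for _ in range(n)]
b = [random.uniform(-5, 5) for _ in range(n)]

# Division belt of the reals: ⊕ = max, ⊗ = +, x⁻¹ = -x, ⊕' = min.
diffs = [a[j] - b[j] for j in range(n)]
middle = max(a) - max(b)        # (Σ⊕ a_j) ⊗ (Σ⊕ b_j)⁻¹

eps = 1e-9
assert min(diffs) <= middle + eps
assert middle <= max(diffs) + eps
```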
We may take this further. Suppose in (3-15) we apply separately to the two
expressions x and Σ_{x∈S}⊕ x identical sequences of translations, left and right multi-
plications, dual translations and dual left and right multiplications.
The validity of the inequality ≥ is preserved because, as observed in
Proposition 2-8, multiplications and translations and their duals are isotone. We
arrive in each case at an inequality α ≥ β between two expressions such that β can
be formally obtained from α by deleting the operator Σ⊕. We infer the manipulation
principles embodied in the following proposition.
   ∀ x ∈ S,  (a ⊕' b) ⊕ ( c ⊗ Σ_{x∈S}⊕ x ) ≥ (a ⊕' b) ⊕ (c ⊗ x)
Proof. Consider:
Opening w.r.t. k:
   ∀ i=1,...,m; j=1,...,n; k=1,...,m:
      [ a_ij ⊗ Σ_{k=1}^{m}⊕' ( a_kj⁻¹ ⊗ b_k ) ] ⊗ b_i⁻¹ ≤ a_ij ⊗ a_kj⁻¹ ⊗ b_k ⊗ b_i⁻¹
   ∀ i=1,...,m; j=1,...,n:  P ≤ [ a_ij ⊗ Σ_{k=1}^{m}⊕' ( a_kj⁻¹ ⊗ b_k ) ] ⊗ b_i⁻¹
Now, for a linear division belt, the sum in the last expression is equal to one of
the summands. Hence in this case:
   ∀ i=1,...,m; j=1,...,n:  P ≤ a_ij ⊗ a_{k(j)j}⁻¹ ⊗ b_{k(j)} ⊗ b_i⁻¹
where k(j) is the index for which a_kj⁻¹ ⊗ b_k achieves its minimum for given j.
The products −∞ ⊗ +∞ and +∞ ⊗ −∞ both equal −∞.
   If x ∈ G ∪ {−∞} then x ⊗ −∞ = −∞ ⊗ x = −∞ by definition
   If x ∈ G ∪ {+∞} then x ⊗ +∞ = +∞ ⊗ x = +∞ by definition     }  (4-1)
If we call the elements of G finite elements, then definitions (4-1) extend the
scope of the operation ⊗ by means of the rule "finite element times infinite element
= infinite element", together with the conventional rule "−∞ times +∞ = −∞". This last
ensures that −∞ acts as a null element in the entire system (W, ⊕, ⊗), but constitutes
an asymmetry between −∞ and +∞ which we redress by introducing a dual multiplication
⊗' which acts exactly like ⊗ except that we stipulate: −∞ ⊗' +∞ = +∞ ⊗' −∞ = +∞.
(See [37], where this idea is presented, but in a different notation.)
For convenience we set out all the definitions in tabular form below.
Fig 4-1
we see that φ is an identity element for the whole of W with respect to both ⊗ and
⊗'. The inverse of an element x ∈ G is as usual written x⁻¹. In many practical
applications, multiplication in G will be commutative, but we do not need to assume
this yet in the abstract argument. Most of our results are essentially extendable to
any lattice-ordered semigroup which satisfies any of the criteria for embeddability
in a group (see e.g. [27]).
By a direct verification of the relevant axioms, and an appeal to the discussion
of Section 2-8, we can confirm the following:
Proposition 4-1. The system (W, ⊕, ⊗, ⊕', ⊗') is a belt having a duality, and
containing as sub-belts the systems (G ∪ {−∞}, ⊕, ⊗, ⊕') and (G ∪ {+∞}, ⊕, ⊗, ⊕'),
which are belts having a duality with self-dual multiplication, and the system
(G, ⊕, ⊗, ⊕'), which is a division belt having a duality with self-dual multiplication.
In all these systems, φ acts as identity element with respect to both ⊗ and ⊗',
whilst +∞ and −∞ (when present) act as null elements with respect to ⊗ and ⊗'
respectively.  •
Notice that we do not exclude the possibility that G may be the trivial group {φ},
in which case the belts mentioned in Proposition 4-1 may well be degenerate, or have
trivial dualities.
By a direct consideration of cases, we can confirm that a blog satisfies the
following "associative inequalities". These will be important in our later dis-
cussions, when we shall often refer to them as axiom X12.
Proposition 4-2. Let (W, ⊕, ⊗, ⊕', ⊗') be a blog. Then for all x, y, z ∈ W we have:
   x ⊗ (y ⊗' z) ≤ (x ⊗ y) ⊗' z     }  (4-2)
   x ⊗' (y ⊗ z) ≥ (x ⊗' y) ⊗ z  •
(Inequalities X12 occur, in another notation, in [37].)
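In the principal interpretation these case distinctions can be coded directly and the inequalities (4-2) checked exhaustively over sample elements. The following Python sketch is an illustration; `otimes` and `otimes_dual` are our names for ⊗ and ⊗', with finite multiplication rendered as ordinary addition:

```python
from itertools import product

INF = float('inf')

def otimes(x, y):
    """x ⊗ y: -inf annihilates, in particular (-inf) ⊗ (+inf) = -inf (4-1)."""
    if -INF in (x, y):
        return -INF
    if INF in (x, y):
        return INF
    return x + y

def otimes_dual(x, y):
    """x ⊗' y: acts like ⊗ except (-inf) ⊗' (+inf) = (+inf) ⊗' (-inf) = +inf."""
    if INF in (x, y):
        return INF
    if -INF in (x, y):
        return -INF
    return x + y

# Exhaustive check of the associative inequalities (4-2) on sample elements.
W = [-INF, -2.0, 0.0, 3.0, INF]
for x, y, z in product(W, repeat=3):
    assert otimes(x, otimes_dual(y, z)) <= otimes_dual(otimes(x, y), z)
    assert otimes_dual(x, otimes(y, z)) >= otimes(otimes_dual(x, y), z)
```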
Proposition 4-3. Let (W, ⊕, ⊗, ⊕', ⊗') be a blog and let x, y ∈ W. Then: if x ⊗ y = +∞
Proposition 4-4. The principal interpretation is a linear belt and satisfies X2, X6,
[Rows 7-18 of the table, extraction-damaged: the negative reals, rationals and
integers; the non-positive reals, rationals and integers; the positive integers; and
the non-negative reals, rationals and integers, each ordered by ≥ and each classified
as a belt.]
    x     y  |  x⊕y   x⊕'y   x⊗y   x⊗'y
   −∞    −∞  |  −∞    −∞     −∞     −∞
   −∞     φ  |   φ    −∞     −∞     −∞
   −∞    +∞  |  +∞    −∞     −∞     +∞
    φ    −∞  |   φ    −∞     −∞     −∞          Fig 4-3
    φ     φ  |   φ     φ      φ      φ
    φ    +∞  |  +∞     φ     +∞     +∞
   +∞    −∞  |  +∞    −∞     −∞     +∞
   +∞     φ  |  +∞     φ     +∞     +∞
   +∞    +∞  |  +∞    +∞     +∞     +∞
Lemma 4-5. If W is a blog then W contains at least three distinct elements −∞, φ, +∞.
By comparing the tables of Figs 4-1 and 4-3, we see that the system which arises when
the operations in W are restricted to these three elements is isomorphic to the
3-element blog.  •
We show now that the 3-element blog is the only blog having a finite number of
elements. First we record a simple lemma, for convenience in later arguments.
Theorem 4-7. Let G be a division belt other than the one-element division belt {φ}.
Then for each u ∈ G we can find v ∈ G such that v > u.
Proof. By hypothesis we can find a ∈ G with a ≠ φ. Now a ⊕ φ ≥ φ, giving two possi-
bilities:
   (i) a ⊕ φ > φ
   (ii) a ⊕ φ = φ, so φ ≥ a. But φ ≠ a, whence φ > a.
In case (i), define β = a ⊕ φ; in case (ii) define β = a⁻¹. Then in either case (using
Lemma 4-6 for case (ii)) we have found β > φ. If now u ∈ G is arbitrary we have by (2-9):
   β ⊗ u ≥ φ ⊗ u = u
But we cannot have β ⊗ u = u, else:
   β = (β ⊗ u) ⊗ u⁻¹ = φ, contradicting β > φ.
Hence β ⊗ u > u.  •
Corollary 4-8. The only division belt having a finite number of elements is the one-
element division belt {φ}, and the only blog having a finite number of elements is the
3-element blog.
   u = Σ_{x∈G}⊕ x
Proof. Certainly (y ⊗' z) ≥ (y ⊗ z), since these products are actually equal in all
cases except where one of y, z is +∞ and the other is −∞, when we have
(y ⊗' z) = +∞ > −∞ = (y ⊗ z). Hence by (2-9):
   u₁ ⊗ u₂ ≥ v₁ ⊗ v₂
But we cannot have u₁ ⊗ u₂ = v₁ ⊗ v₂, else:
   u₂ = u₁⁻¹ ⊗ (u₁ ⊗ u₂) = u₁⁻¹ ⊗ (v₁ ⊗ v₂) = (u₁⁻¹ ⊗ v₁) ⊗ v₂ ≤ φ ⊗ v₂ = v₂
But this contradicts u₂ > v₂. Hence u₁ ⊗ u₂ cannot equal v₁ ⊗ v₂, so the only
possibility is that u₁ ⊗ u₂ > v₁ ⊗ v₂. The other case is proved similarly.
   Π_{i=1}^{n}⊗ u_i > Π_{i=1}^{n}⊗ v_i
Hence if v_i ≤ φ (i=1,...,n) and Π_{i=1}^{n}⊗ v_i = φ, then v_i = φ (i=1,...,n).  •
Proof. The first assertion follows from Lemma 4-11 by an obvious iteration, and the
second follows by then taking u₁ = ... = u_n = φ.
Finally, the following result will be useful when we discuss conjugation later.
Lemma 4-13. Let (W, ⊕, ⊗, ⊕', ⊗') be a blog. For each x ∈ W, define x* ∈ W as follows:
   If x is finite then x* = x⁻¹
   If x = −∞ then x* = +∞
   If x = +∞ then x* = −∞
Then: (i)  ∀ x ∈ W,  x* ⊗ x = x ⊗ x* = φ if x is finite, and −∞ otherwise.
      (ii) ∀ x ∈ W,  x* ⊗' x = x ⊗' x* = φ if x is finite, and +∞ otherwise.
   x ≥ y  ⟺  x* ⊗ y ≤ φ  ⟺  y ⊗ x* ≤ φ  ⟺  x ⊗' y* ≥ φ  ⟺  y* ⊗' x ≥ φ
Proof. Relations (i) and (ii) follow directly from the definition of a blog. Hence
we may write:
   x* ⊗ x = x ⊗ x* ≤ φ ≤ x* ⊗' x = x ⊗' x*     (4-3)
Now if x ≥ y:
   x* ⊗ y ≤ x* ⊗ x ≤ φ     (by (4-3))
Conversely, if x* ⊗ y ≤ φ we may left-multiply by x to give:
Evidently, (4-4) and (4-5) imply x ≥ y. Hence x ≥ y if and only if x* ⊗ y ≤ φ.
The equivalence of the other relations is proved similarly.  •
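For the principal interpretation, x* is simply −x (with the two infinities swapped), and part (i) and the order criterion of Lemma 4-13 can be checked directly. A Python sketch (an illustration; the function names are ours):

```python
INF = float('inf')

def conj(x):
    """x*: negate finite elements; IEEE arithmetic swaps the infinities too."""
    return -x

def otimes(x, y):
    """x ⊗ y in the blog (-inf annihilates, per (4-1))."""
    if -INF in (x, y):
        return -INF
    if INF in (x, y):
        return INF
    return x + y

# (i): x ⊗ x* = φ = 0 for finite x, and -inf otherwise.
assert otimes(5.0, conj(5.0)) == 0.0
assert otimes(INF, conj(INF)) == -INF
assert otimes(-INF, conj(-INF)) == -INF

# The order criterion: x ≥ y iff x* ⊗ y ≤ φ.
x, y = 4.0, 1.5
assert (x >= y) == (otimes(conj(x), y) <= 0.0)
```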
5. THE SPACES E_n AND M_mn
5-1. Band-Spaces
Let V = (V, ⊕, ⊗) be a belt. Let T = (T, ⊕) be a given system and let there
be defined a right-multiplication of elements of T by elements of V, that is to
say a binary combining operation ⊗ such that x ⊗ λ ∈ T for each pair
x ∈ T and λ ∈ V. We shall say that (T, ⊕) (or just T) is a right band-space,
or briefly a space (over (V, ⊕, ⊗), or briefly over V) if the following axioms
hold:
   S3: ∀ x, y ∈ T and ∀ λ ∈ V:  (x ⊕ y) ⊗ λ = (x ⊗ λ) ⊕ (y ⊗ λ)
   S4: ∀ x ∈ T and ∀ λ, μ ∈ V:  x ⊗ (λ ⊕ μ) = (x ⊗ λ) ⊕ (x ⊗ μ)
Such spaces, then, play the role of vector-spaces in our theory. We have
followed the usual algebraic practice of using the same symbols ⊕ and ⊗ to
denote operations within V, within T and between T and V. We assume further:
Proposition 5-1. Let (V, ⊕, ⊗) be a belt. Then (V, ⊕) is a space over (V, ⊕, ⊗).
If (T, ⊕) is any space over (V, ⊕, ⊗) then (V, ⊕, ⊗) has a representation as a
belt of endomorphisms of (T, ⊕). Finally, if (W, ⊕, ⊗) is a sub-belt of (V, ⊕, ⊗)
then (T, ⊕) is also a space over (W, ⊕, ⊗).  •
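A concrete right band-space in the principal interpretation is the set of n-tuples of reals under componentwise max, with x ⊗ λ adding the scalar λ to every component. The sketch below (illustrative Python; the function names are ours, not the text's) checks axioms S3 and S4 numerically:

```python
import random

# Right band-space sketch: T = n-tuples over the max-plus reals.
def vec_add(x, y):
    """x ⊕ y: componentwise max."""
    return [max(a, b) for a, b in zip(x, y)]

def vec_scale(x, lam):
    """x ⊗ λ: add the scalar to every component."""
    return [a + lam for a in x]

random.seed(4)
x = [random.uniform(-5, 5) for _ in range(4)]
y = [random.uniform(-5, 5) for _ in range(4)]
lam, mu = 2.5, -1.0

# S3: (x ⊕ y) ⊗ λ = (x ⊗ λ) ⊕ (y ⊗ λ)
assert vec_scale(vec_add(x, y), lam) == vec_add(vec_scale(x, lam), vec_scale(y, lam))
# S4: x ⊗ (λ ⊕ μ) = (x ⊗ λ) ⊕ (x ⊗ μ), where λ ⊕ μ = max(λ, μ)
assert vec_scale(x, max(lam, mu)) == vec_add(vec_scale(x, lam), vec_scale(x, mu))
```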
If (S, ⊕) and (T, ⊕) are both spaces over V, we shall call a homomorphism
g: (S, ⊕) → (T, ⊕) right-linear (over V) if it satisfies:
Proposition 5-2. Let (S, ⊕), (T, ⊕) be given spaces over a belt (V, ⊕, ⊗). Then
(Hom_V(S, T), ⊕) is a commutative band when ⊕ is defined as in (2-2).  •
If (V, ⊕, ⊗) is a belt and (T, ⊕) a given system, let there be defined a left-
multiplication of elements of T by elements of V, that is to say a binary
combining operation ⊗ such that λ ⊗ x ∈ T for each pair λ ∈ V and x ∈ T.
It is clear that we may formulate "left" variants of S1 to S5 and define
thereby the axioms of a left band-space (over V). All the arguments of Section 5-1
obviously then go through mutatis mutandis. In particular, we may define left-linear
homomorphisms of left band-spaces by analogy with (5-1).
As an example of a left band-space, let (T, ⊕) be the set of all extended-
real-valued functions of one extended real variable, under max, and let (V, ⊕, ⊗)
be the belt of all monotone non-decreasing extended-real-valued functions of one
extended real variable, under max and composition. For λ ∈ V and f ∈ T define
λ ⊗ f ∈ T as the function such that (λ ⊗ f)(x) = λ(f(x)) for all extended reals x.
Then (T, ⊕) is a left band-space over V.
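The left band-space example above can be rendered directly; the sketch below (illustrative Python, with a hypothetical f and λ of our own choosing) realises λ ⊗ f as composition:

```python
INF = float('inf')

# T: extended-real functions under pointwise max; V: monotone non-decreasing
# functions under max and composition. λ ⊗ f is the composite λ ∘ f.
f = lambda x: min(x, 3.0)                                 # an element of T
lam = lambda x: x + 1.0 if x not in (INF, -INF) else x    # monotone, in V

def left_mult(lam, f):
    """(λ ⊗ f)(x) = λ(f(x))."""
    return lambda x: lam(f(x))

g = left_mult(lam, f)
assert g(2.0) == 3.0     # λ(f(2)) = min(2, 3) + 1
assert g(10.0) == 4.0    # λ(f(10)) = 3 + 1
```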
Proposition 5-3. Let (T, ⊕) be a right band-space over a belt (V, ⊕, ⊗). Then
(T, ⊕) … Hom_V(T, …  •
Notice that we shall allow only the epithet "right" to be tacit: "space",
without further modifiers, will always mean "right band-space", never "left band-space".
A 2-sided space is a triple (L, T, R) such that the following axioms hold:
T3: ∀ λ ∈ L, ∀ x ∈ T and ∀ μ ∈ R:
Evidently (5-1) is the particular form of (5-2) for the 2-sided space  •
(ii) T is a right band-space over R and L has a representation
as a sub-belt of (Hom_R(T, T), ⊕, ⊗).
For a given 2-sided space (L, T, R), axiom S3 implies that right-
multiplication by an element of R constitutes an endomorphism of T, and so is
isotone. Similarly, left-multiplication by an element of L is an isotone function
on T to T. Then the following principle of isotonicity for 2-sided spaces follows.
actually a belt then we may adjoin its multiplication to K and K̄ will still
consist entirely of isotone functions.  •
5-3. Function Spaces
Let E₁ = (E₁, ⊕, ⊗) be a given belt. An important class of spaces over E₁
is the class of function spaces, that is to say spaces in which the commutative band
(T, ⊕) is actually the commutative band ((E₁)^U, ⊕) of all functions from some given
set U to E₁, with addition defined as in (2-2). Such spaces are naturally
2-sided, since for f ∈ (E₁)^U and λ ∈ E₁ we may define f ⊗ λ, λ ⊗ f ∈ (E₁)^U
respectively as functions such that:
   ∀ x ∈ U:  (f ⊗ λ)(x) = f(x) ⊗ λ  and  (λ ⊗ f)(x) = λ ⊗ f(x)
When discussing function spaces over a belt E₁, we shall often call the
elements λ ∈ E₁ scalars. The multiplications f ↦ f ⊗ λ and f ↦ λ ⊗ f
will then be called (right) scalar multiplications and left scalar multiplications
respectively.
Related to the spaces of n-tuples over E₁ are the spaces of matrices, defined
as follows. Let m, n ≥ 1 be given integers. By an (m × n) matrix (over E₁)
we understand (as usual) a rectangular array of elements of E₁ arranged in m rows
and n columns:
   [ a₁₁   …   a₁ₙ ]
   [  ⋮          ⋮ ]
   [ a_m1  …  a_mn ]
Notations such as A or [a_ij] will be used to denote matrices in the usual
way, and the notation {A}_ij will mean: the element in row i and column j of
matrix A. Thus A = [{A}_ij] and {[a_ij]}_ij = a_ij.
Evidently if A ∈ M_mn and U is the set of mn index-pairs (1,1), ..., (m,n),
then A can be considered to be formally identical with the function h_A: U → E₁
defined by:
If [a_ij], [b_ij] ∈ M_mn, then [a_ij] ⊕ [b_ij] is by definition:
   [ a_ij ⊕ b_ij ]     (5-3)
If [a_ij] ∈ M_mp and [b_ij] ∈ M_pn, then [a_ij] ⊗ [b_ij] is by definition:
   [ Σ_{k=1}^{p}⊕ ( a_ik ⊗ b_kj ) ]     (5-4)
If A ∈ M_mn and B ∈ M_pq we shall say that A and B are conformable for addition
whenever both m = p and n = q; and conformable for A ⊗ B whenever n = p. The
use of the same symbols for operations in both E₁ and M_mn parallels standard
algebraic practice.
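In the principal interpretation, (5-3) and (5-4) become componentwise max and the max-plus matrix product. A minimal Python sketch (our own function names, assuming E₁ is the extended reals under max and +):

```python
INF = float('inf')

def mat_add(A, B):
    """{A ⊕ B}_ij = {A}_ij ⊕ {B}_ij, with ⊕ = max (5-3)."""
    return [[max(a, b) for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_mul(A, B):
    """{A ⊗ B}_ij = Σ⊕_k ({A}_ik ⊗ {B}_kj) = max_k (A_ik + B_kj) (5-4)."""
    p = len(B)
    return [[max(A[i][k] + B[k][j] for k in range(p))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[0.0, 2.0],
     [1.0, -INF]]
B = [[3.0, 0.0],
     [1.0, 1.0]]

assert mat_add(A, B) == [[3.0, 2.0], [1.0, 1.0]]
assert mat_mul(A, B) == [[3.0, 3.0], [4.0, 1.0]]
```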
Throughout the rest of the present publication, the notations E_n and M_mn
will be used with the meanings defined above, and the definitions will not be
repeated. When E₁ is a blog, we shall adapt the use of the word "finite" to
apply to E_n and M_mn also; specifically, an n-tuple or matrix will be called
finite if all its elements are finite.
We have not yet formally identified the structures formed by the sets M_mn,
and before doing so it is useful to establish a particular form of words in order
to economise on verbiage when in future chapters we examine other matrix structures.
So let the letters 𝒜, ℬ be used in a given context in such a way that for given
integers m, n ≥ 1 the notations 𝒜_mn, ℬ_mn represent classes of (m × n) matrices
over a given belt V. (Thus in the present chapter the use of the letter M in
this way has been introduced with V = E₁.) By the (𝒜, ℬ, V)-hom-rep statement
we shall mean the following statement:
Theorem 5-6. Let E₁ be a belt. Then the (M, M, E₁)-hom-rep statement is true.
The fact that g_A is a right-linear homomorphism follows from the classical
verification of the following identities:
   A ⊗ (B ⊗ λ) = (A ⊗ B) ⊗ λ
The following result follows immediately from Theorems 5-5 and 5-6 and
Proposition 2-3.
functions (as applied to M_mn), together with all left and right scalar
multiplications …  •
Proposition 5-7 embodies what we may call the principle of isotonicity for matrix
operations.
The commutative band (M_n1, ⊕) is of course formally identical with the commutative
band (E_n, ⊕). According to Theorem 5-6, E_n is not only a function space over E₁,
but also a space over M_nn. We see here, of course, the classical role of matrices
as linear transformations (endomorphisms) of spaces of n-tuples, and we shall be
looking more closely later at the properties of these transformations.
The results of the previous section show that there are extensive formal
similarities between the properties of matrices in minimax algebra and those in
conventional matrix theory.
   x ⊕ e = x  and  x ⊗ e = e ⊗ x = e.
Theorem 5-8. Let (E₁, ⊕, ⊗) be a belt. Then a necessary and sufficient condition
that the belt (M_nn, ⊕, ⊗) (n > 1) satisfy axiom X8 is that (E₁, ⊕, ⊗) satisfy
both axiom X8 and axiom X10; and then (M_nn, ⊕, ⊗) satisfies axiom X10 also.
Conversely, suppose X = [x_ij] ∈ M_nn is a left and right identity element in
M_nn. Let U ∈ M_nn have all its elements equal to a given arbitrary u ∈ E₁. Then:
   u = {X ⊗ U}_ij = ( Σ_{r=1}^{n}⊕ x_ir ) ⊗ u     (i,j=1,...,n)
E₁ has a (necessarily unique) two-sided identity element φ (say) and X8 is verified.
Now let Y ∈ M_nn have diagonal elements all equal to φ and off-diagonal
elements all equal to a given arbitrary u ∈ E₁. For i ≠ j we have:
   u = {X ⊗ Y}_ij = ( Σ_{r=1, r≠j}^{n}⊕ x_ir ) ⊗ u
   Σ_{r=1}^{n}⊕ x_ir = x_ii ⊕ Σ_{r=1, r≠i}^{n}⊕ x_ir = x_ii ⊕ φ = x_ii
But we had already shown that Σ_{r=1}^{n}⊕ x_ir = φ. Hence x_ii = φ for all i=1,...,n.
Now let W ∈ M_nn have, for given i and j, the element in row i, column j
equal to a given arbitrary u ∈ E₁ and all other elements equal to a given arbitrary
v. Then:
   u = {X ⊗ W}_ij = ( Σ_{r≠j}⊕ ( x_ir ⊗ v ) ) ⊕ (φ ⊗ u) = ( Σ_{r≠j}⊕ ( x_ir ⊗ v ) ) ⊕ u
E₁ = {φ}.
If E₁ ≠ {φ} and n > 1 then M_nn cannot satisfy X8. For by Theorem 5-8
this would imply that E₁ satisfied axiom X10, contradicting Corollary 4-9.  •
The notation I_n for the (n × n) "unit matrix" or "identity matrix" used in
the above proof will be standard in subsequent contexts. (If n=1, define I_n = [φ].)
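In the principal interpretation the unit matrix I_n has φ = 0 on the diagonal and −∞ (the null element) elsewhere; a quick Python check (illustrative, assuming max-plus arithmetic, with our own function names):

```python
INF = float('inf')

def identity(n):
    """I_n in the principal interpretation: φ = 0 on the diagonal, -inf elsewhere."""
    return [[0.0 if i == j else -INF for j in range(n)] for i in range(n)]

def mat_mul(A, B):
    """Max-plus matrix product, as in (5-4)."""
    return [[max(A[i][k] + B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1.0, 4.0], [-2.0, 0.5]]
assert mat_mul(A, identity(2)) == A   # right identity
assert mat_mul(identity(2), A) == A   # left identity
```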
We can now discuss a question which arises naturally in connection with the
concepts introduced earlier in the chapter. In conventional linear algebra, we
can characterise linear transformations of vector spaces entirely in terms of
matrices: is the same true in the present theory? In other words, is M_mn
isomorphic to Hom_{E₁}(E_n, E_m) for all integers m, n ≥ 1? The following results
give necessary and sufficient conditions for this to be the case.
Theorem 5-10. Let E₁ be a belt which satisfies axioms X8 and X10. Then for all
Proof. First let p = 1, and let e(j) ∈ M_n1 (j=1,...,n) be the j-th column of
I_n, the (n × n) identity matrix. Arguing exactly as in elementary linear algebra,
we can show that the action of any g ∈ Hom_{E₁}(M_n1, M_m1) on M_n1 coincides
with the action of the left-multiplication g_A, where A is the matrix given by:
Moreover, if A, B ∈ M_mn are different then their action on some e(j) will
be different, i.e. g_A ≠ g_B, showing that T is injective. Hence M_mn is
If p > 1 and A, B ∈ M_mn are different, let C ∈ M_np have as its first
column some e(j) for which A ⊗ e(j) ≠ B ⊗ e(j). Then A ⊗ C ≠ B ⊗ C, i.e.  •
Corollary 5-11. Let E₁ be a belt, and let n > 1 be a given integer. Then a
necessary and sufficient condition that for
Proof. The identity mapping i_{E_n} belongs to, and acts as multiplicative identity
element in, the belt Hom_{E₁}(E_n, E_n). So for the isomorphism to hold when m = n
in particular, it is necessary that the belt M_nn also have a multiplicative
identity element, which implies by Theorem 5-8 that E₁ should satisfy axioms
In the light of Corollaries 5-9 and 5-11, we see that the isomorphism of M_mn
and Hom_{E₁}(E_n, E_m) holds when E₁ is a blog but fails in general when E₁ is a
division belt. In the latter case T is not surjective, since no matrix can supply
the identity mapping. However, we shall see later, when we consider residuomorphisms
in Chapter 10, that T is faithful for a class of belts which includes the division
belts.
Let (V, ⊕, ⊗, ⊕', ⊗') be a belt with duality. We shall say that a space
(T, ⊕) over V has a duality if:
Further definitions may be made in the obvious way of left band-space with duality,
two-sided space with duality and so on, and dual forms obtained of the results of
the present chapter.
If V is a belt with duality then the duality can be extended to any function
space over V by dualising the definitions of ⊕ and scalar multiplication in the
obvious way.
Another topic in the classical spirit which we shall consider later concerns the
characterisation of those matrix transformations which hold certain spaces fixed. We
lay the groundwork for this now. Accordingly, let (S, ⊕) and (T, ⊕) be given
spaces over a given belt (V, ⊕, ⊗). Let S̄ ⊆ S and T̄ ⊆ T be given. Then we
define:
Proposition 5-12. Let (S, ⊕), (T, ⊕) be given spaces over a given belt (V, ⊕, ⊗),
let (T̄, ⊕) be a commutative sub-band of (T, ⊕), and let S̄ be a subset of S.
Then (Stab_V(S̄, T̄), ⊕) is a commutative sub-band of (Hom_V(S, T), ⊕) when ⊕ is
defined as in (2-2). If moreover S = T and S̄ = T̄, then (Stab_V(T̄, T̄), ⊕, ⊗) is
a sub-belt of (Hom_V(T, T), ⊕, ⊗) when ⊗ denotes composition; and (T̄, ⊕) is a
left band-space over this belt when for each g ∈ Stab_V(T̄, T̄) and each x ∈ T̄ we
define g ⊗ x to be g(x).  •
Proposition 5-13. Let (E₁, ⊕, ⊗) be a belt with a sub-belt (F₁, ⊕, ⊗). For any
integers m, n, p ≥ 1, consider the (m × n) matrices A such that
{A}_ij ∈ F₁ (i=1,...,m; j=1,...,n), i.e. the matrices A such that the
left-multiplication defined by g_A: X ↦ A ⊗ X satisfies:
(…, ⊕) is isomorphic to
where F_n, F_m are respectively the spaces of n-tuples and m-tuples over F₁, …
6. DUALITY FOR MATRICES
6. DUALITY FOR MATRICES
If (E₁, ⊕, ⊗, ⊕', ⊗') is a belt with duality then the duality may be
extended to the algebra of matrices over E₁ by using the dual operations ⊕', ⊗'
instead of ⊕, ⊗ in (5-3) and (5-4). Thus:
   ∀ [a_ij], [b_ij] ∈ M_mn:  [a_ij] ⊕' [b_ij] = [ a_ij ⊕' b_ij ]     (6-1)
   ∀ [a_ij] ∈ M_mp, [b_ij] ∈ M_pn:  [a_ij] ⊗' [b_ij] = [ Σ_{k=1}^{p}⊕' ( a_ik ⊗' b_kj ) ]     (6-2)
The expressions conformable for ⊕' and conformable for A ⊗' B will be used in the
obvious way.
Evidently, the dual of Theorem 5-6 holds, which together with Theorem 5-6
itself yields the following:
Proposition 6-1. Let (E₁, ⊕, ⊗, ⊕', ⊗') be a belt with duality. Then
(M_mn, ⊕, ⊗, ⊕', ⊗') is a function space with duality over E₁, and also a space with
duality over the belt (M_nn, ⊕, ⊗, ⊕', ⊗') for any integers m, n ≥ 1.  •
The duals of the various statements in Proposition 5-7 evidently hold, and
with the obvious appropriate definitions of left dual M_mm-multiplication, right
dual M_nn-multiplication and dual M_mn-translation, we can develop the principle
of isotonicity for matrix operations with duality, embodied in the following result.
Proposition 6-2. Let (E₁, ⊕, ⊗, ⊕', ⊗') be a belt with duality and let K consist
of the following functions (as applied to M_mn): ⊕ and ⊕', together with all
left and right scalar multiplications and dual scalar multiplications, all left M_mm-
multiplications and left dual M_mm-multiplications, all right M_nn-multiplications
and right dual M_nn-multiplications, and all M_mn-translations and dual M_mn-
translations. Then K̄, the composition algebra generated by K, consists entirely
of isotone functions. Moreover, if m = n, we may adjoin the functions ⊗ and ⊗'
to K and the foregoing statement remains true.  •
The following theorem lists some basic "mixed inequalities" which will be
useful in matrix manipulative work. We use the notation {X}_ij, with i, j
undefined, to denote any typical element of a given matrix X, and the notation
'†' means "when the matrices are conformable for the relevant operations".
Theorem 6-3. Let E₁ be a belt with duality. Then the following inequalities
hold for the matrices over E₁:
Proof. Relations (i) to (vi) are direct consequences of Lemma 2-7 and the fact that
multiplications, translations and their duals act as homomorphisms.
To prove inequalities X12, assume that the inequalities X12 hold in M_11
(i.e. in E₁) and consider:
   {X ⊗ (Y ⊗' Z)}_ij = Σ_r⊕ ( {X}_ir ⊗ {Y ⊗' Z}_rj )
                     = Σ_r⊕ ( {X}_ir ⊗ Σ_s⊕' ( {Y}_rs ⊗' {Z}_sj ) )
We now apply the principles of opening and closing developed in Chapter 3. Opening
w.r.t. the index s:
   ∀ s:  {X ⊗ (Y ⊗' Z)}_ij ≤ Σ_r⊕ ( {X}_ir ⊗ ( {Y}_rs ⊗' {Z}_sj ) )     (6-3)
By X12 in E₁:
   {X}_ir ⊗ ( {Y}_rs ⊗' {Z}_sj ) ≤ ( {X}_ir ⊗ {Y}_rs ) ⊗' {Z}_sj     (6-4)
and:
   ( Σ_r⊕ ( {X}_ir ⊗ {Y}_rs ) ) ⊗' {Z}_sj ≥ Σ_r⊕ ( ( {X}_ir ⊗ {Y}_rs ) ⊗' {Z}_sj )     (6-5)
But the expression on the left-hand side of (6-5) is just {X ⊗ Y}_is ⊗' {Z}_sj.
Hence, closing w.r.t. s:
   {X ⊗ (Y ⊗' Z)}_ij ≤ Σ_s⊕' ( {X ⊗ Y}_is ⊗' {Z}_sj ) = {(X ⊗ Y) ⊗' Z}_ij
This implies the first half of X12, and the second half is proved similarly. Now
axiom X12 holds for E₁ if E₁ is a blog (by Proposition 4-2) or if E₁ is a division
belt (which has a self-dual associative multiplication operation). Hence in both
these cases axiom X12 holds for E₁ and so for M_11, which is isomorphic to E₁.  •
In manipulative work, it is useful to have a mnemonic to save looking up each
relation in Theorem 6-3 separately. Here is one:
Let α, β be two algebraic expressions such that α = β would hold as an identity
valid for any belt if all accents were dropped, i.e. if in both α and β all
occurrences of ⊕', ⊗' were replaced by ⊕, ⊗ respectively. If the occurrences of
⊕' (resp. ⊗') lie deeper in α than in β, then α ≤ β identically.
Again:
   X ⊗ (Y ⊕ Z) = (X ⊗ Y) ⊕ (X ⊗ Z)
holds identically in any belt. In the expression X ⊗' (Y ⊕ Z), the occurrence
of ⊗' is at "depth zero", whereas in
   (X ⊗' Y) ⊕ (X ⊗' Z)
both occurrences of ⊗' are at "depth one". Hence inequality (v) holds in
Theorem 6-3.
We put the above rule forward purely as a mnemonic. Obviously, in more formal
dress, it could be promoted to a metamathematical theorem, but there seems little
motivation for this in the present context. As it stands, the mnemonic is intended
for application to Theorem 6-3 only and cannot, without further refinement, be
applied generally.
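For the principal interpretation, the first of the inequalities X12 in Theorem 6-3 can be spot-checked numerically: ⊗ is then the max-plus matrix product and ⊗' the min-plus product. The following Python sketch is an illustration only, using random finite matrices and our own function names:

```python
import random

def mat_mul(A, B, dual=False):
    """A ⊗ B = max-plus product (5-4); with dual=True, A ⊗' B = min-plus (6-2)."""
    opt = min if dual else max
    p = len(B)
    return [[opt(A[i][k] + B[k][j] for k in range(p))
             for j in range(len(B[0]))] for i in range(len(A))]

random.seed(3)
rnd = lambda r, c: [[random.uniform(-5, 5) for _ in range(c)] for _ in range(r)]
X, Y, Z = rnd(3, 4), rnd(4, 4), rnd(4, 2)

# X12 at matrix level: X ⊗ (Y ⊗' Z) ≤ (X ⊗ Y) ⊗' Z, elementwise.
L = mat_mul(X, mat_mul(Y, Z, dual=True))
R = mat_mul(mat_mul(X, Y), Z, dual=True)
for i in range(3):
    for j in range(2):
        assert L[i][j] <= R[i][j] + 1e-9
```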
7. CONJUGACY
(linear, scalar-valued) functions defined on S. The other, which one might call
the axiomatic conjugacy approach, defines a certain involution x ↦ x* on the elements
of S, subject to certain axioms. It is characteristic of the classical theory of
linear operators that these two definitions give rise to the same structures in
certain important cases.
A similar, if rather more complicated, situation arises in minimax algebra.
In the present chapter we make our first excursion into the theory of conjugacy for
minimax algebra, beginning with a discussion of axiomatic conjugacy for belts and
matrices over a belt.
Accordingly, let (V, ⊕) and (W, ⊕') be given commutative bands.
We shall say that (W, ⊕') is conjugate to (V, ⊕) if there is a function
g: V → W which satisfies the following axioms:
   N1: g is bijective
   N2: ∀ x, y ∈ V,  g(x ⊕ y) = g(x) ⊕' g(y)
In particular, if (V, ⊕, ⊕') is a commutative band with duality, we shall say that
(V, ⊕, ⊕') is self-conjugate if (V, ⊕') is conjugate to (V, ⊕).
In the language of lattice theory, conjugate commutative bands V and W are just
semilattices and g is a semilattice anti-isomorphism. Evidently, V and W are, as
partially ordered sets, isomorphic to each other's dual, and the inverse bijection
g⁻¹ is also an anti-isomorphism. Hence conjugacy is a symmetric relation, and if U
and W are both conjugate to V, then U and W are isomorphic.
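For the principal interpretation, x ↦ −x is such a semilattice anti-isomorphism from (ℝ, max) onto (ℝ, min); a minimal Python check of axiom N2 (an illustration, not part of the text's notation):

```python
# Self-conjugacy sketch for the principal interpretation: g(x) = -x is a
# bijection carrying ⊕ = max to ⊕' = min (axioms N1, N2).
g = lambda x: -x

xs = [3.0, -1.0, 7.5]
# N2: g(x ⊕ y) = g(x) ⊕' g(y)
for x in xs:
    for y in xs:
        assert g(max(x, y)) == min(g(x), g(y))
```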
We are, of course, less interested in this completely trivial situation than in
that in which some algebraic structure is present. So suppose now that the systems
V and W have the further binary combining operations ⊗ and ⊗' respectively, such
that (V, ⊕, ⊗) and (W, ⊕', ⊗') are belts. We shall say that (W, ⊕', ⊗') is conjugate
to (V, ⊕, ⊗) if axioms N1, N2 and N3 hold, where axiom N3 is:
In particular, if (V, ⊕, ⊗, ⊕', ⊗') is a belt with duality, we shall say that
(V, ⊕, ⊗, ⊕', ⊗') is self-conjugate if (V, ⊕', ⊗') is conjugate to (V, ⊕, ⊗).
Evidently the bijection g satisfies N3 if and only if its inverse g⁻¹ satisfies
N3 (with g⁻¹ in the role of g). And since it is entirely a matter of convention
which of the additions ⊕ and ⊕' we call "dual" addition, it is evident that conjugacy
remains a symmetric relation when its meaning is extended to belts.
Furthermore, if (U, ⊕', ⊗') and (W, ⊕', ⊗') are both conjugate to (V, ⊕, ⊗)
there exist bijections g: V → W and h: V → U such that
   g(x ⊗ y) = g(y) ⊗' g(x)  and  h(x ⊗ y) = h(y) ⊗' h(x)
(7-2)
   x* = x⁻¹ if x is finite.  •
7-2. Conjugacy for Matrices
Given systems E₁, E₁* which are conjugate, we may extend the conjugacy to
matrices over E₁ and E₁* as follows. If A is an (m × n) matrix over E₁, then A* is
by definition the (n × m) matrix over E₁* such that:
   {A*}_ij = ( {A}_ji )*     (1 ≤ i ≤ n; 1 ≤ j ≤ m)     (7-3)
In other words, A* is obtained by "transposing and conjugating".
Then if V is any collection of matrices over E₁, we define V* to be:
   V* = { A* | A ∈ V }     (7-4)
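In the principal interpretation, conjugation of a matrix is negation combined with transposition, and the reversal property g(A ⊗ B) = g(B) ⊗' g(A) established in the proof that follows can be verified numerically. A Python sketch (illustrative; function names are ours):

```python
def conj_mat(A):
    """A*: transpose and conjugate; here scalar conjugation is negation (7-3)."""
    return [[-A[j][i] for j in range(len(A))] for i in range(len(A[0]))]

def mat_mul(A, B, dual=False):
    """Max-plus product; with dual=True, the min-plus (dual) product."""
    opt = min if dual else max
    return [[opt(A[i][k] + B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[0.0, 2.0], [1.0, 3.0]]
B = [[-1.0, 0.0], [2.0, 1.0]]

# N3 at matrix level: (A ⊗ B)* = B* ⊗' A*.
assert conj_mat(mat_mul(A, B)) == mat_mul(conj_mat(B), conj_mat(A), dual=True)
```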
E₁, E₁* respectively such that (E₁, ⊕, ⊗) and (E₁*, ⊕', ⊗') are conjugate, then
multiplications are defined for M_nn, M*_nn respectively, for each integer n ≥ 1,
such that (M_nn, ⊕, ⊗) and (M*_nn, ⊕', ⊗') are conjugate. In all cases the conjugate
of a given matrix A is the matrix A* defined in (7-3).
Proof. Let V be the set of all matrices over E₁, and let g: V → V* be defined by
g: A ↦ A* according to (7-3). It is immediately clear that the restriction of g to
M_mn (for given m, n ≥ 1) is a bijection. Moreover, for all i=1,...,m; j=1,...,n;
A, B ∈ M_mn:
   {g(A ⊕ B)}_ij = ( {A}_ji ⊕ {B}_ji )*     (by (5-3))
   = ( {A}_ji )* ⊕' ( {B}_ji )* = {A* ⊕' B*}_ij     (by dual of (5-3))
Hence g(A ⊕ B) = g(A) ⊕' g(B), so N2 is satisfied for ((M_mn, ⊕), (M*_nm, ⊕')).
Now if multiplications ⊗, ⊗' are defined for E₁, E₁* respectively, then they are
also defined for M_nn, M*_nn respectively, for each integer n ≥ 1, via (5-4) and its
dual. So if N3 holds for E₁, E₁*, we have for all i=1, …, n; j=1, …, n; A, B ∈ M_nn:

{g(A ⊗ B)}_ij = ({A ⊗ B}_ji)*  (by (7-3))
 = (Σ⊕_{k=1}^n ({A}_jk ⊗ {B}_ki))*  (by (5-4))
 = Σ⊕'_{k=1}^n ({A}_jk ⊗ {B}_ki)*  (by induction on N2)
 = Σ⊕'_{k=1}^n (({B}_ki)* ⊗' ({A}_jk)*)  (by N3)
 = Σ⊕'_{k=1}^n ({B*}_ik ⊗' {A*}_kj)  (by (7-3))
 = {B* ⊗' A*}_ij  (by dual of (5-4))

Hence g(A ⊗ B) = g(B) ⊗' g(A). So N3 is satisfied for (M_nn, ⊕, ⊗), (M*_nn, ⊕', ⊗'). ∎
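The two identities just proved, (A ⊕ B)* = A* ⊕' B* and (A ⊗ B)* = B* ⊗' A*, can be checked numerically in the principal interpretation (⊕ = max, ⊕' = min, ⊗ = max-plus product, ⊗' = min-plus product). All function names below are our own.

```python
import random

def oplus(A, B):                 # A ⊕ B : entrywise max
    return [[max(a, b) for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def oplus_dual(A, B):            # A ⊕' B : entrywise min
    return [[min(a, b) for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def otimes(A, B):                # A ⊗ B : (A ⊗ B)_ij = max_k (A_ik + B_kj)
    return [[max(A[i][k] + B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def otimes_dual(A, B):           # A ⊗' B : (A ⊗' B)_ij = min_k (A_ik + B_kj)
    return [[min(A[i][k] + B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def conj(A):                     # A* : transpose and negate
    return [[-A[j][i] for j in range(len(A))] for i in range(len(A[0]))]

random.seed(0)
n = 4
A = [[random.randint(-9, 9) for _ in range(n)] for _ in range(n)]
B = [[random.randint(-9, 9) for _ in range(n)] for _ in range(n)]

# N2: (A ⊕ B)* = A* ⊕' B*
assert conj(oplus(A, B)) == oplus_dual(conj(A), conj(B))
# N3: (A ⊗ B)* = B* ⊗' A*
assert conj(otimes(A, B)) == otimes_dual(conj(B), conj(A))
```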
(where the notation A^{2*} denotes A* ⊗' A*, and in general A^{r*} = A^{(r-1)*} ⊗' A*).
The sequence (7-10) runs backward in time. The conjugacy A ↦ A* thus has an
interesting physical interpretation in this situation, and is related to the concept
of the inverse problem in the theory of machine-scheduling [43].
We shall consider the relationship between (7-9) and (7-10) further later.
8. A-A* RELATIONS
8-1. Pre-residuation
Let E₁ be a self-conjugate belt. We shall say that E₁ is pre-residuated if it
satisfies the following axiom X₁₃: for each finite subset J ⊆ E₁:

∀ x ∈ E₁,  [Σ⊕'_{a∈J} (a ⊗' a*)] ⊗ x ≥ x
∀ x ∈ E₁,  x ⊗ [Σ⊕'_{a∈J} (a ⊗' a*)] ≥ x      (8-1)

The reason for choosing the term pre-residuated will emerge in Chapter 9.

Proposition 8-1. Let E₁ be a pre-residuated belt and J ⊆ E₁ a finite subset. Then:

∀ x ∈ E₁,  [Σ⊕_{a∈J} (a ⊗ a*)] ⊗' x ≤ x    (8-2)

Proof: Applying Proposition 7-2 to the second half of axiom X₁₃, we have:

(x ⊗ [Σ⊕'_{a∈J} (a ⊗' a*)])* ≤ x*

i.e.  [Σ⊕_{a∈J} (a ⊗ a*)] ⊗' x* ≤ x*

Since conjugation is a bijection of E₁, (8-2) follows. ∎
Theorem 8-2. All division belts, and all blogs, are pre-residuated. In particular,
the principal interpretation, and the 3-element blog, are pre-residuated.

Proof. Proposition 7-3 has already established that all division belts, and all blogs,
are self-conjugate. So let J be a finite subset of a division belt or blog E₁, and
let a ∈ J, x ∈ E₁ be arbitrary.
If E₁ is a division belt, then a* ⊗' a = φ by (7-1).
If E₁ is a blog, then a* ⊗' a = φ or +∞, by (7-2) and Lemma 4-13.
Hence in all cases Σ⊕'_{a∈J} (a* ⊗' a) ≥ φ, and the first half of axiom X₁₃ follows
by (2-9); the second half follows similarly. ∎
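For finite scalars in the principal interpretation the proof above is transparent: a ⊗' a* = a + (-a) = 0 = φ, so the bracketed term of (8-1) is φ and both halves of axiom X₁₃ hold with equality. A minimal sketch (function name ours):

```python
# Axiom X13 for finite scalars under max-plus: the term
# Σ⊕'_{a∈J} (a ⊗' a*) = min over a in J of (a + (-a)) = 0 = φ,
# so x ⊗ [ ... ] = x + 0 = x, i.e. x ⊗ [ ... ] ≥ x with equality.

def x13_term(J):
    """Compute Σ⊕' over a in J of (a ⊗' a*)."""
    return min(a + (-a) for a in J)

J = [3.5, -2.0, 7.25]
for x in [-4.0, 0.0, 9.5]:
    assert x + x13_term(J) >= x      # first half of X13, here with equality
```

The infinite elements of a blog need the separate case analysis of the proof (a* ⊗' a = +∞ there) and are not modelled by this sketch, since IEEE floats give inf + (-inf) = nan.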
Lemma 8-3. Let E₁ be a pre-residuated belt and J a given finite set of matrices
over E₁. Let X be an arbitrary matrix over E₁. Then, whenever the products are
defined:

X ⊗ [Σ⊕'_{A∈J} (A ⊗' A*)] ≥ X ;  [Σ⊕'_{A∈J} (A ⊗' A*)] ⊗ X ≥ X

Proof. Entry by entry, these follow from axiom X₁₃ together with (5-3), (5-4) and
their duals. This proves the first inequality, and the remaining cases are proved
similarly. ∎

Corollary 8-4. If E₁ is a pre-residuated belt, then so is the matrix belt M_nn,
for each integer n ≥ 1. ∎

Theorem 8-5. Let E₁ be a pre-residuated belt and A an arbitrary matrix over E₁.
Then the products

A* ⊗ (A ⊗' A*);  A* ⊗' (A ⊗ A*);  (A* ⊗ A) ⊗' A*;  (A* ⊗' A) ⊗ A*

always exist, and are all equal to A*; dually, the corresponding products with the
roles of A and A* interchanged are all equal to A.

Proof. By Lemma 8-3 (with J = {A*}), A ⊗ (A* ⊗' A) ≥ A, and a dual argument gives
the reverse inequality. Hence A ⊗ (A* ⊗' A) = A, and the remaining results follow
similarly and dually. ∎
Theorem 8-6. Let E₁ be a pre-residuated belt satisfying axiom X₁₂, and A an
arbitrary matrix over E₁. Then:

(A ⊗ A*) ⊗' (A ⊗ A*) = A ⊗ A*
(A* ⊗ A) ⊗' (A* ⊗ A) = A* ⊗ A
(A ⊗' A*) ⊗ (A ⊗' A*) = A ⊗' A*
(A* ⊗' A) ⊗ (A* ⊗' A) = A* ⊗' A
Proof. It is evident that A, A* are always conformable for the given multiplications.
Now:

(A ⊗ A*) ⊗' (A ⊗ A*) ≥ A ⊗ A*  (by Lemma 8-3)

On the other hand:

(A ⊗ A*) ⊗' (A ⊗ A*) ≤ ((A ⊗ A*) ⊗' A) ⊗ A*  (by axiom X₁₂ in Theorem 6-3)
 = A ⊗ A*  (by Theorem 8-5)

Hence (A ⊗ A*) ⊗' (A ⊗ A*) = A ⊗ A*, and the remaining results follow similarly and
dually. ∎
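Theorems 8-5 and 8-6 can be checked numerically in the principal interpretation, where ⊗ is the max-plus product, ⊗' the min-plus product, and * the negated transpose. The helper names are ours; the assertions below hold exactly for finite matrices.

```python
import random

def otimes(A, B):        # A ⊗ B : max-plus product
    return [[max(A[i][k] + B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def otimes_dual(A, B):   # A ⊗' B : min-plus product
    return [[min(A[i][k] + B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def conj(A):             # A* : negated transpose
    return [[-A[j][i] for j in range(len(A))] for i in range(len(A[0]))]

random.seed(1)
A = [[random.randint(-5, 5) for _ in range(3)] for _ in range(4)]   # 4x3
As = conj(A)                                                        # 3x4

# Theorem 8-5 (triple products): A ⊗ (A* ⊗' A) = A and A* ⊗' (A ⊗ A*) = A*
assert otimes(A, otimes_dual(As, A)) == A
assert otimes_dual(As, otimes(A, As)) == As

# Theorem 8-6: P = A ⊗ A* satisfies P ⊗' P = P
P = otimes(A, As)
assert otimes_dual(P, P) == P
```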
Consider a formal "word" consisting of one or more letters, alternately A and A*,
beginning with either:

A;  A*;  A A*;  A* A;  A A* A;  A* A A*;  …

If the word consists of k > 1 letters, let us insert (k-1) alternating occurrences
of the symbols ⊗ and ⊗', starting with either, the case of one single
insertion being (necessarily) allowed. For example:

A ⊗' A*;  A ⊗ A* ⊗' A;  A ⊗ A* ⊗' A ⊗ A* ⊗' A ⊗ A*

Let us finally insert brackets in any arbitrary way so as to make a well-formed
algebraic expression. For example:

A ⊗' (A* ⊗ (A ⊗' A*));  ((A ⊗ (A* ⊗' A)) ⊗ (A* ⊗' A)) ⊗ A*
Any algebraic expression which can be constructed in this way will be called
an alternating A-A* product. Let us classify these expressions as follows. An
expression with an odd number of letters is of type A or A* according to its first
letter; an expression with an even number of letters is of type

A ⊗ A*  or  A ⊗' A*  or  A* ⊗ A  or  A* ⊗' A

exactly according to the first two letters with separating operator, regardless of
how the brackets lie in the total expression.
For example:

A* ⊗ (A ⊗' A*) is of type A*;
A* is of type A*;
((A ⊗ (A* ⊗' A)) ⊗ (A* ⊗' A)) ⊗ A* is of type A ⊗ A*.
Theorem 8-7. Let E₁ be a pre-residuated belt satisfying axiom X₁₂, and A an
arbitrary matrix over E₁. Then every alternating A-A* product f is well-defined,
and if f is of type g, then f = g.
Proof. The result is immediate for products of one letter. So let f have k > 1
letters, say f = f₁ θ f₂ where θ is ⊗ or ⊗' and f₁, f₂ are alternating products with
less than k letters. A little deliberation shows that the sixteen cases exhibited
in Fig 8-1 may arise.
For example, f₁ must be one of the six types defined above. Suppose f₁ is of
type A. Then f₁ ends with an A, so f₂ must begin with an A*. Thus f₂ is of type A*,
type A* ⊗ A or type A* ⊗' A. Suppose (to take the third case from the table for
illustration) that f₂ is of type A* ⊗ A. Then the first operator in f₂ is ⊗, so the
operator between f₁ and f₂ must be ⊗'. The number of letters in f = number in f₁ +
number in f₂ = an odd number, so f is of type A, since f₁, and thus f itself, begins
with an A.
Now by an obvious induction hypothesis we can assume (for this same case) that
f₁ and f₂ are well-defined matrix products and that f₁ = A and f₂ = A* ⊗ A. Hence
by Theorem 8-5, f = f₁ ⊗' f₂ is a well-defined matrix product and f = A. All the
other cases are treated analogously, using Theorem 8-5 or 8-6 as appropriate to
complete the induction. ∎
Fig 8-1  Types of A-A* Product
Theorem 8-8. Let E₁ be a pre-residuated belt satisfying axiom X₁₂. The following
relations hold whenever A and X are conformable for the products concerned:

A* ⊗' (A ⊗ X) ≥ X ;  A ⊗ (A* ⊗' X) ≤ X
(X ⊗ A) ⊗' A* ≥ X ;  (X ⊗' A*) ⊗ A ≤ X

Proof. Let A ∈ M_mn and X ∈ M_np. The first relation follows entrywise from
Lemma 8-3, and the remaining relations follow similarly and dually. ∎
The theorem sets the pattern for products involving an odd number of
letters, and Theorem 8-9 (below) sets the pattern for products involving an even
number. An evident inductive argument provides an analogue of Theorem 8-7, but
we shall not carry this out because it involves yet another tedious examination of
cases, and does not contribute much to the general theory. The following theorem
on quadruple products, however, does have practical application later.
For a given alternating A-A* product P with k > 1 letters, let us define P(X)
to be the formal product obtained when the last (rightmost) letter (A or A*) is
replaced by an X, and P[X] to be the formal product obtained when the first (leftmost)
letter (A or A*) is replaced by an X.
Theorem 8-9. Let E₁ be a pre-residuated belt satisfying axiom X₁₂, and A, X be
arbitrary matrices over E₁, conformable for the products concerned. Let P be any
quadruple alternating A-A* product, for example:

P₁ = ((A ⊗ A*) ⊗' A) ⊗ A*   or   P₃ = A ⊗ (A* ⊗' (A ⊗ A*))

Then P(X) is well-defined and equals the double product formed from the first two
letters of P, applied to X. So P(X) is, for example:

P₁(X) = ((A ⊗ A*) ⊗' A) ⊗ X = A ⊗ X ;  P₃(X) = A ⊗ (A* ⊗' (A ⊗ X)) = A ⊗ X

using Theorems 8-5 and 8-8; and similarly for P[X].
We remark in conclusion that the results of Sections 8-2 and 8-3 apply in
particular when E₁ is a division belt or a blog, since by Theorem 8-2, division belts
and blogs are pre-residuated; and axiom X₁₂ holds for division belts (which have a
self-dual associative multiplication) and for blogs (by Proposition 4-2). In
particular, the results of Sections 8-2 and 8-3 apply if E₁ is given the principal
interpretation, or is the 3-element blog.
Moreover, since E₁ is formally identical to M₁₁, the results of Sections 8-2
and 8-3 apply to E₁ itself if we write scalars in place of matrices throughout.
Proposition 8-10. Let f: S → T and g: T → S be given functions such that:

f ∘ g ∘ f = f  and  g ∘ f ∘ g = g    (8-3)

Then f|Ran g and g|Ran f are mutually inverse bijections.
Our matrix algebra abounds with such pairs of functions. For example, let
A ∈ M_mn and define f: M_np → M_mp and g: M_mp → M_np by: f(X) = A ⊗ X and
g(Y) = A* ⊗' Y.
Then Theorem 8-9 implies that relations (8-3) are satisfied. Similarly, if we define
f and g by f(X) = X ⊗' A* and g(Y) = Y ⊗ A, relations (8-3) are again satisfied.
The interaction of left-right symmetry with duality enables us to deduce many
pairs of such functions from Theorem 8-9, and it is convenient to have a notation in
which to express this succinctly.
Consider the diagram of Fig 8-2. We shall say that such a diagram is valid if
the following are true:

Z1: S and T are given sets
Z2: f: S → T and g: T → S are given mappings
Fig 8-2  Valid Diagrams
Furthermore, if A ∈ M_mn and V_np ⊆ M_np is a set of matrices, we define

A ⊗ V_np = {A ⊗ X | X ∈ V_np}

with similar definitions for A ⊗' V_np, W_pm ⊗ A and W_pm ⊗' A.
Then the following result is an immediate deduction from Theorem 8-9 and Proposition
8-10. (The notation U_mn, etc., is used in Figs 8-3 and 8-4 in order that these
figures may be used again later in another context.)
Proposition 8-11. Let E₁ be a pre-residuated belt satisfying axiom X₁₂. Then the
diagrams of Figs 8-3 and 8-4 are valid for arbitrary A ∈ M_mn, with the notation
defined above.
As a numerical illustration in the principal interpretation, let A ∈ M₃₃ and
X ∈ M₃₂ be given, and define X̄ = A ⊗ X. Then X̄ ∈ Ran f, where f: M₃₂ → M₃₂ is
defined by: f(X) = A ⊗ X; and direct calculation confirms that A ⊗ (A* ⊗' X̄) = X̄.
A* ⊗' X̄ = A* ⊗' (A ⊗ X) ≥ X
In Section 7-3, we observed how the relationship between forward planning and
backward planning in a class of scheduling problems was reflected by the relation of
conjugacy A ↦ A*. With the notation of that section, suppose that the (n×n) matrix
A defines a process for which the vector of earliest permissible start-times for
cycle nought is x ∈ E_n. Then the vector of actual start-times for cycle nought will
be some x(0) ≥ x. So the vector of earliest permissible start-times for cycle one
will be A ⊗ x(0), and the vector of actual start-times for cycle one will be
x(1) ≥ A ⊗ x(0). By an obvious induction, the vector of actual start-times of cycle
r will be x(r) where:

x(r) ≥ A^r ⊗ x  (r = 0, 1, 2, …)    (9-1)

Suppose the process is due to run up to and including cycle N, and must terminate
at or before a given vector of finishing-times t. Evidently, this requires that
A^{N+1} ⊗ x ≤ t, and we shall call the pair x, t of vectors compatible for (A, N) if
this holds. A sequence {x(r)} (r = 0, …, N+1) will be called feasible for (x, t, N)
if we have:

x(0) ≥ x;  x(r) ≥ A ⊗ x(r-1)  (r = 1, …, N+1);  t ≥ x(N+1)    (9-2)
Lemma 9-1. Let E₁ be a pre-residuated belt which satisfies axiom X₁₂. If A ∈ M_mn,
x ∈ E_n and y ∈ E_m, we have y ≥ A ⊗ x if and only if x ≤ A* ⊗' y.

Proof. If y ≥ A ⊗ x then A* ⊗' y ≥ A* ⊗' (A ⊗ x) ≥ x (by isotonicity and
Theorem 8-8); the converse follows dually. ∎

Now suppose x, t are compatible for (A, N), i.e. t ≥ A^{N+1} ⊗ x = A ⊗ (A^N ⊗ x).
Hence A* ⊗' t ≥ A^N ⊗ x, and by iteration: A^{r*} ⊗' t ≥ A^{N+1-r} ⊗ x (r = 1, …, N),
and A^{(N+1)*} ⊗' t ≥ x.
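Lemma 9-1 can be illustrated in the principal interpretation: y ≥ A ⊗ x holds exactly when x ≤ A* ⊗' y. The sketch below checks the equivalence on random finite data (all helper names are ours).

```python
import random

def maxplus_vec(A, x):        # A ⊗ x : (A ⊗ x)_i = max_j (A_ij + x_j)
    return [max(a + b for a, b in zip(row, x)) for row in A]

def minplus_vec(A, y):        # A ⊗' y : (A ⊗' y)_i = min_j (A_ij + y_j)
    return [min(a + b for a, b in zip(row, y)) for row in A]

def conj(A):                  # A* : negated transpose
    return [[-A[j][i] for j in range(len(A))] for i in range(len(A[0]))]

def leq(u, v):                # componentwise order on vectors
    return all(a <= b for a, b in zip(u, v))

random.seed(2)
m, n = 3, 4
A = [[random.randint(-9, 9) for _ in range(n)] for _ in range(m)]
for _ in range(200):
    x = [random.randint(-9, 9) for _ in range(n)]
    y = [random.randint(-9, 9) for _ in range(m)]
    # y ≥ A ⊗ x  if and only if  x ≤ A* ⊗' y
    assert leq(maxplus_vec(A, x), y) == leq(x, minplus_vec(conj(A), y))
```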
Theorem 9-2. Let E₁ receive the principal interpretation. Let A ∈ M_nn and
x, t ∈ E_n for some integer n ≥ 1. Let N ≥ 0 be a given integer. Then the
following conditions are equivalent:

(i)   x, t are compatible for (A, N), i.e. t ≥ A^{N+1} ⊗ x;
(ii)  there exists a sequence feasible for (x, t, N);
(iii) x ≤ A^{(N+1)*} ⊗' t.

Proof. Conditions (i) and (iii) are equivalent by Lemma 9-1. Also (9-2) clearly
implies t ≥ x(N+1) ≥ A ⊗ x(N) ≥ … ≥ A^{N+1} ⊗ x(0) ≥ A^{N+1} ⊗ x, so condition (ii)
implies condition (i). Finally, if t ≥ A^{N+1} ⊗ x then the sequence {x(r)} defined
by: x(r) = A^r ⊗ x (r = 0, …, N+1) is a feasible sequence for (x, t, N), so condition
(i) implies condition (ii). ∎
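The earliest-start schedule used in the proof, x(r) = A^r ⊗ x, can be sketched as follows; the matrix A and the choice t = A^{N+1} ⊗ x are illustrative data of our own, chosen so that x, t are compatible.

```python
def maxplus_vec(A, x):        # A ⊗ x in the principal interpretation
    return [max(a + b for a, b in zip(row, x)) for row in A]

def leq(u, v):
    return all(a <= b for a, b in zip(u, v))

A = [[0, 3], [2, 1]]          # cycle-to-cycle precedence durations (our data)
x = [0, 0]                    # earliest permissible starts, cycle nought
N = 3

# Earliest-start schedule: x(r) = A^r ⊗ x for r = 0, ..., N+1
schedule = [x]
for _ in range(N + 1):
    schedule.append(maxplus_vec(A, schedule[-1]))

t = schedule[-1]              # taking t = A^{N+1} ⊗ x makes x, t compatible

# The schedule satisfies the feasibility conditions (9-2):
assert all(leq(maxplus_vec(A, schedule[r]), schedule[r + 1])
           for r in range(N + 1))
assert leq(maxplus_vec(A, schedule[N]), t)
```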
Theorem 9-3. With the notation of Theorem 9-2, any sequence {x(r)} which is feasible
for (x, t, N) satisfies:

A^r ⊗ x ≤ x(r) ≤ A^{(N-r+1)*} ⊗' t  (r = 0, …, N+1)    (9-3)

Proof. Iterating (9-2) gives A^r ⊗ x ≤ x(r) and:

t ≥ A^{N-r+1} ⊗ x(r)    (9-4)

We may now use (ii) and (iii) of (9-2), in (9-4) and (9-3) respectively, to give
(by isotonicity and Lemma 9-1): x(r) ≤ A^{(N-r+1)*} ⊗' t. ∎

Now suppose a partial sequence {x(r)} (r = 0, …, s-1) is given, satisfying:

x(0) ≥ x;  x(r) ≥ A ⊗ x(r-1)  (r = 1, …, s-1)    (9-5)

together with a proposed vector z of start-times for cycle s. By an (x, t, N)-feasible
z-continuation we understand a sequence {x(r)} (r = 0, …, N+1) which is feasible for
(x, t, N), which coincides with the given sequence for r = 0, …, s-1, and which
satisfies x(s) = z.
Theorem 9-4. With the foregoing notation, a sequence {x(r)} (r = 0, …, s-1)
satisfying (9-5) has an (x, t, N)-feasible z-continuation if and only if:

A ⊗ x(s-1) ≤ z ≤ A^{(N-s+1)*} ⊗' t    (9-6)

Proof. The "only if" follows from Theorem 9-3 and (9-2). Conversely, if (9-6)
holds, then by Lemma 9-1, A^{(N-s+1)} ⊗ z ≤ t. It follows, using (9-5), that the
sequence:

x(0), …, x(s-1), z, A ⊗ z, …, A^{(N-s+1)} ⊗ z

is an (x, t, N)-feasible z-continuation. ∎
Suppose, then, that z satisfies (9-6). The earliest start-times for cycle s
are given to be z, but how much later might we begin without prejudice to the target
vector of completion times t? From Theorem 9-4, we see that the vector x(s) of
start-times for cycle s must satisfy:

z ≤ x(s) ≤ A^{(N-s+1)*} ⊗' t,    (9-7)

and that there will then be an (x, t, N)-feasible x(s)-continuation (if s < N+1).
Hence a componentwise comparison of z with A^{(N-s+1)*} ⊗' t tells us, for each
activity, what the further delays are which we can tolerate in initiating cycle s,
beyond the z which is given. The vector of componentwise differences between z and
A^{(N-s+1)*} ⊗' t is known in scheduling theory as the total float (vector).
However, we may not wish to delay x(s) until equal to A^{(N-s+1)*} ⊗' t, thereby
using up all the total float. If s < N, therefore, we may ask: what delays can we
tolerate in initiating cycle s without prejudice to subsequent cycles, i.e. without
reducing the total float which will be available at cycle (s+1)? Now, if we begin
cycle s as early as possible, namely at times z, the total float vector for cycle
(s+1) will be obtained by a comparison of A ⊗ z with A^{(N-s)*} ⊗' t. Hence, if we
actually begin at times x(s) instead of z, this total float vector will not be
diminished provided A ⊗ x(s) does not exceed A ⊗ z. But by Lemma 9-1, this holds
if and only if x(s) ≤ A* ⊗' (A ⊗ z).
Hence if we are not to reduce the total float available at cycle (s+1), we must
choose x(s) to satisfy

z ≤ x(s) ≤ A* ⊗' (A ⊗ z)    (9-8)

rather than (9-7). And (9-8) is indeed a tighter inequality than (9-7) since:

A* ⊗' (A ⊗ z) ≤ A* ⊗' (A ⊗ (A^{(N-s+1)*} ⊗' t))  (by isotonicity and (9-7))
 = A^{(N-s+1)*} ⊗' t  (by Theorem 8-9)

So, from (9-8), a comparison of z with A* ⊗' (A ⊗ z) tells us for each activity how
much further delay we can tolerate without changing the total float vector for
cycle s+1. The vector of componentwise differences between z and A* ⊗' (A ⊗ z) is
called the free float (vector).
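The total and free float vectors can be sketched directly in the principal interpretation. The helper names and the data (reusing the small matrix A from the earlier sketch) are our own; A^{r*} ⊗' t is computed by applying A* ⊗' repeatedly, as in Section 7.

```python
def maxplus_vec(A, x):
    return [max(a + b for a, b in zip(row, x)) for row in A]

def minplus_vec(A, y):
    return [min(a + b for a, b in zip(row, y)) for row in A]

def conj(A):
    return [[-A[j][i] for j in range(len(A))] for i in range(len(A[0]))]

def dual_power_apply(A_star, y, r):
    """A^{r*} ⊗' y, via r successive applications of A* ⊗'."""
    for _ in range(r):
        y = minplus_vec(A_star, y)
    return y

A = [[0, 3], [2, 1]]
A_star = conj(A)
N, s = 3, 2                    # initiating cycle s = 2 of N = 3
z = [5, 5]                     # given earliest starts for cycle s
t = [10, 10]                   # target completion times

latest = dual_power_apply(A_star, t, N - s + 1)   # A^{(N-s+1)*} ⊗' t
total_float = [l - zi for l, zi in zip(latest, z)]

harmless = minplus_vec(A_star, maxplus_vec(A, z)) # A* ⊗' (A ⊗ z)
free_float = [h - zi for h, zi in zip(harmless, z)]
```

With these data both floats are null: any delay beyond z at cycle 2 already prejudices the targets.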
By way of illustration, take the principal interpretation with n = 3, a given
(3×3) matrix A, and given vectors x and t, with N = 3; A² and A⁴ are found by
direct calculation. Suppose further that cycle nought has been completed, cycle one
initiated, and that delays and other practical problems dictate that cycle 2 cannot
now be initiated earlier than the start-times given by a vector z.
Direct calculation confirms that A² ⊗ x ≤ z ≤ A^{2*} ⊗' t, so by Theorem 9-4, the
completion times t are still achievable. The total float vector, the vector of
componentwise differences between A^{2*} ⊗' t and z, is (0, 1, 2)ᵀ.
Hence cycle 2 of activities 1, 2 and 3 may be delayed by 0, 1 and 2 units respectively,
beyond the given times z, without prejudice to the completion times t.
Furthermore, direct calculation of A ⊗ z and thence of A* ⊗' (A ⊗ z) shows that
the free float vector is null. Hence any delay in initiating cycle 2 for any
activity, beyond the given times z, will reduce the total float at the next cycle
for at least one activity.
10. RESIDUATION AND REPRESENTATION

Let S, T be partially ordered sets. A function f ∈ T^S is called residuated, with
residual f* ∈ S^T, if f and f* are isotone and:

(i)  f ∘ f* ≤ i_T
(ii) f* ∘ f ≥ i_S     (10-2)

where the order-relations are respectively that in T^T and that in S^S, and i_T ∈ T^T,
i_S ∈ S^S are the usual identity mappings, and ∘ denotes composition. From (10-2):

f ∘ (f* ∘ f) ≥ f ∘ i_S = f
(f ∘ f*) ∘ f ≤ i_T ∘ f = f

But evidently (f ∘ f*) ∘ f = f ∘ (f* ∘ f). Hence f ∘ f* ∘ f = f.
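The identity f ∘ f* ∘ f = f can be checked for the residuated pair f(x) = A ⊗ x, f*(y) = A* ⊗' y in the principal interpretation. This is a sketch with our own helper names; the identity holds exactly for finite matrices.

```python
import random

def maxplus_vec(A, x):
    return [max(a + b for a, b in zip(row, x)) for row in A]

def minplus_vec(A, y):
    return [min(a + b for a, b in zip(row, y)) for row in A]

def conj(A):
    return [[-A[j][i] for j in range(len(A))] for i in range(len(A[0]))]

random.seed(3)
m, n = 3, 5
A = [[random.randint(-9, 9) for _ in range(n)] for _ in range(m)]

f = lambda x: maxplus_vec(A, x)             # f : E_n -> E_m
f_star = lambda y: minplus_vec(conj(A), y)  # f* : E_m -> E_n

for _ in range(100):
    x = [random.randint(-9, 9) for _ in range(n)]
    assert f(f_star(f(x))) == f(x)          # f o f* o f = f
```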
From this result and its dual we deduce, using Proposition 8-10, the following
standard result of residuation theory.
Lemma 10-2. Let S be a given partially ordered set. Let K ⊆ S^S be a given set of
residuated functions, and K* the set of residuals. Then (K̂, ∘), the composition
semigroup generated by K, consists entirely of residuated functions, the set of
residuals being (K̂*, ∘), the composition semigroup generated by K*.

Proof. If f₁, f₂ ∈ K, then (f₁ ∘ f₂) ∘ (f₂* ∘ f₁*) = f₁ ∘ (f₂ ∘ f₂*) ∘ f₁* ≤
f₁ ∘ f₁* ≤ i_S, and dually (f₂* ∘ f₁*) ∘ (f₁ ∘ f₂) ≥ i_S.
Hence (f₁ ∘ f₂) is residuated, with residual f₂* ∘ f₁*, and by an immediate induction,
each composition product of a finite number of functions from K is residuated. But
K̂ consists exactly of those products; and the set of residuals is just the set of
finite composition products of elements of K*, i.e. K̂*. ∎
We make now some definitions through which the concepts of residuation theory
can interact with those of our minimax algebra.
An ordered semigroup is by definition a semigroup for which all left multipli-
cations and all right multiplications are isotone functions relative to a given
partial order on the elements of the semigroup. A residuated semigroup is an
ordered semigroup for which all left multiplications and all right multiplications
are residuated mappings.
We shall say that a belt is residuated if all left multiplications and all
right multiplications are residuated mappings relative to the partial order corres-
ponding to the addition operation. Evidently, if (S, ⊕, ⊗) is a residuated belt then
(S, ⊗) is a residuated semigroup.
By a residuated belt with duality, we understand a belt with duality in which all
left multiplications and right multiplications are residuated, and all left dual
multiplications and all right dual multiplications are dually residuated, relative
to the partial order corresponding to the addition operations.
Hence g_a is residuated and g_a* is dually residuated. Similar arguments apply to
right multiplications. ∎

Now let E₁ be a pre-residuated belt satisfying axiom X₁₂. Then Theorem 10-3 (with
M_nn in the role of E₁) implies that M_nn is a residuated belt with duality.
10-2. Residuomorphisms

A relation such as: A ⊗ (x ⊕ y) = (A ⊗ x) ⊕ (A ⊗ y), among certain elements A, x, y
of our algebraic structures, brings together two ideas. One is the idea of linearity,
suggesting analogies with the concepts of linear algebra: spaces, matrices, invariant
subspaces, etc. The other is the idea of isotonicity, implied by linearity as
discussed in Section 2-3, and leading to the algebra of order-relations.
We shall be interested in a class of functions in which these two ideas interact
in a very specific way. Accordingly, let (S, ⊕, ⊕'), (T, ⊕, ⊕') be commutative bands
with duality. By a residuomorphism from S to T we shall mean a function f ∈ T^S such
that there exists a function f* ∈ S^T whereby the following hold:

R2: f ∘ f* ≤ i_T  and  f* ∘ f ≥ i_S

R3: (i)  ∀ a, b ∈ S,  f(a ⊕ b) = f(a) ⊕ f(b)
    (ii) ∀ a, b ∈ T,  f*(a ⊕' b) = f*(a) ⊕' f*(b)      (10-4)

Evidently, R3(i) and (ii) imply the isotonicity of f and f* respectively, which with
R2 shows that f is residuated with residual f*. Hence f* is unique.
It is convenient to have a succinct notation for displaying classes of residuo-
morphisms. Accordingly, the notation:

M : S → T : N

will always mean: (S, ⊕, ⊕'), (T, ⊕, ⊕') are given commutative bands with duality,
M is a given set of residuomorphisms from S to T, and N is the set of residuals of
elements of M.
If S = T, we shall sometimes call the functions f ∈ M endo-residuomorphisms (of S).
The following result shows that a set of residuomorphisms can always be embedded
in a commutative band of residuomorphisms.
Theorem 10-4. Let M : S → T : N be given. Then M and N may be embedded in sets
V, V* respectively such that V : S → T : V*, where (V, ⊕) and (V*, ⊕') are
commutative bands and V, V* are conjugate.

Proof. Suppose g, h ∈ M. We may define g ⊕ h as in (2-2), and g* ⊕' h* dually.
We have for all t ∈ T:

((g ⊕ h) ∘ (g* ⊕' h*))(t) = g(g*(t) ⊕' h*(t)) ⊕ h(g*(t) ⊕' h*(t))  (by definition)
 ≤ g(g*(t)) ⊕ h(h*(t)) ≤ t  (by isotonicity and R2)

i.e. (g ⊕ h) ∘ (g* ⊕' h*) ≤ i_T, and the dual inequality follows similarly.
Finally, the bijection f ↦ f* satisfies N1 and N2 of Section 7-1, showing that
V and V* are conjugate. ∎
The following theorem considers the case S = T, i.e.

M : S → S : N

and shows that a set of endo-residuomorphisms can always be embedded in a belt of
such mappings.

Theorem 10-5. Let M : S → S : N be given. Then M and N may be embedded in sets
V, V* respectively such that:

(i)   (V, ⊕, ⊗) and (V*, ⊕', ⊗') are belts
(ii)  M ⊆ V and N ⊆ V*
(iii) V : S → S : V*
(iv)  V and V* are conjugate, the conjugate of f ∈ V being its residual f* ∈ V*
(v)   If (M, ⊕, ⊗) and (N, ⊕', ⊗') are belts, then (V, ⊕, ⊗) is (M, ⊕, ⊗) and
      (V*, ⊕', ⊗') is (N, ⊕', ⊗').
defined by T*: a ↦ g_a*.
Similarly, we may define the right dual regular representation of V. The left
and right, regular and dual regular, representations of V will be jointly known as
the regular representations of V. In the singular, the regular representation of V
will always mean the left regular representation of V.
Lemma 10-7. With the foregoing notation:

∀ a ∈ V,  (g_a ∘ g_a*)(a) = a    (10-5)

Theorem 10-8. The regular representations of V are bijections (e.g. T: a ↦ g_a is
a bijection of V onto Λ_V = {g_a | a ∈ V}).

Proof. Suppose g_a = g_b. Then:

a ≤ a ⊗ (a* ⊗' a) = b ⊗ (a* ⊗' a) ≤ b    (by (8-1))

Hence a ≤ b and similarly b ≤ a, so a = b and T is bijective. Similar arguments
hold for the other regular representations. ∎
(Λ_V, ⊕, ⊗) : (V, ⊕, ⊕') → (V, ⊕, ⊕') : (Λ_V*, ⊕', ⊗')    (10-7)

Proof. From (10-3) we see that g_a ∈ Λ_V and g_a* ∈ Λ_V* satisfy R2 of (10-4), and
they satisfy R3 by the linearity of the left multiplications and their duals. ∎
Theorem 10-11. Let E₁ be a pre-residuated belt satisfying axiom X₁₂, and let
m, n ≥ 1 be given integers. Then:

{g_A | A ∈ M_mn} : E_n → E_m : {g_A* | A ∈ M_mn}

Proof. Evidently, the elements g_A and g_A* satisfy the appropriate version of R3
in (10-4). Also, if A ∈ M_mn then Theorem 8-8 implies:

A ⊗ (A* ⊗' y) ≤ y  and  A* ⊗' (A ⊗ x) ≥ x

so that g_A is a residuomorphism, with residual g_A*. ∎
Corollary 10-12. Let m, n ≥ 1 be given integers. Then {g_A | A ∈ M_mn} and
{g_A* | A ∈ M_mn} are conjugate.

Proof. This follows from Theorem 10-11, and parts (iv) and (v) of Theorem 10-5. ∎

Evidently, we may paraphrase Corollary 10-12 by saying that conjugacy of matrices
is mirrored by conjugacy of the corresponding left multiplications.
Now, suppose with the same notation that A, B ∈ M_mn are such that g_A = g_B. Since
for any X ∈ M_np the products A ⊗ X, B ⊗ X are determined by the action of g_A, g_B
on the columns of X, we have A ⊗ X = B ⊗ X for all such X, whence:

A = A ⊗ (A* ⊗' A) = B ⊗ (A* ⊗' A) ≥ B    (by Theorem 8-5 and Lemma 8-3)

Hence B ≤ A and similarly A ≤ B. Thus if g_A = g_B we have A = B and the
mapping T: A ↦ g_A is bijective. (The logic of the argument is essentially as in
Lemma 10-7 and Theorem 10-8.) Evidently, in the light of Theorem 10-11 we have
established the following result.

Theorem 10-13. The mapping T: A ↦ g_A is bijective, so M_nn has a faithful
representation as a belt of endo-residuomorphisms of E_n, and hence of M_np for
each integer p ≥ 1.
In classical linear operator theory the asterisk is used to denote the set
Hom_{E₁}(E_n, E₁) of all linear functionals, and also to denote the adjoint of a
linear operator. We show that in our theory too the asterisk has these significances.

Theorem 10-14. Let E₁ be a blog. Then for each integer n ≥ 1 we have:

M_1n = (M_n1)*

Proof. Since M_n1 may be identified with E_n, the result follows. ∎

We shall accordingly write E_n* instead of (E_n)*. This notation is slightly
ambiguous, and we must note that E_n* does not denote (E₁*)_n.
Now let E₁ be self-conjugate and let x, z ∈ E_n. If we regard x as an
element of M_n1, then x* ∈ M_1n and so the multiplication x* ⊗ z is defined, the
product lying in E₁. Let us call this the inner product of x and z, written
⟨x, z⟩. The following results are analogous to familiar properties of complex
(pre-) Hilbert spaces.
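In the principal interpretation the inner product of x, z ∈ E_n is ⟨x, z⟩ = x* ⊗ z = max_i(z_i - x_i). A minimal sketch (function name ours); the second assertion illustrates that z ↦ ⟨x, z⟩ is linear over ⊕ = max.

```python
def inner(x, z):
    """<x, z> = x* ⊗ z = max over i of (-x_i + z_i)."""
    return max(zi - xi for xi, zi in zip(x, z))

x = [1, 4, 2]
z = [3, 3, 3]
assert inner(x, z) == 2                  # max(3-1, 3-4, 3-2) = 2

w = [0, 9, 0]
z_plus_w = [max(p, q) for p, q in zip(z, w)]       # z ⊕ w
assert inner(x, z_plus_w) == max(inner(x, z), inner(x, w))
```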
Theorem 10-15. Let E₁ be a blog and n ≥ 1 a given integer. Then for each x ∈ E_n
the function g defined by:

g(z) = ⟨x, z⟩  (∀ z ∈ E_n)    (10-8)

satisfies g ∈ Hom_{E₁}(E_n, E₁); and conversely each g ∈ Hom_{E₁}(E_n, E₁) arises in
this way from some x ∈ E_n.

Proof. For given x ∈ E_n, let (10-8) define g. Then we easily confirm that
g ∈ Hom_{E₁}(E_n, E₁). Conversely, let g ∈ Hom_{E₁}(E_n, E₁). Then by Theorem 5-10,
there is a u ∈ M_1n such that g(z) = u ⊗ z (∀ z ∈ E_n). On taking x = u*,
(10-8) follows. ∎
Now if g ∈ Hom_{E₁}(E_n, E_m), let us define the adjoint ḡ of g to be a mapping
from E_m to E_n such that:

⟨ḡ(y), z⟩ = ⟨y, g(z)⟩  (∀ y ∈ E_m, z ∈ E_n)    (10-9)

By Theorem 5-10 there is a matrix A ∈ M_mn such that:

g(z) = A ⊗ z  (∀ z ∈ E_n),  with residual  g*(y) = A* ⊗' y  (∀ y ∈ E_m)    (10-10)

Then the adjoint ḡ of g exists uniquely and is the residual g*. For, for all
y ∈ E_m, z ∈ E_n:

⟨g*(y), z⟩ = (g*(y))* ⊗ z  (by definition)
 = (A* ⊗' y)* ⊗ z  (by (10-10))
 = (y* ⊗ A) ⊗ z  (by conjugacy)
 = y* ⊗ (A ⊗ z) = ⟨y, g(z)⟩  (by definition)

Hence g* will always play the role of ḡ, which accordingly always exists; and since
g* is unique, it follows that ḡ is unique. ∎
11. TRISECTIONS

11-2. Trisections

Let (W, ⊕, ⊗, ⊕', ⊗') be a belt with duality. We say that W has a trisection
if W is the union of three nonempty, pairwise disjunct sets σ_{-∞}, σ_φ and σ_{+∞},
such that the operations in W satisfy the conditions set out in the table of Fig 11-1.
x   y   x ⊕ y   x ⊗ y   x ⊕' y   x ⊗' y

Fig 11-1
For example, the first row of the table asserts: "if x ∈ σ_{-∞} and y ∈ σ_{-∞}, then
x ⊕ y ∈ σ_{-∞}", and similarly for the other operations.
Proof. From the first line of the table in Fig 11-1 we see that all requisite sums
and products exist within σ_{-∞}, which is therefore a belt with duality. Similarly
σ_φ and σ_{+∞} are belts with duality. Again, the first, second, fourth and fifth
lines of the table show that all requisite sums and products exist within
σ_{-∞} ∪ σ_φ, which is therefore a belt with duality. Similarly σ_φ ∪ σ_{+∞} is a
belt with duality. ∎

For convenience in later manipulative work, we list the following facts which
follow self-evidently from the table in Fig 11-1. (See also Proposition 4-3.)

Proposition 11-3. Let the belt W with duality have a trisection (σ_{-∞}, σ_φ, σ_{+∞}).
Then for all x, y, x₁, …, x_n, y₁, …, y_n ∈ W:
(iv) If Σ⊕_{i=1}^n (x_i ⊗ y_i) ∈ σ_{+∞}, then x_i ⊗ y_i ∈ σ_{+∞} for at least one
index i;
(v)  If Σ⊕_{i=1}^n (x_i ⊗ y_i) ∈ σ_{-∞}, then x_i ⊗ y_i ∈ σ_{-∞} for each index i.
Proof. Suppose if possible that -∞ ∈ σ_φ. Let q lie in the nonempty set σ_{-∞}. We
have, using the table of Fig 11-1:

q = q ⊕ (-∞) ∈ σ_{-∞} ⊕ σ_φ ⊆ σ_φ

This is impossible, since σ_{-∞} and σ_φ are disjunct; and +∞ ∈ σ_φ is similarly
impossible. Since W = σ_{-∞} ∪ σ_φ ∪ σ_{+∞}, we conclude that -∞ ∈ σ_{-∞} and
similarly +∞ ∈ σ_{+∞}. ∎
Suppose a, b ∈ σ_φ and a ≤ x ≤ b. If x ∈ σ_{-∞}, then:

x = a ⊕ x ∈ σ_φ ⊕ σ_{-∞} ⊆ σ_φ

But this is impossible since σ_{-∞}, σ_φ are disjunct by definition. Similarly, if
x ∈ σ_{+∞}:

b = x ⊕ b ∈ σ_{+∞} ⊕ σ_φ ⊆ σ_{+∞}

This is similarly impossible. Since W = σ_{-∞} ∪ σ_φ ∪ σ_{+∞}, we conclude that
x ∈ σ_φ, and that σ_φ is convex.
If x ∈ σ_{-∞} and k ∈ H = σ_φ, then by the table of Fig 11-1:

x ⊕ k ∈ σ_φ

If b = x ⊕ k, then b ≥ x and b ∈ σ_φ (just proved). Hence we cannot have h ≤ x for
any h ∈ σ_φ, otherwise from h ≤ x ≤ b, by the convexity of σ_φ (Theorem 11-5),
follows x ∈ σ_φ, contradicting x ∈ σ_{-∞}. Hence x ∈ Q, so σ_{-∞} ⊆ Q.
Now if y ∈ σ_{+∞} and k ∈ H = σ_φ, then by the table of Fig 11-1:

y ⊕' k ∈ σ_φ

If c = y ⊕' k, then c ≤ y and c ∈ σ_φ (just proved). Hence y ∉ Q. Hence Q ∩ σ_{+∞}
is empty, and evidently Q ∩ σ_φ = Q ∩ H is empty. But W = σ_{-∞} ∪ σ_φ ∪ σ_{+∞},
so Q ⊆ σ_{-∞}, i.e. Q = σ_{-∞}. Similarly R = σ_{+∞}. ∎
Corollary 11-7. A given subset H of a belt W with duality can be the middle of at
most one trisection of W.

Proof. As Theorem 11-6 shows, any trisection (σ_{-∞}, σ_φ, σ_{+∞}) with σ_φ = H is
uniquely identified as (Q, H, R). ∎
If W is a division belt or a blog, we shall say that a subset H of W is a
convex subgroup of W if H is a convex subset of W and:

If W is a division belt then (H, ⊗) is a subgroup of (W, ⊗).
If W is a blog then (H, ⊗) is a subgroup of the group of W.

If a convex subgroup H is a proper subset of W, then we say H is a convex proper sub-
group of W (as will, therefore, always be the case if W is a blog).
Theorem 11-8. Let the belt W with duality have a trisection (σ_{-∞}, σ_φ, σ_{+∞}).
If W is a division belt, or a blog, then (σ_φ, ⊗) is a convex subgroup of W.

Proof. Note that, if W is a blog, then by Lemma 11-4, σ_φ contains only finite
elements. Suppose a, b ∈ σ_φ. Then a ⊗ b⁻¹ ∈ σ_φ. For suppose if possible that
a ⊗ b⁻¹ ∈ σ_{-∞}. Then a = (a ⊗ b⁻¹) ⊗ b ∈ σ_{-∞} ⊗ σ_φ ⊆ σ_{-∞}, a contradiction.
Hence a ⊗ b⁻¹ ∉ σ_{-∞}, and similarly a ⊗ b⁻¹ ∉ σ_{+∞}. Hence a ⊗ b⁻¹ ∈ σ_φ, so
(σ_φ, ⊗) is a (convex) subgroup of W. ∎
Corollary 11-9. Let the belt W with duality have a trisection (σ_{-∞}, σ_φ, σ_{+∞}).
If W is a division belt, then (σ_φ, ⊕, ⊗, ⊕', ⊗') is a division subbelt of W. If W
is a blog, then ({-∞} ∪ σ_φ ∪ {+∞}, ⊕, ⊗, ⊕', ⊗') is a subblog of W.

Proof. Follows directly from Theorem 11-8, and the algebraic structures of division
belts and blogs. ∎
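For the blog obtained by completing the principal interpretation with ±∞, the trisection with middle the group of finite elements can be sketched as follows. The classification function and the particular closure rules checked are our own illustrative selection from the table of Fig 11-1.

```python
INF = float('inf')

def sector(x):
    """Classify x into the trisection (sigma_-inf, sigma_phi, sigma_+inf)."""
    if x == -INF:
        return '-inf'
    if x == INF:
        return '+inf'
    return 'phi'

# The middle sector is a convex subgroup of the finite reals under + (= ⊗):
a, b = 2.5, -7.0
assert sector(a + (-b)) == 'phi'            # a ⊗ b^{-1} stays finite

# Sample closure rules in the style of Fig 11-1:
assert sector(max(-INF, 3.0)) == 'phi'      # sigma_-inf ⊕ sigma_phi ⊆ sigma_phi
assert sector(-INF + 3.0) == '-inf'         # sigma_-inf ⊗ sigma_phi ⊆ sigma_-inf
assert sector(min(INF, 3.0)) == 'phi'       # sigma_+inf ⊕' sigma_phi ⊆ sigma_phi
```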
If, additionally, H is a convex subset of W, then the sets Q and R can equivalently
be characterised by:

Q = {x ∉ H | x < h for some h ∈ H};  R = {x ∉ H | x > h for some h ∈ H}    (11-3)

Proof. If W is linear then for all x ∈ W and h ∈ H, x ≤ h is true if and only if
x > h is false, so Q of (11-1) and Q of (11-2) are the same. Similarly for R.
Suppose additionally that H is convex, and that x ∈ Q of (11-3), say x < h with
h ∈ H. If x ∉ Q of (11-2) then for some k ∈ H, k > x is false, i.e. (by linearity),
k ≤ x. Hence k ≤ x < h, whence x ∈ H by convexity, contradicting (11-3). Thus Q of
(11-3) is contained in Q of (11-2), and the converse is trivial. Similarly for R. ∎
Lemma 11-11. Let H be a nonempty convex subset of a given linearly ordered set W.
Then H, together with Q and R (of (11-3)), form three pairwise disjunct sets whose
union is W.

Proof. Let h ∈ H and x ∈ W. If x ∈ H then by (11-2), x ∉ Q ∪ R. Suppose x ∉ H.
By linearity, there holds exactly one of the relations:

x < h,  x = h,  x > h.

But x = h is inconsistent with x ∉ H, hence (by (11-3)) x belongs to one of Q, R but
(by (11-2)) not to both. ∎
Theorem 11-12. Let W be either a linear division belt other than {φ}, or a linear
blog. Let H be a proper convex subgroup of W and let Q, R be as in (11-3). Then W
has the trisection (Q, H, R).

Proof. H is nonempty, since φ ∈ H. Hence by Lemma 11-11, H, Q and R are pairwise
disjunct with union W. And Q and R are nonempty. For if W is a blog then -∞ ∈ Q
whilst +∞ ∈ R. And if W is a division belt then we can find x ∈ W such that x ∉ H,
since H is a proper subset of W. Then x ∈ Q ∪ R and it is easy to see that x⁻¹ ∈ Q
if and only if x ∈ R (using Lemma 4-6). So W is the union of the pairwise disjunct
nonempty sets Q, H, R.
We must now verify all 36 entries in the table of Fig 11-1, with Q, H, R in the
roles of σ_{-∞}, σ_φ, σ_{+∞} respectively. So let x, y ∈ W.
Now, if W is a blog, and one of x, y is either -∞ or +∞, then (since -∞ ∈ Q and
+∞ ∈ R), the table of Fig 4-1 guarantees the success of the verification in all these
cases. We may restrict ourselves therefore to considering finite elements x, y ∈ W;
in other words we may argue further as for the case that W is a division belt.
We are left with the column headed x ⊗ y, for which we must verify the top five
entries, the bottom four being analogous to the top four. We take the cases
separately, our notation being that q₁, q₂ ∈ Q and h₁, h₂ ∈ H, but all are otherwise
arbitrary.

1. By (11-2), q₁ < h₁ and q₂ < φ ∈ H. Hence by Lemma 4-11:

q₁ ⊗ q₂ < h₁ ⊗ φ = h₁

But h₁ is arbitrary, so q₁ ⊗ q₂ ∈ Q by (11-2), whence Q ⊗ Q ⊆ Q.

2. By (11-2), q₁ < h₁ ⊗ h₂⁻¹ ∈ H. Hence by Lemma 4-11:

q₁ ⊗ h₂ < h₁

But h₁ is arbitrary, so q₁ ⊗ h₂ ∈ Q, whence Q ⊗ H ⊆ Q.

4. The case H ⊗ Q is similar to the case Q ⊗ H discussed under 2.
Theorem 11-13. Let W be either a division belt other than {φ}, or a blog. Then {φ}
is the middle of a trisection of W if and only if W is linear; and the trisection
is then (N, {φ}, P) where N, P are respectively the negative and positive cones of W.
We give now a couple of examples to illustrate the above results. Let (J, ⊗) be
an infinite linearly ordered group, and let W be the set of 3-tuples of elements of J.
We make W into a group by defining multiplication componentwise:

(a₁, a₂, a₃) ⊗ (b₁, b₂, b₃) = (a₁ ⊗ b₁, a₂ ⊗ b₂, a₃ ⊗ b₃)

It is readily verified that (W, ⊗), ordered lexicographically, becomes a linearly
ordered group, which we may construe as a linear division belt (W, ⊕, ⊗, ⊕', ⊗')
as in Section 2-8. Now define:

H = {(x, φ, φ) | x ∈ J}

Then H is a convex proper subgroup of W. Defining Q, R as in (11-1) we confirm that
W has the trisection (Q, H, R).
Now let us take the same group (W, ⊗) of 3-tuples of elements of J, but this
time we take the ordering of (W, ⊗) to be that of J³:

(a₁, a₂, a₃) ≥ (b₁, b₂, b₃) if and only if a₁ ≥ b₁ and a₂ ≥ b₂ and a₃ ≥ b₃.
12-1. σ_φ-Asticity

In Section 11-1, we posed two questions. In the present chapter, we answer the
second of these questions, deferring the first to Chapter 13.
An analogous result holds for matrices in minimax algebra; moreover the sum of
"stochastic" matrices is then also "stochastic". To develop these ideas, we require
the following definition.
Let (E₁, ⊕, ⊗, ⊕', ⊗') be a belt with duality and (σ_φ, ⊕, ⊗, ⊕', ⊗') a
subbelt of E₁ with duality. We shall say that a finite subset S ⊆ E₁ is
σ_φ-astic if there holds:

Σ⊕_{x∈S} x ∈ σ_φ    (12-1)

A matrix A over E₁ will be called row-σ_φ-astic (respectively column-σ_φ-astic,
or doubly σ_φ-astic) if the elements in each row (respectively each column, or each
row and each column) form a σ_φ-astic set.
A convenient terminology is: α-σ_φ-astic, where α may represent any one of the
prefix words row, column or doubly.
To simplify later terminology, we shall admit two special variants of the above
usage. Specifically, if σ_φ is the group G of a blog then we shall say G-astic
as an alternative to σ_φ-astic; and if σ_φ is the trivial group {φ} then we shall
say φ-astic as an alternative to σ_φ-astic.
Σ⊕_{x∈S} x ∈ σ_{+∞} if and only if S ∩ σ_{+∞} is nonempty.
Σ⊕_{x∈S} x ∈ σ_{-∞} if and only if S ∩ (σ_φ ∪ σ_{+∞}) is empty.
Σ⊕_{x∈S} x ∈ σ_φ if and only if S ∩ σ_{+∞} is empty but S ∩ σ_φ is not empty. ∎

We notice en passant that α-σ_φ-astic matrices are all particular cases of matrices
over the belt σ_{-∞} ∪ σ_φ, when E₁ has the trisection (σ_{-∞}, σ_φ, σ_{+∞}).
Lemma 12-2. Suppose the belt E₁ with duality has two trisections (σ_{-∞}, σ_φ, σ_{+∞})
and (σ'_{-∞}, σ'_φ, σ'_{+∞}), with σ_φ ⊆ σ'_φ. Then every row-σ_φ-astic matrix over
E₁ is row-σ'_φ-astic, and similarly for the other meanings of α.

Proof. If A is row-σ_φ-astic, then:

Σ⊕_{j=1}^n {A}_ij ∈ σ_φ   (i=1, …, m)

Hence: Σ⊕_{j=1}^n {A}_ij ∈ σ'_φ   (i=1, …, m)  ∎
The following result is that adumbrated in the introductory remarks of this chapter.

Theorem 12-3. Let (σ_φ, ⊕, ⊗, ⊕', ⊗') be a convex subbelt with duality of a belt
(E₁, ⊕, ⊗, ⊕', ⊗') with duality, and for any integers m, n ≥ 1 let W_mn ⊆ M_mn
denote the set of all (m×n) α-σ_φ-astic matrices for a fixed meaning of α. Then the
(M, W, σ_φ)-homrep statement is true. In particular, this holds if σ_φ is the middle
of a trisection of E₁, or if σ_φ is the trivial belt {φ}.
Proof. First, let α denote the prefix "row", and let A, B ∈ W_mn. Then:

∀ i=1, …, m:  Σ⊕_{j=1}^n {A ⊕ B}_ij = (Σ⊕_{j=1}^n {A}_ij) ⊕ (Σ⊕_{j=1}^n {B}_ij) ∈ σ_φ    (12-3)

so A ⊕ B is row-σ_φ-astic. Next let A ∈ W_mn and B ∈ W_np. Now Σ⊕_{j=1}^p {B}_rj ∈ σ_φ
by hypothesis, for all r=1, …, n. Hence u, v ∈ σ_φ, where by definition:

u = Σ⊕'_{r=1}^n (Σ⊕_{j=1}^p {B}_rj);   v = Σ⊕_{r=1}^n (Σ⊕_{j=1}^p {B}_rj)    (12-4)

Evidently, by (3-18):

∀ r=1, …, n:  {A}_ir ⊗ u ≤ {A}_ir ⊗ (Σ⊕_{j=1}^p {B}_rj) ≤ {A}_ir ⊗ v    (12-5)

Now, Σ⊕_{r=1}^n {A}_ir ∈ σ_φ by hypothesis, so summing (12-5) over r exhibits
Σ⊕_{j=1}^p {A ⊗ B}_ij as lying between (Σ⊕_{r=1}^n {A}_ir) ⊗ u and
(Σ⊕_{r=1}^n {A}_ir) ⊗ v, two elements of σ_φ. Since σ_φ is convex, we infer:

∀ i=1, …, m:  Σ⊕_{j=1}^p {A ⊗ B}_ij ∈ σ_φ

Hence A ⊗ B is row-σ_φ-astic.
Summing up, we have shown that A ⊕ B ∈ 𝓜_mn when A, B ∈ 𝓜_mn, and A ⊗ B ∈ 𝓜_mp
when A ∈ 𝓜_mn and B ∈ 𝓜_np. Apart from some routine verifications,
this proves the (𝓜, σ_φ)-homrep statement when a is "row". The proofs when
a is "column" or "doubly" are similar. The particular case when σ_φ is the middle
of a trisection follows now from Theorem 11-5. Moreover, the trivial belt {φ} is
obviously convex, so the result holds for this case also. ∎
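Under the principal interpretation this closure property is easy to observe numerically: elementwise max and max-plus products of row-G-astic matrices stay row-G-astic. A small sketch with assumed helper names:

```python
NEG = float('-inf')

def mp_add(a, b):
    # A ⊕ B : elementwise max
    return [[max(x, y) for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def mp_mul(a, b):
    # A ⊗ B : {A ⊗ B}_ij = max_r (a_ir + b_rj)
    return [[max(x + y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

def row_astic(a):
    # each row has a finite maximum: no +inf entries, not all -inf
    return all(max(row) != NEG and max(row) != float('inf') for row in a)

a = [[0.0, NEG], [1.0, 2.0]]
b = [[3.0, NEG], [NEG, 0.0]]
```

Both a ⊕ b and a ⊗ b come out row-G-astic, as the theorem predicts for this case.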
Question 2 from Section 11-1 asks: how can we characterise the set of left
matrix multiplications which take the space of finite n-tuples into itself?
Theorem 12-4. Let (σ_φ, ⊕, ⊗, ⊕', ⊗') be a subbelt with duality of a belt
(E₁, ⊕, ⊗, ⊕', ⊗') with duality, and for any integers m, n ≥ 1 let 𝓜_mn
denote the set of all (m×n) row-σ_φ-astic matrices and let 𝓞_mn ⊆ 𝓜_mn denote the
set of all (m×n) matrices A for which {A}_ij ∈ σ_φ for all i = 1, ..., m; j = 1, ..., n.
Then the (𝓜, 𝓞, σ_φ)- and (𝓞, 𝓞, σ_φ)-homrep statements are true.
Evidently 𝓞_mn ⊆ 𝓜_mn for all integers m, n ≥ 1. So (using the (𝓞, 𝓞, σ_φ)-homrep
statement), (𝓞_mn, ⊕) is actually a subspace of (𝓜_mn, ⊕) over σ_φ.
Let A ∈ 𝓜_mn and B ∈ 𝓞_np for given integers m, n, p ≥ 1. Then u, v ∈ σ_φ,
where by definition:

    u = Σ⊕'_{r=1}^{n} Σ⊕_{j=1}^{p} {B}_rj,   v = Σ⊕_{r=1}^{n} Σ⊕_{j=1}^{p} {B}_rj   (12-6)

Evidently, by (3-18):

    (Σ⊕_{r=1}^{n} {A}_ir) ⊗ u ≤ Σ⊕_{j=1}^{p} {A ⊗ B}_ij   (12-7)
    Σ⊕_{j=1}^{p} {A ⊗ B}_ij ≤ (Σ⊕_{r=1}^{n} {A}_ir) ⊗ v   (12-8)
where g_A : x ↦ A ⊗ x for each A, and (σ_φ)_n denotes the set of n-tuples all of
whose elements lie in σ_φ.
To answer Question 2, we must prove the converse also (under appropriate conditions).
For this, we require the following definition. A subset T of a belt (E₁, ⊕, ⊗) will
be called right-cancellative, or simply cancellative, if there holds:
where 𝓞_np, 𝓞_mp are the sets of all (n×p) and (m×p) matrices respectively which have
all their elements in σ_φ. Then A is row-σ_φ-astic.
By hypothesis, A ⊗ B ∈ 𝓞_mp, whence for all i = 1, ..., m:

    (Σ⊕_{r=1}^{n} {A}_ir) ⊗ u ≤ {A ⊗ B}_ij ∈ σ_φ   (12-11)

Hence A is row-σ_φ-astic. ∎

Corollary 12-7.
Let (σ_φ, ⊕, ⊗, ⊕', ⊗') be a convex cancellative subbelt with duality
of a belt (E₁, ⊕, ⊗, ⊕', ⊗') with duality and let A ∈ E_mn for given integers
m, n ≥ 1. Then for each integer p ≥ 1, the left-multiplication g_A
defined by g_A : B ↦ A ⊗ B satisfies:

    g_A(𝓞_np) ⊆ 𝓞_mp

(where 𝓞_np, 𝓞_mp are the sets of all (n×p) and (m×p) matrices respectively which have
all their elements in σ_φ) if and only if A is row-σ_φ-astic. In particular, this holds
when σ_φ is the middle of a trisection of E₁.
Proof. The main result follows immediately from Theorems 12-4 and 12-6. The particular
case when σ_φ is the middle of a trisection follows because σ_φ is then convex and
cancellative, by Theorem 11-5 and Proposition 12-5. ∎
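The content of this corollary can be seen concretely in the principal interpretation: left-multiplication by A keeps finite operands finite precisely when every row of A carries a finite entry and no +∞. A hypothetical sketch (our names, our data):

```python
NEG = float('-inf')

def mp_mul_vec(a, x):
    # g_A : x -> A ⊗ x, with {A ⊗ x}_i = max_j (a_ij + x_j)
    return [max(aij + xj for aij, xj in zip(row, x)) for row in a]

row_gastic = [[1.0, NEG], [0.0, 2.0]]   # every row has a finite entry
not_gastic = [[NEG, NEG], [0.0, 2.0]]   # first row is all -inf

x = [5.0, 7.0]                          # a finite 2-tuple
```

Applying g_A with the row-G-astic matrix yields the finite tuple (6, 9); with the other matrix the first component degenerates to -∞, so finite tuples are not preserved.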
Corollary 128. Let El be a blog with group G and for given integers m, n ~ 1
let vV' mn denot.e the set of rowGastic matrices over El , and G
n
the set of finite
ntuples over El • Then:
A €I x (V x £ E )} (1212)
   n
and if m = n then T is an isomorphism of the belt (\¥rill' ill, €I) to the belt
99
::f, Ar
~r=
c: Stab E
1
(G, G )
n m
(1213)
are combined. Generalising this notion, let E₁ be a belt with duality having a
trisection (σ_{-∞}, σ_φ, σ_{+∞}), and let x, y ∈ E₁. We shall say that the products
x ⊗ y and x ⊗' y are ∃-undefined if one of x, y lies in σ_{-∞} whilst the other lies in σ_{+∞}.
Lemma 13-2. If A ⊗ X ∃-exists and is row-σ_φ-astic then A is row-σ_φ-astic and the
elements of X form a σ_φ-astic set.
Proof. We have:

    {A ⊗ X}_ij = Σ⊕_r ({A}_ir ⊗ {X}_rj)
Using the table of Fig 11-1 we conclude:
(i) If {A}_ir ∈ σ_{+∞} for some i, r, then row r of X does not contain an element
of σ_{-∞} (since A ⊗ X ∃-exists), and then {A ⊗ X}_ij ∈ σ_{+∞} for all j for that i.
(ii) If {A}_ir ∈ σ_{-∞} for all r for some i, then no column of X contains
an element of σ_{+∞}, and then {A ⊗ X}_ij ∈ σ_{-∞} for all j for that i.
Similarly:
(iii) If {X}_rj ∈ σ_{+∞} for some r, j then {A ⊗ X}_ij ∈ σ_{+∞} for all i for that j.
Since all these conclusions contradict the hypothesis that A ⊗ X is row-σ_φ-astic,
we conclude (from (i) and (ii)) that A is row-σ_φ-astic and (from (iii) and (iv))
that the elements {X}_ij form a σ_φ-astic set. ∎
Lemma 13-3. Consider the following four conditions for given matrices A ∈ E_mn,
X ∈ E_np over a belt E₁ with duality, having a trisection (σ_{-∞}, σ_φ, σ_{+∞}):
(i) A contains no element of σ_{+∞};
(ii) A ⊗ X ∃-exists and contains no element of σ_{-∞};
(i)' A is row-σ_φ-astic;
(ii)' A ⊗ X ∃-exists, contains no element of σ_{-∞}, and any column of it containing
an element of σ_{+∞} consists entirely of elements of σ_{+∞}.
Then the conditions: (i) with (ii); (i) with (ii)'; (i)' with (ii); (i)' with
(ii)'; are all equivalent.
Proof. Since clearly (i)' implies (i) and (ii)' implies (ii), it suffices to show
that (i) with (ii) implies (i)' with (ii)'. So suppose that A contains no element
of σ_{+∞} and that A ⊗ X ∃-exists and contains no element of σ_{-∞}.
Now if for some i, j we have {A ⊗ X}_ij ∈ σ_{+∞},
then by (iv) of Proposition 11-3, some {X}_rj ∈ σ_{+∞}, since no {A}_ir ∈ σ_{+∞}. Hence by
(iii) in the proof of Lemma 13-2 we infer that every element of column j of A ⊗ X
lies in σ_{+∞}, proving (ii)'.
Moreover, if A is not row-σ_φ-astic then some row i of A contains only elements
of σ_{-∞}, since A contains no element of σ_{+∞}. But then by (ii) in the proof of
Lemma 13-2, that is also true for A ⊗ X, contradicting (ii). This proves (i)'. ∎
We shall not bother to list the obvious duals and left-right variants of
Lemmas 13-2 and 13-3, since these lemmas are very specifically intended as stepping
stones for later theorems and have little interest in their own right.
Figs 13-1 and 13-2: tables of the combinations P, a, B, ω₁, ω₂ for the products
A* ⊗' (A ⊗ X), (X ⊗ A) ⊗ A*, X ⊗ A* and A* ⊗' X (with their row-dual and
column-dual variants) and the associated classes σ_{-∞}, σ_φ, σ_{+∞}.
∃-existence of the various triple and quadruple matrix products used in Chapter 8.
This question makes sense, of course, only for belts in which we can define both a
conjugacy and a trisection, and we must therefore first develop the necessary ideas
for this.
Accordingly, let (E₁, ⊕, ⊗, ⊕', ⊗') be a self-conjugate belt. We shall say that
E₁ has a compatible trisection (σ_{-∞}, σ_φ, σ_{+∞}) if:

    (σ_{-∞})* = σ_{+∞}, i.e. x ∈ σ_{+∞} if and only if x* ∈ σ_{-∞}   (13-1)
Proposition 13-4. A blog with group G, under the self-conjugacy (7-2), has the
compatible trisection ({-∞}, G, {+∞}). A linear blog, under the self-conjugacy (7-2),
and a linear division belt, under the self-conjugacy (7-1), have the compatible
trisection (N, {φ}, P), where N, P are respectively the negative and positive cones.
Proof. Obviously A* and B are conformable for both A* ⊗ B and A* ⊗' B, and so the
products will both ∃-exist unless there are indices i (1 ≤ i ≤ m), r (1 ≤ r ≤ n),
s (1 ≤ s ≤ p) such that {A*}_ri ⊗ {B}_is is not ∃-defined.
This will only happen if one of {A*}_ri, {B}_is is in σ_{+∞} and the other is in σ_{-∞}.
But from (13-1), this means that {A}_ir and {B}_is are either both in σ_{+∞}, or both
in σ_{-∞}. ∎
Lemma 13-7. Let E₁ be a self-conjugate belt with a compatible trisection (σ_{-∞}, σ_φ, σ_{+∞}).
For A ∈ E_mn, each product A ⊗ A*, A ⊗' A*, A* ⊗ A, A* ⊗' A ∃-exists if and only if all
elements of A lie in σ_φ, i.e. if and only if A ∈ 𝓞_mn.
Proof. Define B = A. Then A fails to have all its elements in σ_φ if and only
if A and B both have an element of σ_{-∞}, or both have an element of σ_{+∞}, on
some one row; and by Lemma 13-6 this is necessary and sufficient that the products
A* ⊗ A, A* ⊗' A should not ∃-exist. Similarly for the other two products. ∎
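In the principal interpretation the conjugate is A* = −Aᵀ (so ±∞ are exchanged), and a product is ∃-defined exactly when no factor pair {−∞, +∞} is called for. The following sketch (names are ours) checks Lemma 13-7 on two small matrices:

```python
NEG, POS = float('-inf'), float('inf')

def conj(a):
    # A* : transpose and negate; -inf and +inf swap under negation
    return [[-a[i][j] for i in range(len(a))] for j in range(len(a[0]))]

def exists_product(a, b):
    # A ⊗ B (or A ⊗' B) is ∃-defined unless some required factor
    # pair is {-inf, +inf}
    return not any({x, y} == {NEG, POS}
                   for row in a for col in zip(*b)
                   for x, y in zip(row, col))

finite = [[0.0, -1.0], [1.0, 2.0]]   # all elements in sigma_phi
leaky  = [[0.0, NEG], [1.0, 2.0]]    # one element in sigma_{-inf}
```

A* ⊗ A ∃-exists for the all-finite matrix, but not for the one with a −∞ entry, since its conjugate then carries a +∞ that meets the −∞.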
The following theorem gives the tie-up between ∃-existence and σ_φ-asticity,
already adumbrated in Section 11-1.
A ∈ 𝓜_mn.
Conversely, let A ∈ 𝓜_mn and X ∈ 𝓜_np be such that A* ⊗' (A ⊗ X) is
∃-defined. Then A ⊗ X is necessarily ∃-defined (being part of A* ⊗' (A ⊗ X)),
and by Lemma 13-6 it cannot be that both A and A ⊗ X have an element of σ_{+∞},
or both have an element of σ_{-∞}, in any row i. Hence, making free use of the table
of Fig 11-1, and the ∃-existence of A ⊗ X, we argue as follows. A cannot have any
Proposition 13-9. Let S be a finite subset of a belt E₁ with duality having a
trisection (σ_{-∞}, σ_φ, σ_{+∞}). Then Σ⊕'_{x∈S} x ∈ σ_φ if and only if
S ∩ σ_{-∞} is empty, but S ∩ σ_φ is not empty. ∎
We can now define row-dually, column-dually, doubly dually and a-dually σ_φ-astic
matrices over E₁ by obvious analogy to the definitions in Section 12-1.
It will be convenient to extend the terminology a-σ_φ-astic matrix to allow a also
to represent the prefixes column-dually, row-dually or doubly dually, and to
introduce the terminology a*-σ_φ-astic matrix, where a* represents the corresponding
dual prefix.
for given integers m, n ≥ 1. Then A is a-σ_φ-astic if:
From Theorem 13-8 we can now infer a number of variants by substituting A* for A,
and/or dualising. All these cases are conveniently summarised in tabular form, as
in the following proposition.
σ_φ-astic, i.e. if and only if A is doubly σ_φ-astic. The other cases are handled
similarly. ∎
Now Theorem 13-8, and its extensions Proposition 13-11 and Corollary 13-12, state,
for a given matrix A, that A is a-σ_φ-astic if and only if a particular form of
product ∃-exists for some matrix X. The following two results enable us to specify
more closely for what range of matrices X such products do ∃-exist for given
(a-σ_φ-astic) A.
Theorem 13-13. Let E₁ be a self-conjugate belt with a compatible trisection
(σ_{-∞}, σ_φ, σ_{+∞}) and let A ∈ 𝓜_mn be given. Then the product A* ⊗' (A ⊗ X)
∃-exists if and only if either X consists of elements of σ_φ, i.e. X ∈ 𝓞_mn, or ...
Conversely, if A does not contain any element of σ_{+∞} then A* does not contain any
element of σ_{-∞}, so if A ⊗ X ∃-exists and also does not contain any element of σ_{-∞},
then the product A* ⊗' (A ⊗ X) is ∃-defined because the matrices are conformable, and
have elements in the belt σ_φ ∪ σ_{+∞}.
Lastly, if X ∈ 𝓞_mn, then A ⊗ X and A* ⊗' (A ⊗ X) ∃-exist by Proposition 13-1
because the matrices are conformable. ∎
only that the relevant matrices are conformable for the relevant products.
is valid for the combinations of P, a, B, ω₂ in the table of Fig 13-1, provided only
that the relevant matrices are conformable for the relevant products.
Proof. According to Lemma 13-3, if A ⊗ X ∃-exists and does not contain any element of
σ_{-∞}, then the conditions "A does not contain any element of σ_{+∞}" and "A is
row-σ_φ-astic" are equivalent. Hence the equivalence of Corollary 13-14 and
Corollary 13-15 is proved for the first row of the table of Fig 13-1, and the other
cases are proved similarly and dually. ∎
is valid for the combinations of P, a, B in the table of Fig 13-2, provided only that
the relevant matrices are conformable for the relevant products.
Proof. The product A ⊗ (A* ⊗' (A ⊗ X)) ∃-exists if and only if the products
A ⊗ (A* ⊗' Y) and A* ⊗' (A ⊗ X) both ∃-exist (where Y is A ⊗ X). By applying
Corollary 13-15 to these two triple products, we see that we have just two
possibilities, if the quadruple product A ⊗ (A* ⊗' (A ⊗ X)) ∃-exists:
In case (ii), Lemma 13-3 applies and we infer that if A ⊗ X contains any elements of
σ_{+∞} at all, then it contains a complete column of elements of σ_{+∞}. But this would
imply that A* ⊗' Y, which is A* ⊗' (A ⊗ X), would contain elements of σ_{+∞} also.
Hence A ⊗ X contains no elements of either σ_{-∞} or σ_{+∞}, i.e. in case (ii), A ⊗ X
contains elements of σ_φ only.
Conversely, if X contains elements of σ_φ only, then by Proposition 13-1, the
product A ⊗ (A* ⊗' (A ⊗ X)) ∃-exists provided the matrices are conformable. And if
Y = A ⊗ X ∃-exists and contains elements of σ_φ only and A is doubly σ_φ-astic, then
A ⊗ (A* ⊗' Y) ∃-exists by Proposition 13-11.
This proves the corollary for the first line of the table of Fig 13-2, and the
proof for the other lines proceeds similarly and dually. ∎
13-4. ∃-Defined Residuomorphisms
As explained in Section 13-2, our aim in the present chapter is to investigate
how much of our theory of residuomorphisms, as developed in Chapters 8 to 10, can
be carried over when all products are required to be ∃-defined. Specifically:
1. What is the largest subset of E_np on which every doubly σ_φ-astic (m×n)
matrix induces a ∃-defined residuated left-multiplication?
2. What is the largest subset of the set 𝓜_mn of doubly σ_φ-astic (m×n)
matrices, every member of which induces a ∃-defined residuated left-
multiplication on the whole of E_np?
We use the notations of Section 10-4 and restrict ourselves as in Section 10-4 to
the case p = 1, i.e. to operands which are columns. The notation (σ_φ)_n denotes
the subset of E_n of all n-tuples with all elements in σ_φ.
Theorem 13-17. Let E₁ be a generalised blog, having the trisection (σ_{-∞}, σ_φ, σ_{+∞}).
For given integers m, n ≥ 1 let 𝓜_mn be the commutative band of doubly σ_φ-astic
matrices.
Then (i) if u ∈ (σ_φ)_n, all the relevant multiplications are ∃-defined.
(ii) If, under the given conditions, we could find u ∈ S but u ∉ (σ_φ)_n,
we should have {u}_i ∈ σ_{-∞} ∪ σ_{+∞} for some i (1 ≤ i ≤ n). If in fact
{u}_i ∈ σ_{+∞} for some i (1 ≤ i ≤ n), let A ∈ 𝓜_mn be such that {A}_1i ∈ σ_{-∞}
but {A}_rs ∈ σ_φ otherwise. Then A ∈ 𝓜_mn but A ⊗ u is not ∃-defined. And if
{u}_i ∈ σ_{-∞} for some i (1 ≤ i ≤ n), let A ∈ 𝓜_mn be such that {A}_1i ∈ σ_φ,
{A}_1s ∈ σ_{-∞} otherwise, and {A}_rs ∈ σ_φ (r ≠ 1) otherwise. We easily verify
that A* ⊗' u is not ∃-defined, where y = A ⊗ u ∈ T; and similarly if y ∈ T then
y ∈ (σ_φ)_m. Hence if u ∈ S then u ∈ (σ_φ)_n. ∎
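The construction in part (ii) is easy to reproduce in the principal interpretation: given an operand u with a +∞ coordinate, one manufactures a doubly G-astic A whose −∞ entry meets it. A sketch under our assumed encoding:

```python
NEG, POS = float('-inf'), float('inf')

def exists_mul_vec(a, u):
    # A ⊗ u is ∃-defined iff no factor pair {-inf, +inf} occurs
    return not any({aij, uj} == {NEG, POS}
                   for row in a for aij, uj in zip(row, u))

u = [POS, 0.0]                   # u_1 lies in sigma_{+inf}
a = [[NEG, 0.0], [0.0, 0.0]]     # {A}_11 = -inf, every other entry phi = 0
# a is doubly G-astic: every row and every column still has a finite entry
```

A ⊗ u fails to be ∃-defined because the −∞ of A multiplies the +∞ of u, while the same A applied to a finite operand raises no such pair.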
Let us now recall the notations of Section 8-4. Then from Theorem 8-9,
Proposition 8-10 and Theorem 13-17 we immediately derive the following.
Proposition 13-18. Let E₁ be a generalised blog, having the trisection
(σ_{-∞}, σ_φ, σ_{+∞}). If the notation g̃_A (etc.) is taken to be synonymous with
the notation A* ⊗' (etc.) then the diagram of Fig. 8-3 is valid when A is doubly
dually σ_φ-astic and the diagram of Fig. 8-4 is valid when A is doubly σ_φ-astic,
all multiplications being ∃-defined. ∎
13-5. 𝓞_mn As Operators
Theorem 13-17 contains an asymmetry, in that part (ii) requires m, n > 1 whereas
part (i) only requires m, n ≥ 1. The reason for this is that 𝓜_mn = 𝓞_mn if either
m or n is 1, and then all multiplications are ∃-defined by Proposition 13-1. If
m, n > 1, then 𝓞_mn is a proper subset of 𝓜_mn. Part (ii) of Theorem 13-17 tells
us that (σ_φ)_n is the greatest subset of E_n on which all multiplications are
∃-defined; if we wish to enlarge the set of operands from (σ_φ)_n to E_n we must
therefore reduce the set of operators, and the following result shows in fact that
we must reduce it from 𝓜_mn to 𝓞_mn. This result thus answers question 2 of the
previous section, and also takes care of the case m = 1 or n = 1.
Theorem 13-19. Let E₁ be a generalised blog having the trisection (σ_{-∞}, σ_φ, σ_{+∞}).
For given integers m, n ≥ 1, we have:
Finally, from Theorem 8-11 and Proposition 13-1 (and its dual) we have the
following result. ∎
As foreshadowed in Section 11-1, we have now related σ_φ-asticity to two apparently
separate questions. From Theorem 12-6, we know that if the left-multiplication g_A
takes (σ_φ)_n into (σ_φ)_m then A must be row-σ_φ-astic. By the dual hereof, if the
residual g̃_A is to take (σ_φ)_m into (σ_φ)_n, then A* must be row-dually σ_φ-astic.
And this is exactly the conclusion we arrive at if we demand that g_A and g̃_A
can be applied in combination without producing expressions which are ∃-undefined.
Theorem 13-21. Let E₁ be a blog with group G, and for given integers m, n ≥ 1,
let 𝓜_mn denote the set of m×n doubly G-astic matrices. If ...
then it satisfies:
Proof. The "if" follows from Theorem 13-17. On the other hand, by Theorem 5-10:

    𝓣 ⊆ Stab_{E₁}(G_n, G_m)

So, by Theorem 12-6, 𝓣 consists of certain left-multiplications g_A where each A
is row-G-astic. Hence, by Corollary 10-12 and the uniqueness of the conjugate, 𝓣*
consists of the corresponding dual left-multiplications g*_A:

    g*_A : B ↦ A* ⊗' B   (13-2)  ∎
14. THE EQUATION A ⊗ x = b OVER A BLOG
Proposition 14-1. The set of solutions of (14-1) is either empty or is a
commutative band. ∎
In order to develop a non-trivial theory of solutions of (14-1), we must set
extra conditions on E₁, taking care always to include the principal interpretation
as a possible case. In fact, we shall base our analysis on the identities of
Chapter 8, and so shall usually take E₁ to be pre-residuated and satisfying axiom
X12. Since we wish our results to have some practical relevance, we shall also
investigate the question of ∃-existence of the expressions which we use, in the case
that E₁ is a generalised blog, that is to say that:
(i) E₁ is pre-residuated;
(ii) E₁ satisfies axiom X12;
(iii) E₁ has a compatible trisection (σ_{-∞}, σ_φ, σ_{+∞}).
Lellllla 142. Let El be a belt with duality having a trisection (0_, o,P' 0+00).
Let !, y e:

fJJ (A).
~nr
Then the product 1:. 0) <! i ! ) is Idefined unless we can find
Theorem 143. Let El be a preresiduated belt satisfying axiom X12 • Then (141)
•
~ ti A* 0' (A 8 '=.) = A* 0' b = x
(σ_{+∞})_n, (σ_{-∞})_n or (σ_φ)_n respectively, according as case (i), (ii) or (iii) arises.
Proof. The 'if' follows from Propositions 13-1 and 13-11, with Theorem 14-3.
Conversely, if A* ⊗' b ∃-exists and is a ∃-solution of (14-1), then
A* ⊗' b ∈ E_n and A ⊗ (A* ⊗' b) = b. Hence ... element of σ_{-∞}, so A ...,
and we have case (ii) of the present theorem.
Finally, every solution is a ∃-solution in cases (i) and (ii) by Proposition
13-1. In case (iii): every solution x satisfies x ≤ y, where y = A* ⊗' b
(Theorem 14-3), so x ⊕ y = y with y ∈ (σ_φ)_n, whence x ∈ (σ_{-∞} ∪ σ_φ)_n by
Fig. 11-1. But A (being σ_φ-astic) also has all elements in σ_{-∞} ∪ σ_φ, so
A ⊗ x is ∃-defined in case (iii) also. ∎
For future reference, we record in the following result a number of dual and
left-right generalisations of Theorems 14-3 and 14-4.
Corollary 14-5. Let E₁ be a pre-residuated belt satisfying axiom X12 and let A, b ...
assumption of the existence of solutions.) The other cases are proved similarly
and dually. ∎
Problem (14-1) is soluble if and only if A ⊗ (A* ⊗' b) = b; and every solution   (14-2)
is a ∃-solution if A ⊗ (A* ⊗' b) ∃-exists.
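In the principal interpretation the criterion (14-2) is directly computable: the principal ∃-solution A* ⊗' b has components min over i of (b_i − a_ij), and (14-1) is soluble exactly when substituting it back reproduces b. A sketch with assumed function names:

```python
def mp_mul_vec(a, x):
    # A ⊗ x : max-plus matrix-vector product
    return [max(aij + xj for aij, xj in zip(row, x)) for row in a]

def principal_solution(a, b):
    # A* ⊗' b : component j is min_i (b_i - a_ij)
    n = len(a[0])
    return [min(b[i] - a[i][j] for i in range(len(a))) for j in range(n)]

def soluble(a, b):
    # criterion (14-2): A ⊗ x = b is soluble iff A ⊗ (A* ⊗' b) = b
    return mp_mul_vec(a, principal_solution(a, b)) == b
```

For A = [[0, 1], [2, 0]] and b = (3, 5) the principal solution is (3, 2) and the equation is soluble; for the all-zero matrix with b = (3, 5) it is not.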
Theorem 14-4 identifies the cases in which (14-1) has a ∃-defined ∃-solution.
All solutions are then ∃-solutions and the question arises: can we find all
solutions? Obviously, we require further assumptions on E₁ before this is
possible, and our analysis will assume that E₁ is a blog.
Accordingly, we now address the following problem:

    To find all solutions of (14-1), given that E₁ is a blog and that
    A ⊗ (A* ⊗' b) ∃-exists and equals b.   (14-3)

Theorem 14-4 shows that three cases are relevant. Cases (i) and (ii) are
disposed of in the following result, which is an immediate application of
Proposition 4-3 and Fig. 4-1 (bearing in mind that A must be finite in cases (i)
and (ii)).
Proposition 14-6. Let E₁ be a blog. If all elements of b are -∞, then (14-3) has
the n-tuple with all elements equal to -∞ as its unique solution. If all elements
of b are +∞, then solutions to (14-3) are exactly those elements of E_n which have
at least one element equal to +∞. ∎
Case (iii), which now remains, is the important case where A is doubly
G-astic and b is finite. In Chapter 15, we obtain all solutions of (14-1) for
this case, assuming that E₁ is a linear blog. As a preliminary step, we consider
now the particular case that E₁ is the 3-element blog {-∞, φ, +∞}. Then b, being
finite, has all elements equal to φ.
Lemma 14-7. Let E₁ be the 3-element blog, let A be doubly φ-astic and b be
finite. Then no solution of (14-1) contains the element +∞.
(14-4) displays an example system of this kind, with all matrix entries equal to
φ or -∞ and all right-hand sides equal to φ. The m equations may be combined into
the single requirement that a product of their left-hand sides equal φ
(for under the given conditions such a product equals φ if and only if all its
factors equal φ (Proposition 4-3 with E₁ the 3-element blog)).
Multiplying out, and using the fact that x ⊗ x = x = x ⊕ x when x is in the
3-element blog, we have:

    Σ⊕_{r=1}^{k} P_r = φ   (14-5)

where each P_r is a product of some of the variables y_j, each product containing
at least one y_j, no y_j occurring to a power greater than one in a given product
P_r, and no two products P_r containing exactly the same selection of y_j's.
(Terms containing a coefficient -∞ may be dropped, so that all coefficients in
(14-5) are equal to the multiplicative identity φ, which need not be written.
Not all terms get dropped, since a coefficient ≠ -∞ is present at least once in
each row of the G-astic matrix. Identical P_r's are amalgamated, since
P_r ⊕ P_r = P_r.) We now introduce the n functions ȳ_j (j = 1, ..., n) where:

    ȳ_j = φ if y_j = -∞;  ȳ_j = -∞ otherwise.
Now, according to Lemma 14-7, no solution contains +∞. But:

    y_j ⊕ ȳ_j = φ and y_j ⊗ ȳ_j = -∞   (j = 1, ..., n) if each y_j is either -∞ or φ.   (14-6)

So, for a given product P_r in (14-5), let y_{n1}, ..., y_{nt} be the variables y_j
which do not occur in P_r. From (14-6) we have:

    P_r = P_r ⊗ (y_{n1} ⊕ ȳ_{n1}) ⊗ ... ⊗ (y_{nt} ⊕ ȳ_{nt}) if each y_j is either -∞ or φ.
Clearly, there are exactly 2^n different possible minterms, and each given
minterm is capable of assuming the value φ in one and only one way, namely by
giving each variable y_j, for j = 1, ..., n, the value φ if y_j itself occurs in the
minterm, and the value -∞ if ȳ_j occurs in the minterm. For each of the 2^n
different ways of assigning one of the values -∞, φ to each y_j, exactly one of the
2^n possible minterms takes the value φ, and the others take the value -∞.
Since each P_r has been expressed as a sum of minterms, we can now express
Σ⊕_{r=1}^{k} P_r as a sum of minterms, i.e. we arrive at a disjunctive normal form
[27]. In view of the law x ⊕ x = x, we may assume that any given minterm occurs
at most once in the sum. Such a sum takes the value φ if and only if (exactly)
one of the minterms which occur in it takes the value φ, and this happens for a
unique combination of assignments of -∞ or φ to the variables y_j. The solutions
to (14-5) are therefore precisely those assignments which make some minterm
in the disjunctive normal form of Σ⊕_{r=1}^{k} P_r equal to φ.
... ⊕ (y₁ ⊗ y₂ ⊗ y₃) = φ

The left-hand side is in disjunctive normal form, and the solutions to (14-4) are
thus:

    y₁    y₂    y₃
    φ     φ     φ
    φ     φ     -∞
    φ     -∞    φ
    -∞    φ     φ
    -∞    -∞    φ
15. LINEAR EQUATIONS OVER A LINEAR BLOG
We multiply each equation Σ⊕_j a_ij ⊗ x_j = b_i through from the left by b_i^{-1},
obtaining as an equivalent set of equations:

    Σ⊕_{j=1}^{n} α_ij ⊗ x_j = φ   (i = 1, ..., m)   (15-1)

where α_ij = b_i^{-1} ⊗ a_ij.
In the matrix [α_ij], we inspect each column, and we mark every element α_ij which
is greatest in its column. At least one element per column will thus be marked.
For example, if E₁ has the principal interpretation, let the given
equations be the two equations in two unknowns displayed as (15-2).
Lemma 15-1. Let E₁ be a linear blog. Suppose there is some row i of the matrix ...

    x_j < β_j^{-1}   (from (15-5)) ∎
We are left now with the case that in each row and each column of [α_ij], at
least one element is marked. We now transpose into a Boolean problem, as follows.
Introduce the Boolean variables y_j (j = 1, ..., n) and the Boolean matrix [γ_ij],
where γ_ij = φ if α_ij is marked in [α_ij], and γ_ij = -∞ otherwise.
Now solve the Boolean equations:

    Σ⊕_{j=1}^{n} γ_ij ⊗ y_j = φ   (i = 1, ..., m)   (15-6)

as described in Section 14-3. (From the way in which it was derived, [γ_ij] is
clearly doubly G-astic.) We assert:

    Each solution to (15-6) consists of an assignment of one of the
    values -∞, φ to each y_j. For each solution to (15-6) we obtain a set
    of solutions to (15-1) as follows:   (15-7)
    For each j = 1, ..., n: if y_j = φ then x_j is given the value
    β_j^{-1}; if y_j = -∞ then x_j is given an arbitrary value such that
    x_j < β_j^{-1}.

To illustrate the procedure, we derive for equations (15-2) the following
Boolean problem:
Introducing ȳ₂:

    (y₁ ⊗ y₂) ⊕ (y₁ ⊗ (y₂ ⊕ ȳ₂)) = φ

i.e.

    (y₁ ⊗ y₂) ⊕ (y₁ ⊗ ȳ₂) = φ

with the solutions:

    y₁    y₂
    φ     φ
    φ     -∞

Applying procedure (15-7) to these solutions, we derive for (15-2) the solutions:

    x₁ = β₁^{-1},  x₂ = β₂^{-1}
    x₁ = β₁^{-1},  x₂ < β₂^{-1}

From (15-2) we have β₁^{-1} = 3 and β₂^{-1} = 9, so we have as the final complete
set of solutions:

    x₁ = 3;  x₂ = 9
    x₁ = 3;  x₂ < 9
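The whole of procedure (15-7) fits in a few lines in the principal interpretation: normalise to α_ij = a_ij − b_i, mark the column maxima, solve the Boolean system by enumeration, and read off the solution classes. The matrix below is our own illustration, not the system (15-2); a class entry None means "any value strictly below β_j⁻¹":

```python
from itertools import product

def solution_classes(a, b):
    # returns tuples over (value | None): value = forced x_j = beta_j^{-1},
    # None = x_j may take any value < beta_j^{-1}
    m, n = len(a), len(a[0])
    alpha = [[a[i][j] - b[i] for j in range(n)] for i in range(m)]
    beta = [max(alpha[i][j] for i in range(m)) for j in range(n)]
    marked = [[alpha[i][j] == beta[j] for j in range(n)] for i in range(m)]
    classes = []
    for ys in product([True, False], repeat=n):      # True ~ y_j = phi
        if all(any(marked[i][j] and ys[j] for j in range(n))
               for i in range(m)):
            classes.append(tuple(-beta[j] if ys[j] else None
                                 for j in range(n)))
    return classes

a = [[-3.0, -10.0], [-3.0, -9.0]]   # hypothetical doubly G-astic matrix
b = [0.0, 0.0]
```

Here β₁⁻¹ = 3 and β₂⁻¹ = 9, and the classes come out as (3, 9) and (3, None): x₁ = 3 always, x₂ either 9 or anything below 9, the same shape of answer as in the worked example above.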
(At this point, it is perhaps of interest to note that for a negligible extra
effort we can solve equations of the form (A ⊗ x) ⊕ p = b, with A, b as before.
Clearly, unless p ≤ b, the equations are inconsistent. Now
determine the column-maxima β_j as before, but in transposing into the Boolean
problem, drop any equation for which p_i = b_i, since this equation is automatically
satisfied by all solutions in which β_j ⊗ x_j ≤ φ (j = 1, ..., n).
At the same time, drop the p-column altogether, since if p_i < b_i then this p_i
does not constrain the variables.)
the admissible assignments such that for each i = 1, ..., m, there holds
x_j = β_j^{-1} for at least one j such that α_ij is marked.
Proof. Since α_ij ≤ β_j for all i = 1, ..., m and j = 1, ..., n, we have:

    α_ij ⊗ x_j ≤ β_j ⊗ x_j ≤ φ   (15-8)

for all i = 1, ..., m and j = 1, ..., n and any admissible assignment.
Now suppose α_ij is an unmarked element in the matrix [α_ij], and that x_j
receives some value under an admissible assignment. Then either x_j = -∞, in which
case α_ij ⊗ x_j < φ, or x_j is finite, in which case α_ij ⊗ x_j < φ also; for if
α_ij ⊗ x_j = φ could occur in (15-8) we should have:

    α_ij = x_j^{-1} ≥ β_j   (from (15-4))

whereas α_ij was unmarked, and therefore α_ij < β_j.
Hence for every admissible assignment, we have:

    α_ij ⊗ x_j < φ if α_ij is unmarked
    α_ij ⊗ x_j ≤ φ if α_ij is marked

Hence, if in a given row i there is at least one unmarked element, we have,
by the linearity of E₁:

    Σ⊕_{α_ij unmarked} α_ij ⊗ x_j < φ

for every admissible assignment.
So a necessary and sufficient condition for x₁, ..., x_n to be a solution to
(15-1) is that it be an admissible assignment such that:

    Σ⊕_{α_ij marked} α_ij ⊗ x_j = φ   (i = 1, ..., m).   (15-9)

Hence, since each α_ij ⊗ x_j ≤ φ, we infer that the solutions of (15-1) are exactly
the admissible assignments such that for every i = 1, ..., m there holds
α_ij ⊗ x_j = φ for at least one j such that α_ij is marked. But α_ij is marked if
and only if α_ij = β_j, and the result follows. ∎
Now since the 3-element blog is a possible case for E₁, exactly the same procedures
and reasoning apply to the Boolean equations (15-6), for which an admissible
assignment is just any of the 2^n different assignments of the values φ, -∞ to y_j
(j = 1, ..., n). We begin by marking the column-maxima in (15-6), i.e. the φ's.
But by the way the matrix [γ_ij] was created, we see that element γ_ij is φ if and
only if α_ij is marked. Using this fact in the version of Lemma 15-2 appropriate
to the case E₁ = the 3-element blog, we infer for the Boolean problem (15-6):
Proposition 15-3. The solutions of (15-6) are exactly the assignments of the
values φ or -∞ to the variables y_j such that for every i = 1, ..., m, there holds
y_j = φ for at least one j such that α_ij is marked in (15-1). ∎
We can now prove our main result.
Theorem 15-4. Let E₁ be a linear blog. Then procedure (15-7) yields all solutions ...
admissible assignments, and that by Proposition 15-3 they ensure that for every
i = 1, ..., m, there holds x_j = β_j^{-1} for at least one j such that α_ij is
marked. Hence by Lemma 15-2 every assignment produced by the procedure is a
solution to (15-1).
Conversely, if a certain admissible assignment is a solution to (15-1), let
us for j = 1, ..., n give y_j the value φ if x_j = β_j^{-1} in this assignment,
and y_j the value -∞ if x_j < β_j^{-1} in this assignment.
From Lemma 15-2 and Proposition 15-3 we see that the resulting assignment of
values to the y_j's is a solution to (15-6), and it is easy to confirm that the
given solution to (15-1) belongs to exactly the class of solutions of (15-1) which
arises by applying the solution procedure (15-7) to the solution of (15-6) just
created.
Hence the class of solutions generated by the solution procedure (15-7) is
exactly the class of solutions to (15-1).
Finally, it is clear that the solution procedure involves no duplication of
solutions, since two distinct solutions to (15-6) give rise under the solution
procedure to different combinations of variables x_j receiving the value β_j^{-1}. ∎
be finite. Let p ∈ E_m have all m coordinates equal to φ, and let C ∈ E_mn be
defined by {C}_ij = ({b}_i)^{-1} ⊗ {A}_ij (i = 1, ..., m; j = 1, ..., n). Then the
equations A ⊗ x = b and C ⊗ x = p have identical solution-sets, and all solutions
are ∃-solutions.
Proof. Evidently, {C}_ij = -∞ if and only if {A}_ij = -∞, and {C}_ij is otherwise
finite. So C is doubly G-astic, and clearly p is finite. Hence all solutions are
∃-solutions, by Theorem 14-4.
Also, A ⊗ x = b if and only if Σ⊕_{j=1}^{n} {A}_ij ⊗ x_j = {b}_i (i = 1, ..., m),
and this clearly holds if and only if for i = 1, ..., m we have

    Σ⊕_{j=1}^{n} (({b}_i)^{-1} ⊗ {A}_ij ⊗ x_j) = ({b}_i)^{-1} ⊗ Σ⊕_{j=1}^{n} ({A}_ij ⊗ x_j) = φ = {p}_i

i.e. if and only if C ⊗ x = p. ∎
Theorem 15-6. Let E₁ be a linear blog, and let A ∈ E_mn be doubly G-astic and
b ∈ E_m be finite. Then a necessary and sufficient condition that the equation
A ⊗ x = b shall have at least one solution is that each row of the matrix C shall
contain at least one column-maximum of C.
Theorem 15-7. Under the same hypotheses, a necessary and sufficient condition that
the equation A ⊗ x = b shall have exactly one solution is that each row of the
matrix C shall contain at least one column-maximum of C, and for each column j of C
there shall be at least one row i in which {C}_ij is the only marked element
on row i.
Proof. In view of Theorem 15-6, we may assume that C has at least one column-maximum
on each row.
Now in terms of Lemma 15-2, the solutions of A ⊗ x = b consist of certain
admissible assignments, whereby certain variables receive the value β_j^{-1} and
others receive an arbitrary value < β_j^{-1}.
So if, for each column j in C, we can find at least one row i in which only
{C}_ij is marked, then Lemma 15-2 tells us that every solution has x_j = β_j^{-1},
so there is only one solution.
Conversely, if for a particular column k in C there is no row i in which
only {C}_ik is marked, then Lemma 15-2 tells us that all the assignments:

    x_j = β_j^{-1} (j ≠ k),  x_k ≤ β_k^{-1}

are solutions, for arbitrary x_k ≤ β_k^{-1}, so there is no unique solution. (This
is true even if E₁ is the 3-element blog, since the β_j's are always finite, and so
equal φ if E₁ is the 3-element blog. Hence "arbitrary x_k ≤ β_k^{-1}"
still implies a non-unique solution.) ∎
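Both criteria reduce, in the principal interpretation, to inspecting the pattern of marked (column-maximal) entries of C = [b_i⁻¹ ⊗ a_ij]. A sketch with assumed names:

```python
def solubility(a, b):
    # Theorem 15-6: soluble iff every row of C carries a column-maximum;
    # Theorem 15-7: unique iff, in addition, every column is on some row
    # the only marked entry of that row.
    m, n = len(a), len(a[0])
    c = [[a[i][j] - b[i] for j in range(n)] for i in range(m)]
    beta = [max(c[i][j] for i in range(m)) for j in range(n)]
    marked = [[c[i][j] == beta[j] for j in range(n)] for i in range(m)]
    soluble = all(any(row) for row in marked)
    unique = soluble and all(
        any(marked[i][j] and sum(marked[i]) == 1 for i in range(m))
        for j in range(n))
    return soluble, unique
```

For example, a matrix whose marks form a diagonal gives a unique solution, while a row carrying two marks in the only row that marks some column gives solubility without uniqueness.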
We shall now give Theorems 15-6 and 15-7 another form, which will have
applications in the immediately following chapters and also later when we discuss
canonical forms of matrices.
Let φ as usual be the multiplicative identity element. From Section 12-1
we recall the use of the term φ-astic.
Theorem 15-8. Let E₁ be a blog and let A ∈ E_mn be doubly G-astic and b ∈ E_m
be finite. Then a necessary and sufficient condition that the equation A ⊗ x = b
shall have at least one solution is that we can find n finite elements
x₁, ..., x_n such that:

    x_j = Σ⊕'_{i=1}^{m} ({A*}_ji ⊗' {b}_i)   (j = 1, ..., n)
    {b}_i = Σ⊕_{j=1}^{n} ({A}_ij ⊗ x_j)   (i = 1, ..., m)

equivalently, such that:

    (Σ⊕'_{i=1}^{m} ({A*}_ji ⊗' {b}_i))^{-1} ⊗ x_j = φ   (j = 1, ..., n)
    ({b}_i)^{-1} ⊗ Σ⊕_{j=1}^{n} ({A}_ij ⊗ x_j) = φ   (i = 1, ..., m)   (15-10)

Now:

    (Σ⊕'_{i=1}^{m} ({A*}_ji ⊗' {b}_i))^{-1}
        = (Σ⊕'_{i=1}^{m} ({A*}_ji ⊗' {b}_i))*
        = Σ⊕_{i=1}^{m} ({A*}_ji ⊗' {b}_i)*
        = Σ⊕_{i=1}^{m} (({b}_i)* ⊗ ({A*}_ji)*)
        = Σ⊕_{i=1}^{m} (({b}_i)^{-1} ⊗ {A}_ij)

Hence the first relation in (15-10) gives:

    Σ⊕_{i=1}^{m} (({b}_i)^{-1} ⊗ {A}_ij ⊗ x_j) = φ   (j = 1, ..., n)
Theorem 15-9. Under the same hypotheses, a necessary and sufficient condition that
the equation A ⊗ x = b shall have exactly one solution is that we can find n finite
elements x₁, ..., x_n such that the matrix B = [({b}_i)^{-1} ⊗ {A}_ij ⊗ x_j] is
doubly φ-astic and contains a strictly doubly φ-astic (n × n) submatrix.
Proof. The positions of the column-maxima in a matrix are clearly unchanged if all
the elements in a given column are multiplied by the same finite constant x, since
(when x is finite) there holds:

    a > b if and only if a ⊗ x > b ⊗ x

Hence for any finite x₁, ..., x_n the matrix B has its column-maxima in
precisely the same positions as the matrix C = [({b}_i)^{-1} ⊗ {A}_ij].
So if finite x₁, ..., x_n exist such that B is doubly φ-astic with a strictly
doubly φ-astic (n × n) submatrix, then the column-maxima lie in C in a pattern
which, according to Theorem 15-7, implies that the equation A ⊗ x = b has precisely
one solution.
Now let the column-maxima of the matrix C = [({b}_i)^{-1} ⊗ {A}_ij] have the
values β₁, ..., β_n respectively (necessarily finite), and let x_j = β_j^{-1}
(j = 1, ..., n). Evidently the matrix:

    B = [({b}_i)^{-1} ⊗ {A}_ij ⊗ x_j]

then has all column-maxima equal to φ, i.e. B is column φ-astic.
If the equation A ⊗ x = b is uniquely soluble then, according to Theorem 15-7,
the column-maxima lie in C in a pattern which implies that B is doubly φ-astic with
a strictly doubly φ-astic (n × n) submatrix. ∎
Subject to:

    Σ_{i=1}^{m} y_ij = 1   (j = 1, ..., n; (i, j) ∈ a)
    y_ij ≥ 0   ((i, j) ∈ a)

shall satisfy:

    Σ_{j=1, (i,j)∈a}^{n} ξ_ij > 0   (i = 1, ..., m).

Proof. The given problem is one of linear programming [33]. It has a finite
solution because it calls for the minimisation of a continuous function on a
compact set. Let {ξ_ij | (i, j) ∈ a} be a solution.
From the theory of linear programming we know that the dual of the given
problem is:

  Maximise Σ_{j=1}^{n} x_j   (15-14)

  Subject to x_j ≤ b_i − a_ij   (∀(i, j) ∈ a)

This problem has a certain solution π₁, …, π_n which obviously satisfies
(15-12), and by the theorem of complementary slacks [33] we have:

  ŷ_ij (b_i − a_ij − π_j) = 0   (∀(i, j) ∈ a)   (15-15)
Now, if we have Σ_{j=1, (i,j)∈a}^{n} ŷ_ij > 0 (i = 1, …, m) then:

  ∀i = 1, …, m:  ŷ_ij > 0 and (i, j) ∈ a for some j (1 ≤ j ≤ n)

(because ŷ_ij ≥ 0). Hence from (15-15):

  ∀i = 1, …, m:  π_j = b_i − a_ij for some j with (i, j) ∈ a

In other words, (π₁, …, π_n) satisfy (15-13) as well as (15-12), i.e. they are
a solution to (15-11) and thus to (14-1).
We remark that it is evident by direct inspection that (15-14) always has the unique
solution π_j = min_{(i,j)∈a} (b_i − a_ij). Theorem 15-10 therefore gives sufficient
conditions that (14-1) shall have its principal solution A* ⊗' b, since for the
principal interpretation, component j of A* ⊗' b is just π_j. ∎
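For the principal interpretation this remark can be verified numerically: the dual-LP optimum π_j = min_i (b_i − a_ij) is exactly component j of A* ⊗' b. A minimal sketch, assuming the relation a is the full index set and the data finite (the helper names are mine):

```python
def principal_solution(A, b):
    # component j of A* ⊗' b:  x_j = min_i (b_i - a_ij)
    m, n = len(A), len(A[0])
    return [min(b[i] - A[i][j] for i in range(m)) for j in range(n)]

def is_soluble(A, b):
    # A ⊗ x = b is soluble iff the principal solution attains b
    x = principal_solution(A, b)
    return [max(A[i][j] + x[j] for j in range(len(x)))
            for i in range(len(A))] == b
```

For A = [[0, 2], [1, 0]] and b = [2, 1] the principal solution is [0, 0] and the equation is soluble; replacing b by [0, 2] makes it insoluble.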
Given A ∈ M_mn and c ∈ M_1n, to find y ∈ M_1m such that

  y ⊗ A = c   (15-16)

Evidently we may develop an appropriate version of the theory presented in Chapter 14
and the present chapter, covering principal solutions, routines for finding all
solutions, and criteria for solubility and unique solubility. However, the "row theory"
and "column theory" not only proceed along parallel lines, they positively interact,
as we shall now show.

First we require the following lemma, which formally records what is already implicit
in the proof of Theorem 15-8.
Proof. The first assertion follows from the fact that, in the proof of Theorem 15-8,
there is established a logical equivalence (under the hypothesis of the theorem)
between:

(ii) x₁, …, x_n make B doubly φ-astic. ∎
Theorem 15-12. Let E₁ be a linear blog, let A ∈ M_nn be doubly G-astic, and let
b ∈ M_n1 and c ∈ M_1n both be finite. Then the equation A ⊗ x = b has the
unique solution x = c* if and only if the equation y ⊗ A = c has the unique
solution y = b*.
Proof. From Theorem 15-9, Lemma 15-11 and their "row-variants", it follows that
the following condition is necessary and sufficient both for problem (14-1) to have
the unique solution x = c* and for problem (15-16) to have the unique solution
y = b*. ∎

(i) Equation A ⊗ x = b has unique solution x = c*
Proof. Equivalence of (i) and (ii) was established in Theorem 15-12. Condition
(iii) is obtained from condition (ii) by dualising, and condition (iv) from
condition (i) similarly. ∎
16. LINEAR DEPENDENCE

16-1. Linear Dependence Over E₁. Let E₁ be a given belt, and let A ∈ M_mn and
b ∈ E_m be given. In Chapter 14, we looked upon the equation A ⊗ x = b as the
matrix embodiment of a set of simultaneous equations. There are, however, other
ways of interpreting A ⊗ x = b. If A ∈ M_mn, then A has n columns a(j),
j=1, …, n, each of which is an m-tuple, i.e. a(j) ∈ E_m.
Notice that for m > 1, left linear dependence and right linear dependence over
E₁ do not imply one another, unless E₁ is commutative.

For let E₁ be a non-commutative division belt, and let a₁, c, x ∈ E₁ be chosen so
that c ⊗ x ≠ x ⊗ c, and define:

  a₂ = a₁ ⊗ c;   b₁ = a₁ ⊗ x;   b₂ = a₂ ⊗ x;   a = [a₁; a₂];   b = [b₁; b₂]   (16-2)

We have:

  [b₁; b₂] = [a₁; a₂] ⊗ x   (16-3)

hence b ∈ E₂ is right linearly dependent on a ∈ E₂. Suppose b were also left
linearly dependent on a, i.e. for some y ∈ E₁ we had:

  [b₁; b₂] = y ⊗ [a₁; a₂]   (16-4)

Then a₂ ⊗ x = y ⊗ a₂, i.e. a₁ ⊗ c ⊗ x = y ⊗ a₁ ⊗ c, and a₁ ⊗ x = y ⊗ a₁.

Hence a₁ ⊗ c ⊗ x = a₁ ⊗ x ⊗ c, whence c ⊗ x = x ⊗ c, a contradiction.
Again, linear dependence and dual linear dependence do not imply one another.
Consider for example, in the principal interpretation, the following A ∈ M₃₃ and b ∈ E₃:
10
Z 10
10°1 b
9 10
@
°8
Since no element in the third row is marked, the equations are inconsistent, by
Lemma 15-1.

It is evident that we can extend this example to E_m with m > 3 by "filling" all the
m-tuples with (m−3) components equal to zero.
Perhaps surprisingly, the case m=2 does not always follow the general rule, as
the following result shows:

Theorem 16-1. Suppose E₁ is a linear blog. Let b ∈ E₂ and a(j) ∈ E₂ (j=1, …, n)
all be finite. Then b is linearly dependent on a(1), …, a(n) if and only if b is
dually linearly dependent on a(1), …, a(n).
Proof. Let A ∈ M₂ₙ have a(1), …, a(n) as its columns, and let us begin to apply the
procedure of Section 15-1 to the solution of A ⊗ x = b. We bring all the right-hand
sides to φ, and mark the resulting greatest elements in the columns of the matrix.
By Theorem 15-6, the equation A ⊗ x = b is soluble if and only if in each row some
element is marked as greatest in its column. But this happens if and only if in each
row some element is equal to the least in its column. And by the dual of Theorem 15-6,
this is a necessary and sufficient condition that A ⊗' x = b have at least one
solution. ∎
16-2. The Δ-test. Let E₁ be a given blog. Suppose we are given m-tuples
a(j) ∈ E_m (j=1, …, n) and we wish to determine, for each of them, whether or not it
is linearly dependent on the other (n−1) m-tuples. Obviously we can do this by n
applications of the solubility criterion stated at the end of Section 14-2, with each
a(j) in turn taking the role of b.

The following theorem gives a more convenient mechanical procedure. Let A ∈ M_mn
be the matrix having a(j) as its j-th column (j=1, …, n). Define a matrix Δ ∈ M_nn
as follows:

  {Δ}_ii = −∞   (i=1, …, n)
                                                                      (16-5)
  {Δ}_ij = {A* ⊗' A}_ij   (i=1, …, n; j=1, …, n; i ≠ j)

In other words, Δ is the matrix A* ⊗' A with its diagonal elements overwritten by
−∞'s. We now compare each column of A with the corresponding column of A ⊗ Δ, and
make use of the following theorem:
Theorem 16-2. Let E₁ be a blog. Let the matrix A ∈ M_mn have columns a(j) ∈ E_m
(j=1, …, n ≥ 2), not necessarily all different. Then for each j=1, …, n, the
j-th column of A ⊗ Δ is identical with a(j) if and only if a(j) is linearly dependent
on the other columns of A. The elements of the j-th column of Δ then give suitable
coefficients to express the linear dependence.

In other words, a(j) is linearly dependent on the other columns of A, and the
coefficients of the relation are just the elements {Δ}_kj of the j-th column of Δ.
Conversely, suppose some column of A is linearly dependent on the others. For
simplicity of notation and without loss of generality, suppose a(n) is linearly
dependent on the other columns.

Then B ⊗ z = a(n) is soluble, where the matrix B has a(1), …, a(n−1) as its columns.
Hence, by Theorem 14-3:

  B ⊗ (B* ⊗' a(n)) = a(n)   (16-7)
Define c ∈ E_n by:

  c_j = {B* ⊗' a(n)}_j   (j=1, …, (n−1));   c_n = −∞   (16-8)

Then:

  A ⊗ c = Σ⊕_{j=1}^{n} a(j) ⊗ c_j = Σ⊕_{j=1}^{n−1} a(j) ⊗ c_j   (since c_n = −∞)
        = a(n)   (by (16-7))

Moreover, c_j = {Δ}_jn (j=1, …, n−1) by (16-5), and c_n = −∞ = {Δ}_nn. Hence c is
precisely column n of Δ. ∎
As an example, in the principal interpretation, take the matrix A of (16-10), with
columns a(1), …, a(4); compute A* ⊗' A, and overwrite its diagonal with −∞'s to
obtain Δ (16-11); then form A ⊗ Δ (16-12). Comparing (16-12) with (16-10), we see
that a(2) is linearly dependent on a(1), a(3), a(4). From (16-11) we see that the
coefficients are 1, 0, 0 respectively. Checking:

  a(1) ⊗ 1 ⊕ a(3) ⊗ 0 ⊕ a(4) ⊗ 0 = (3, 5, 3)ᵀ = a(2)
Obviously, the j-th column of Δ does not give the only possible expression of the
linear dependence of a(j) on the other columns, since in general the equation
A ⊗ x = a(j) does not have a unique solution. The Δ-procedure in fact provides the
principal solution in each case (Section 14-2).
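In the principal interpretation the Δ-test is only a few lines of code. The sketch below is my own illustration for matrices with finite entries: A* is the conjugate −A', ⊗' is the (min, +) product, and Δ is built as in (16-5) before A ⊗ Δ is compared with A column by column.

```python
NEG = float('-inf')

def maxplus_mul(A, B):
    # (max, +) matrix product  A ⊗ B
    return [[max(A[i][k] + B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def delta_matrix(A):
    """Δ = A* ⊗' A with the diagonal overwritten by -∞ (16-5).
    Finite entries of A are assumed."""
    m, n = len(A), len(A[0])
    D = [[min(-A[k][i] + A[k][j] for k in range(m)) for j in range(n)]
         for i in range(n)]
    for i in range(n):
        D[i][i] = NEG
    return D

def dependent_columns(A):
    """Indices j for which column a(j) is (max, +) linearly
    dependent on the other columns (Theorem 16-2)."""
    n = len(A[0])
    P = maxplus_mul(A, delta_matrix(A))
    return [j for j in range(n)
            if all(P[i][j] == A[i][j] for i in range(len(A)))]
```

For A = [[0, 1], [2, 3]] each column is a translate of the other, and the test reports both as dependent; for A = [[0, −5], [−5, 0]] it reports neither.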
The following corollary defines the dual form of the Δ-test.

Corollary 16-3. Let E₁ be a blog. Let the matrix A ∈ M_mn have columns a(j)
(j=1, …, n ≥ 2), not necessarily all different. Then for each j=1, …, n, a(j)
is dually (right) linearly dependent upon the other columns of A if and only if
a(j) is identical with the j-th column of A ⊗' Δ', where Δ' is formed by overwriting
the diagonal elements of A* ⊗ A with +∞; and the elements of the j-th column of Δ'
then give suitable coefficients to express the dual linear dependence.
The following extension of the Δ-test has some interest. Suppose a(1), …, a(r)
and a(r+1), …, a(n) are two given sets of m-tuples. For each m-tuple, we wish to
determine whether it is linearly dependent on the m-tuples in the other set. We
define Δ ∈ M_nn as follows: {Δ}_ij = {A* ⊗' A}_ij if a(i) and a(j) belong to
different sets, and {Δ}_ij = −∞ otherwise.

It is then readily seen that a comparison of each a(j) with the corresponding column
of A ⊗ Δ furnishes the required criterion.
Theorem 16-4. Suppose E₁ is a blog other than the three-element blog {−∞, φ, +∞}.
Let m > 2 and k > 1 be arbitrary integers. Then we can always find k finite
m-tuples, no one of which is linearly dependent on the others.

Proof. Assume first of all that m = 3, and let x > φ be an arbitrary finite element.
Then the k 3-tuples of (16-14), each with first component φ, are finite. By applying
the Δ-test we shall show that each is linearly independent of the others. Defining
A and Δ as usual, we find by direct computation:

  {Δ}_ij = (a(i))* ⊗' a(j)   (i ≠ j),   with {Δ}_ij < φ both for i < j and
  for i > j   (16-15)

Hence {Δ}_ij < φ for i ≠ j, and {Δ}_ii = −∞ by definition.
For given j (1 ≤ j ≤ k):

  {A ⊗ Δ}_1j = Σ⊕_{i=1}^{k} ({A}_1i ⊗ {Δ}_ij)
             = Σ⊕_{i=1}^{k} {Δ}_ij   (since {A}_1i = φ (i=1, …, k))
             = (−∞) ⊕ Σ⊕_{i=1, i≠j}^{k} {Δ}_ij
             < φ   (16-16)

Hence the first element (< φ) in each column of A ⊗ Δ is different from the first
element (φ) in the corresponding 3-tuple in (16-14). Hence by Theorem 16-2, no one
of these 3-tuples is linearly dependent on the others.
For m > 3, the result follows by extending the 3-tuples in any arbitrary way by
(m−3) extra rows. ∎
The assumption that E₁ is not the three-element blog is necessary in the above
theorem in order that we can find φ < x < +∞ (so that x¹, x², … are all different).
But even if E₁ = {−∞, φ, +∞} we can produce a dimensional anomaly by using infinite
elements, as the following theorem shows.
Theorem 16-5. Suppose E₁ = {−∞, φ, +∞}. Let m > 2. Then we can always find (at
least) (m² − m) m-tuples, no one of which is linearly dependent on the others.

Proof. Consider the set S of (m² − m) m-tuples each of which has exactly one
component equal to +∞, exactly one component equal to −∞, and all other components
equal to φ. Suppose b, a(1), …, a(k) ∈ S are all different and that, if possible:

  b = Σ⊕_{r=1}^{k} a(r) ⊗ λ_r   with λ_r ∈ E₁ (r=1, …, k)   (16-17)
In the following we make free use of Proposition 4-3.

Since b has at least one finite component, not all λ_r (r=1, …, k) can be −∞.
Evidently we may suppose that from (16-17) we have dropped all terms in which λ_r is
−∞, so λ_r = φ or λ_r = +∞ in (16-17). But if some λ_r = +∞ then that a(r) ⊗ λ_r has
(since a(r) ∈ S) (m−1) components +∞; but b ≥ a(r) ⊗ λ_r from (16-17), whence b must
have at least (m−1) components +∞, contradicting b ∈ S. Hence every λ_r = φ in
(16-17), whence:

  b = Σ⊕_{r=1}^{k} a(r)
Using Proposition 4-3 it is now clear that a(1), …, a(k) all have their −∞ in
the same coordinate position as b does, and that one of a(1), …, a(k) (say a(j))
has its +∞ in the same position as b does. But this means that a(j) = b,
contradicting the assumption that b was different from a(1), …, a(k).

Hence no linear dependence can exist. ∎
Since every blog E₁ contains {−∞, φ, +∞} isomorphically, it is evident that the
dimensional anomaly in Theorem 16-5 can be extended to every blog.

Theorems 16-4 and 16-5 both assume m > 2. The following result shows that the
situation when m = 2 is much more intuitive, at least when E₁ is linear.
Let the given finite 2-tuples be [x_r; y_r] (r = 1, …, k). Since E₁ is linear we
can choose to number these 2-tuples in such a way that:

  (16-18)

  A ⊗ (A* ⊗' [x_r; y_r]) = [x_r; y_r]

Hence, the equation:

  A ⊗ z = [x_r; y_r]

is soluble for each r=1, …, k; in other words, each [x_r; y_r] is linearly
dependent on the two extreme 2-tuples.
Corollary 16-7. Suppose E₁ is a linear blog. Let there be given a set S containing
k ≥ 2 finite 2-tuples. Then we can find u, v ∈ S such that each 2-tuple in S is both
linearly dependent and dually (right) linearly dependent on u, v. Moreover, if E₁ is
commutative, each 2-tuple in S is also left linearly dependent and dually left
linearly dependent on u, v.

Proof. Follows directly from Theorems 16-1 and 16-6, and the property of
commutativity. ∎
16-4. Strong Linear Independence. In conventional linear algebra, a number of
different, but logically equivalent, definitions are possible of the notion of linear
independence of a set of elements of a vector space. However, in [47] we have
formulated for minimax algebra analogous definitions of various alternative forms of
linear independence of elements of a band-space, and shown that they are not
logically equivalent, although certain logical implications may be demonstrated
among them.

It is clear, then, that we must take care what definition we employ if we hope to
develop a theory of rank and dimension applicable to the principal interpretation. For
example, the results of the previous section show that the apparently logical step of
defining linear independence as the mere negation of linear dependence does not lead
to a satisfactory theory of dimension. These considerations motivate the following
definition.
Let E₁ be a blog and let a(1), …, a(k) ∈ E_n (k ≥ 1). We shall say that
a(1), …, a(k) are strongly linearly independent (SLI) if there is at least one
finite b ∈ E_n having a unique expression of the form:

  b = Σ⊕_{r=1}^{t} a(j_r) ⊗ λ_{j_r}   (16-19)

with λ_{j_r} ∈ E₁ (r=1, …, t) and 1 ≤ j_r ≤ k (r=1, …, t).
Lemma 16-8. Let E₁ be a blog with group G. Let a(1), …, a(k), b ∈ E_n (k ≥ 1)
be such that b is finite and has a unique expression of the form (16-19). Then t=k;
j₁=1, …, j_t=k; λ_{j_r} ∈ G (r=1, …, t); and A is doubly G-astic, where A ∈ M_nk is
the matrix whose columns are a(1), …, a(k) in that order.
we may rearrange (16-20) to give another expression for b in the form (16-19),
contradicting the uniqueness of the given form. Hence t=k, and j₁=1, …, j_t=k.

Moreover no λ_{j_r} can be −∞ in (16-19), for if k=1 this would contradict the
finiteness of b, and for k>1 we could drop the particular term a(j_r) ⊗ λ_{j_r}
from (16-19), to give another such expression for b, contradicting uniqueness.
Hence no component of any of a(1), …, a(k) is +∞, otherwise the same component of
b would be +∞.

Again, none of a(1), …, a(k) has all its components −∞, for if k=1 this would
contradict the finiteness of b, and for k>1 we could drop that term from (16-19) to
give another expression for b, contradicting uniqueness. And no one component can
be −∞ in all of a(1), …, a(k), else the same component would be −∞ in b,
contradicting the finiteness of b.

Hence A is doubly G-astic, and so A* ⊗' b is finite. Hence by Theorem 14-3,
each λ_{j_r} < +∞ in (16-19) and so each λ_{j_r} is finite. ∎
The following is a simple, but useful, reformulation of some of the information
in Lemma 16-8:

Corollary 16-9. Let E₁ be a blog and let a(1), …, a(n) ∈ E_m for given integers
m, n ≥ 1. Then a(1), …, a(n) are SLI if and only if there exists a finite m-tuple
b ∈ E_m such that the equation A ⊗ x = b is uniquely soluble, where A ∈ M_mn is
the matrix whose columns are a(1), …, a(n). ∎
For a given belt E₁, define linear independence as the mere negation of linear
dependence: a(1), …, a(n) ∈ E_m are linearly independent exactly when no one of them
is linearly dependent on the others. It is natural to enquire how linear independence
relates to strong linear independence.
Theorem 16-11. Let E₁ be a blog and a(1), …, a(k) ∈ E_n. For a(1), …, a(k) to be
linearly independent it is sufficient, but not necessary, that a(1), …, a(k) be SLI.

Proof. If a(1), …, a(k) are SLI then there is a finite b ∈ E_n with a unique
expression (16-19) in which, according to Lemma 16-8, all of a(1), …, a(k) occur.
So if one of a(1), …, a(k) were linearly dependent on the others it could be
substituted out of (16-19) to give another such expression for b, contradicting
uniqueness.
However, for any linear blog E₁ there exist 3-tuples a(1), a(2), a(3) ∈ E₃, built
from the components φ and −∞ only, with the following properties. On the one hand,
the Δ-test shows (Theorem 16-2) that a(1), a(2), a(3) are linearly independent. On
the other hand, for any finite b ∈ E₃ define the matrix C = [({b}_i)^{-1} ⊗ {a(j)}_i].
Then C has the property that each column-maximum occurs twice in some row, showing
by Theorem 15-7 that the equation A ⊗ x = b is not uniquely soluble, where A is the
matrix whose columns are a(1), a(2), a(3). Hence by Corollary 16-9, a(1), a(2), a(3)
are not SLI.
We conclude this chapter with some definitions giving duals and analogues of
the concept SLI.

When we wish to emphasize that the coefficients λ multiply from the right in
(16-19) we shall say right SLI rather than merely SLI; and by obvious analogy we may
define the concept left SLI.

If we replace the displayed formula in (16-19) by:

  b = Σ⊕'_{r=1}^{t} a(j_r) ⊗' λ_{j_r}

then the definition, so modified, will be taken as that of the concept right dual SLI,
with an obvious analogous definition for left dual SLI.
17. RANK OF MATRICES

A is left row-regular if and only if there exists a finite n-tuple c ∈ M_1n such
that the equation y ⊗ A = c is uniquely soluble.
Theorem 17-2. Let E₁ be a linear blog and let A ∈ M_nn for given integer n ≥ 1. Then
the following conditions are equivalent:

If any one of these conditions holds then A is doubly G-astic (where G is the group
of E₁). And if E₁ is commutative then all four conditions are equivalent.

Proof. Suppose (i) holds. Then by Corollary 16-9 there exists a finite n-tuple
b ∈ E_n such that the equation A ⊗ x = b is uniquely soluble. And by Lemma 16-10,
A is doubly G-astic. Let c* ∈ M_n1 be the unique solution of the equation A ⊗ x = b.
So c ∈ M_1n is finite, and according to Theorem 15-12 the equation y ⊗ A = c has the
unique solution y = b*. Hence by Proposition 17-1, A is left row-regular, so (ii)
holds.

The converse follows similarly. The equivalence of (iii) and (iv) follows from
the equivalence of (i) and (ii) for the transposed A' of A. The equivalence of all
conditions when E₁ is commutative is then trivial. ∎
By dual arguments, we can prove:

Proposition 17-3. Let E₁ be a linear blog and let A ∈ M_nn. Then the following
conditions are equivalent:

When E₁ is a linear blog, we shall use the single term regular to cover (i) and (ii)
of Theorem 17-2, and dually regular to cover (i) and (ii) of Proposition 17-3.
equal to r. The epithet right will be dropped when it is not needed for emphasis.
Similarly we may define left column-rank. We define the right row-rank and left
row-rank of A as the right column-rank and left column-rank respectively of A', the
transposed of A. Finally, we make dual definitions of all these ranks in the obvious
way.

Before proving relationships among these ranks, we need one more definition. Let
us say that a given matrix A ∈ M_mn has φ-astic rank equal to r (integral) if the
following is true for k=r but not for k>r:

There are x ∈ E_n and y ∈ E_m, both finite, such that B ∈ M_mn is doubly φ-astic
and contains a (k×k) strictly doubly φ-astic submatrix, where:

  {B}_ij = y_i ⊗ {A}_ij ⊗ x_j   (i=1, …, m; j=1, …, n)   (17-1)
Proof. If A had any row or column consisting only of −∞'s, or containing +∞, then,
from (17-1), so would B. But B is doubly φ-astic. So A is doubly G-astic (by Lemma
12-1).

Now suppose without loss of generality that B contains an (r×r) strictly doubly
φ-astic submatrix in its first r columns. Let D ∈ M_mr consist of the first r columns
of A. Arguing as above, it is clear that D is doubly G-astic. Applying Theorem 15-9
and Corollary 16-9 (with D in the role of A) we infer that the columns of D are
SLI. ∎
Lemma 17-6. Let E₁ be a linear blog with group G, and suppose that A ∈ M_mn is doubly
G-astic and contains a set of r columns which are SLI. Then A has φ-astic rank
equal to (at least) r.

Proof. Let D ∈ M_mr have as its columns the first r columns of A, assumed (without
loss of generality) to be SLI. By Lemma 16-8, D is doubly G-astic. Applying
Theorem 15-9 and Corollary 16-9 (with D in the role of A) we infer that there exist
finite elements x₁, …, x_r and y₁, …, y_m such that C ∈ M_mr is doubly φ-astic and
contains an (r×r) strictly doubly φ-astic submatrix, where:
  i.e.  x_j^{-1} = Σ⊕_{i=1}^{m} ( y_i ⊗ {A}_ij )   (j=r+1, …, n)   (17-4)

Since E₁ is linear, the maximum is attained for some i_j (1 ≤ i_j ≤ m) for each
j=r+1, …, n on the right-hand side of (17-4) and we may write:

  x_j^{-1} = y_{i_j} ⊗ {A}_{i_j j}   (j=r+1, …, n), or equivalently:

  φ = y_{i_j} ⊗ {A}_{i_j j} ⊗ x_j ≥ y_i ⊗ {A}_ij ⊗ x_j   (i=1, …, m;
  j=r+1, …, n)   (17-5)
Now define B as in (17-1) using x₁, …, x_n and y₁, …, y_m as in (17-2) and (17-3).
Then the first r columns of B (which constitute the matrix C) are φ-astic by (17-2),
and the remaining columns of B are φ-astic by (17-5). Moreover the rows of B are
φ-astic because they are obtained by adjoining φ-astic columns to the row-φ-astic
matrix C. Hence B is doubly φ-astic and contains by (17-2) a strictly doubly φ-astic
(r×r) submatrix. ∎
Theorem 17-7. Let E₁ be a linear blog with group G and let A ∈ M_mn be doubly
G-astic. Then the following statements are all equivalent:

Proof. The equivalence of (i) and (ii) evidently follows from Lemmas 17-5 and 17-6.
The equivalence of (i) and (iii) follows by analogous reasoning, noting the
row-column symmetry of condition (17-1). And (iv) and (v) are duals of (iii) and
(ii) respectively. ∎
In view of Theorem 17-7, we may (for doubly G-astic A) simply use the expression
rank of A for any of the ranks (i) to (iii), and dual rank of A for the corresponding
dual quantities. Thus Theorem 17-7 asserts the equality of the rank of A with the
dual rank of A*.
Corollary 17-8. Let E₁ be a linear blog with group G and let A ∈ M_mn be doubly
G-astic. Then the following statements are all equivalent:

Moreover, if E₁ is commutative then statements (i) to (iv) are all true if and only
if A has rank r.

Proof. The equivalence of (i) to (iv) follows from Theorem 17-7 applied to A', the
transposed of A. Moreover, if E₁ is commutative then for given x₁, …, x_n and
y₁, …, y_m we have:

  y_i ⊗ {A}_ij ⊗ x_j = x_j ⊗ {A'}_ji ⊗ y_i   (i=1, …, m; j=1, …, n)

It then follows from (17-1) that A has φ-astic rank r if and only if A' has φ-astic
rank r. ∎
Theorem 17-9. Let E₁ be a linear blog with group G and let A ∈ M_mn. Then there is an
integer r such that A has φ-astic rank r, if and only if A is doubly G-astic. And r
is then an integer satisfying 1 ≤ r ≤ min(m, n).

If s=1 then some column of A, say a(1), is finite. The equation a(1) ⊗ x = a(1)
then clearly has the unique solution x = φ. Hence A has column-rank at least equal
to one, and so has φ-astic rank at least equal to one. If s>1 then some s columns of
A, say a(1), …, a(s), have the property that they, but no proper subset, constitute
a doubly G-astic (m×s) matrix. Let C be the matrix [a(1), …, a(s)].
  (17-6)

In (17-6), the first s rows of C constitute an (s×s) "diagonal matrix" having finite
elements g₁, …, g_s on the main diagonal and −∞ elsewhere; and D is evidently
row-G-astic.

Define f ∈ E_s by {f}_j = g_j^{-1} (j=1, …, s). So f is finite, and so is D ⊗ f,
since D is row-G-astic (Section 12-2). The equation:

  (17-7)
clearly has the solution x = f; we assert that this is the only solution.

For any other solution v of (17-7) must differ from f in at least one coordinate
position: {v}_j ≠ g_j^{-1} for some j (1 ≤ j ≤ s). Since E₁ is linear, either
{v}_j > g_j^{-1} or {v}_j < g_j^{-1}. But then {C ⊗ v}_j > φ or {C ⊗ v}_j < φ
respectively, and v is not a solution to (17-7).

Hence (17-7) has a unique solution, implying by Corollary 16-9 that the columns
of C are SLI. So A has column-rank at least equal to s, and so has φ-astic rank at
least equal to s.
Hence each doubly G-astic matrix A ∈ M_mn has φ-astic rank r ≥ 1; and since r must,
by Theorem 17-7, represent a possible number of rows and a possible number of columns
of A, we infer that r ≤ min(m, n). ∎
We are now in a position to show that the dimensional anomalies which, according
to Theorems 16-4 and 16-5, arise in relation to linear dependence, are avoided in
relation to strong linear independence.

Theorem 17-10. Let E₁ be a linear blog, and n ≥ 1 a given integer. Then for each
integer m (1 ≤ m ≤ n) we can find m n-tuples a(j) ∈ E_n (j=1, …, m) which are SLI;
but this is impossible for m>n.

Proof. For m ≤ n, define a matrix A ∈ M_nm whose first m rows constitute the "unit
matrix" I_m, having diagonal elements equal to φ and off-diagonal elements equal to
−∞; if m<n, let all other elements of A be equal to φ.
On the other hand, if given n-tuples a(j) ∈ E_n (j=1, …, m) are SLI then the
matrix A = [a(1), …, a(m)] has column-rank r=m, and so has φ-astic rank r=m. But
by Theorem 17-9, r ≤ n. Hence m ≤ n. ∎
18. SEMINORMS ON E_n

By making use of the following relations for all A ∈ M_mn; x, y ∈ E_n; λ ∈ E₁:

  A ⊗ (x ⊕ y) = (A ⊗ x) ⊕ (A ⊗ y)

  A ⊗ (x ⊗ λ) = (A ⊗ x) ⊗ λ

we infer:

Proposition 18-1. If E₁ is a given belt then the right column-space of any matrix is
a right band-space over E₁. ∎
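In the principal interpretation these two relations amount to the familiar facts that maxima distribute over maxima and that adding a constant commutes with taking maxima. A quick numerical check (my own illustration):

```python
def mp_matvec(A, x):
    # A ⊗ x in the principal interpretation: (max, +)
    return [max(a + xi for a, xi in zip(row, x)) for row in A]

A = [[0, 3, -2], [1, -4, 5]]
x = [2, -1, 0]
y = [-3, 4, 1]
lam = 7

# A ⊗ (x ⊕ y) = (A ⊗ x) ⊕ (A ⊗ y)
assert mp_matvec(A, [max(u, v) for u, v in zip(x, y)]) == \
       [max(u, v) for u, v in zip(mp_matvec(A, x), mp_matvec(A, y))]

# A ⊗ (x ⊗ λ) = (A ⊗ x) ⊗ λ
assert mp_matvec(A, [xi + lam for xi in x]) == \
       [u + lam for u in mp_matvec(A, x)]
```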
Evidently, we may interpret the diagrams of Figs 8-3 and 8-4 as presenting systems of
isotone and antitone bijections among the various row- and column-spaces of a given
matrix A and its conjugate A*.

Now suppose E₁ is a belt with identity φ (axiom X8), and let δ ∈ E₁ satisfy δ ≤ φ.
The column-space of I_n(δ) we shall call E_n(δ). Referring back to Theorem 5-8
and the first paragraph of its proof, we infer the following result.

Proposition 18-2. Let E₁ be a belt. If, for n>1, M_nn satisfies axiom X8, then E₁
satisfies axiom X10. The identity element of M_mm is then I_m(θ), where θ is the
(unique) null element of E₁, and E_m(θ) = E_m, for all integers m ≥ 1. ∎
  τ(x) = x ⊕ x*   (18-5)

is an E₁-seminorm on E₁.

  (x ⊕ x*) ⊕ (y ⊕ y*)

The seminorm defined by (18-4) may be called the absolute value seminorm; for the
principal interpretation, indeed, it yields exactly the absolute value |x|.
•
By taking each 'to in Proposition 184 to be the absolute value seminorm on El and ~
to·be the summat~on f e' we derive the absolute maximum seminorm on ~n :
i=l
*
.Ln
1.=1
e{~~} i (187)
Theorem 18-6. Let E₁ be a linear self-conjugate belt. For given integers s, t ≥ 1
let given indices i₁, …, i_s and j₁, …, j_t satisfy 1 ≤ i_α ≤ n and 1 ≤ j_β ≤ n,
and let Φ: (E₁)^{t+1} → E₁ be isotone. Then τ is an E₁-seminorm on E_n, where for
each x ∈ E_n:

  τ(x) = Φ({x}_{i₁} ⊕ … ⊕ {x}_{i_s}, ({x}_{j₁})*, …, ({x}_{j_t})*)   (18-10)

  ≤ Φ({x ⊕ y}_{i₁} ⊕ … ⊕ {x ⊕ y}_{i_s}, ({x}_{j₁})*, …, ({x}_{j_t})*)
  (because Φ is isotone)

Similarly if {x ⊕ y}_{i₁} ⊕ … ⊕ {x ⊕ y}_{i_s} = {x}_{i₁} ⊕ … ⊕ {x}_{i_s}, we
obtain (18-11),

which is clearly a particular case of (18-10). Equivalent forms for p(x) are given
in (18-15), which motivates the name "range seminorm". The value p(x) will be called
the range of x.
Finally, for the principal interpretation we may define the deviation seminorm:

  max_{i=1,…,n} {x}_i − (1/n) Σ_{j=1}^{n} {x}_j   (18-16)

which measures the excess of the greatest coordinate of x over the mean value of the
coordinates. Since the "times 1/n" function is monotone, it is easy to see that
(18-16) is a special case of (18-10).
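For the principal interpretation both seminorms are elementary to compute. A minimal sketch (function names mine) of the range seminorm and the deviation seminorm (18-16):

```python
def range_seminorm(x):
    # p(x): greatest coordinate minus least coordinate
    return max(x) - min(x)

def deviation_seminorm(x):
    # max_i x_i - (1/n) * sum_j x_j   (18-16)
    return max(x) - sum(x) / len(x)

# both are scale-free: adding a constant λ to every coordinate
# (x ⊗ λ in the principal interpretation) leaves them unchanged
x = [0, 6, 0]
assert range_seminorm([xi + 7 for xi in x]) == range_seminorm(x)
assert deviation_seminorm([xi + 7 for xi in x]) == deviation_seminorm(x)
```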
Theorem 18-8. Let E₁ be a pre-residuated belt satisfying axiom X12. Then for each
integer n, the function p defined on E_n by (18-14) is scale-free.

  p(x ⊗ λ) = ( Σ⊕_{j=1}^{n} {x ⊗ λ}_j ) ⊗ ( Σ⊕'_{j=1}^{n} {x ⊗ λ}_j )^{-1}
  (by (18-14))
           = p(x)   (by axioms X6, N3, etc.)

It is not too difficult to show similarly that the seminorms (18-13) and
the deviation norm (18-16) are also scale-free. However, the maximum seminorm
(18-4) and the absolute maximum seminorm (18-7) are not scale-free.
Theorem 18-9. Let τ be a right scale-free E₁-seminorm defined on a right band-space
V over a belt E₁, and let δ ∈ E₁ be given. Then the set:

  S(τ, δ) = {x | x ∈ V and τ(x) ≤ δ}

Hence x ⊕ y and x ⊗ λ belong to S(τ, δ), which is therefore a right (sub)space over
E₁. ∎

Proposition 18-10. Let E₁ be a given belt and δ ∈ E₁ a given element. Let n ≥ 1 be a
given integer and τ a scale-free E₁-seminorm on E_n. If a(1), …, a(n) ∈ E_n all lie
in S(τ, δ) then so do all elements of the space generated by a(1), …, a(n). ∎
When τ is the range seminorm and E₁ is a linear blog, we can identify the subspace
S(τ, δ) exactly. First, by direct computations using (18-14) we may confirm the
following results.

Proposition 18-11. Let E₁ be a linear blog, n ≥ 1 a given integer and p the range
seminorm defined on E_n. For any x ∈ E_n, either p(x) ≥ φ or p(x) = −∞.
Specifically: p(x) = φ if and only if x = (λ, …, λ)ᵀ for some finite λ ∈ E₁. ∎
Theorem 18-12. Let E₁ be a linear blog and let δ ≥ φ be a given element of E₁. Let
n ≥ 1 be a given integer and let p be the range seminorm defined on E_n. Then (with
the notation of Section 18-1 and (18-18)):

  S(p, δ) = E_n(δ*)
Proof. From the data, E₁ is linear, pre-residuated and satisfies axiom X12, so
Theorems 18-6, 18-8 and 18-9 apply. Now by direct use of (18-14) we find that each
column of the matrix I_n(δ*) lies in the subspace S(p, δ), and so therefore does each
element of E_n(δ*), the column-space of I_n(δ*).

Conversely, let x ∈ S(p, δ). Now, if δ = +∞ we have, using Proposition 18-2:

  x ∈ S(p, δ) ⊆ E_n = E_n(θ) = E_n(δ*).
We must now consider the case where δ is finite. If p(x) = −∞ then by
Proposition 18-11, x has all components equal to −∞, or else all equal to +∞. In
either case I_n(δ*) ⊗ x = x, so x lies in the column-space of I_n(δ*). The only
remaining possibility (with δ ≥ φ and finite) is that p(x) is finite, so x is finite
by Proposition 18-11. Hence we may write:

  p(x) = ( Σ⊕_{j=1}^{n} {x}_j ) ⊗ ( Σ⊕'_{i=1}^{n} {x}_i )^{-1} ≤ δ   (18-19)
Proof. The set T = {x | x ∈ E_n and −∞ < p(x) < δ} is not empty; in fact the
n-tuple with all components equal to φ lies in T. We remark that I_n(δ*) is doubly
G-astic and strictly doubly φ-astic, and that each x ∈ T is finite by
Proposition 18-11, since −∞ < p(x) < +∞.

Hence for x ∈ T we may write (18-19) with strict inequality, and deduce (18-20).
But (18-20) cannot hold with equality for any combination of i, j, else we could
close w.r.t. indices i and j and deduce (18-19) with equality. Hence:

Dualising:

From (18-26) and (18-27) we see that B is a strictly doubly φ-astic (n×n) matrix,
showing by Theorem 15-9 and Corollary 16-9 that x has unique expression as a linear
combination of the columns of I_n(δ*), and that these columns are accordingly SLI. ∎
19. SOME MATRIX SPACES

Proposition 19-1. The range seminorm and column-range seminorm provide scale-free
E₁-seminorms on the right band-space (M_mn, ⊕) over a linear blog E₁, for given
integers m, n ≥ 1. ∎
For example, in the principal interpretation, consider a matrix A whose column
ranges are respectively 6, 12, 7, 8, so that p_c(A) = 12; if the greatest element of
A is 8 and the least is −6, then p(A) = 14. This illustrates the fairly evident fact
contained in the next result.
Theorem 19-2. Let E₁ be a linear blog and let p, p_c be respectively the range and
column-range seminorms defined on M_mn for given integers m, n ≥ 1. Then for every
A ∈ M_mn:

  p_c(A) ≤ p(A)
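For the principal interpretation Theorem 19-2 is easy to check numerically; a minimal sketch (helper names mine):

```python
def p(A):
    # range seminorm of a matrix: greatest element minus least element
    flat = [v for row in A for v in row]
    return max(flat) - min(flat)

def p_c(A):
    # column-range seminorm: greatest of the column ranges
    ranges = [max(row[j] for row in A) - min(row[j] for row in A)
              for j in range(len(A[0]))]
    return max(ranges)

A = [[5, 1], [4, 2], [7, 6]]
assert p_c(A) <= p(A)       # here the column ranges are 3 and 5,
                            # so p_c(A) = 5, while p(A) = 7 - 1 = 6
```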
Of course, the results of Sections 18-2 and 18-3 carry over to (M_mn, ⊕) regarded
as a right band-space over E₁; but further results hold for matrices, arising from
our ability to form matrix products.
Theorem 19-3. Let E₁ be a linear blog and let p_c denote indifferently the
column-range seminorms defined on M_mn, M_np and M_mp for given integers
m, n, p ≥ 1. Let A ∈ M_mn and X ∈ M_np. Then:

  p_c(A ⊗ X) ≤ p_c(A)

Proof. Each column of A belongs to S(p, p_c(A)), the space of m-tuples having range
not exceeding p_c(A). By Proposition 18-10, therefore, all columns of A ⊗ X do so.
Hence p(z(j)) ≤ p_c(A) for all columns z(j) (j=1, …, p) of A ⊗ X. Closing w.r.t.
index j:

  p_c(A ⊗ X) = Σ⊕_{j=1}^{p} p(z(j)) ≤ p_c(A)
The property of p_c asserted by Theorem 19-3 evidently generalises (18-17), and
we shall say that Theorem 19-3 asserts that p_c is (right) factor-free.

We note that Theorem 19-3 gives us yet another insolubility criterion for the
equation A ⊗ x = b. For example, in the principal interpretation:
if p_c(A) = 10 while p(b) = 11, then b cannot lie in the column-space of A, and the
equation A ⊗ x = b is insoluble.
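This criterion costs only a scan of A and b; the equation itself need never be attempted. A sketch of the check in the principal interpretation (my own illustration, with hypothetical data):

```python
def col_range_seminorm(A):
    # p_c(A): greatest of the column ranges of A
    return max(max(row[j] for row in A) - min(row[j] for row in A)
               for j in range(len(A[0])))

def range_seminorm(b):
    return max(b) - min(b)

# every column of this A has range 0, so p_c(A) = 0 ...
A = [[0, 3], [0, 3]]
b = [0, 11]              # ... but p(b) = 11 > p_c(A)
assert range_seminorm(b) > col_range_seminorm(A)

# and indeed the principal solution fails to attain b:
x = [min(b[i] - A[i][j] for i in range(2)) for j in range(2)]
assert [max(A[i][j] + x[j] for j in range(2)) for i in range(2)] != b
```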
Theorem 19-4. Let E₁ be a commutative linear blog, and let p denote indifferently
the range seminorms defined on M_mn, M_np and M_mp for given integers m, n, p ≥ 1.
Let A ∈ M_mn and B ∈ M_np, with p(A) < +∞ and p(B) < +∞. Then:

  p(A ⊗ B) ≤ p(A) ⊗ p(B)
Proof. If either (or both) of A, B has all elements equal to −∞ then so does A ⊗ B,
so certainly p(A ⊗ B) < +∞. If A, B are both finite then so is A ⊗ B, and so
p(A ⊗ B) < +∞. If one of A, B has all elements equal to +∞ and the other does not
have all elements equal to −∞, then A ⊗ B has all elements equal to +∞ and so
p(A ⊗ B) = −∞. These exhaust (by Proposition 18-11) all the cases in which
p(A), p(B) < +∞. Hence:

  (19-3)

Now the four factors on the right side of (19-3) are all finite, since pairwise they
form the finite products {A ⊗ B}_IJ, ({A ⊗ B}_RT)*. But multiplication is self-dual
for finite elements and commutative by hypothesis; hence (19-3) gives:

  ≤ ( Σ⊕_{i=1,…,m; k=1,…,n} {A}_ik ) ⊗ ( Σ⊕'_{r=1,…,m; t=1,…,n} {A}_rt )*
    ⊗ ( Σ⊕_{k=1,…,n; j=1,…,p} {B}_kj ) ⊗ ( Σ⊕'_{t=1,…,n; s=1,…,p} {B}_ts )*
The necessity of the condition p(A), p(B) < +∞ in the proof of the foregoing theorem
is illustrated, for the principal interpretation, by matrices A, B with entries drawn
from {−∞, +∞}, for which A ⊗ B contains both +∞ and −∞, so that p(A ⊗ B) = +∞.
Lemma 19-5. Let E₁ be a linear self-conjugate belt. Then for any given integers m, n ≥ 1, the co-range function p̄ defines an E₁-seminorm on E_mn (and in particular on E_n). If E₁ is pre-residuated and satisfies axiom X12 then:

(i) p̄ is left scale-free, in the sense that p̄(λ ⊗ x) ≤ p̄(x) for each λ ∈ E₁ and x ∈ E_n.

And if E₁ is a blog and p denotes the range seminorm defined on E_n, then for each x ∈ E_n there hold:

(ii) p(x) and p̄(x) both equal −∞, or are both finite, or both equal +∞;

(iii) p(x) = φ if and only if p̄(x) = φ.

Proof. All assertions up to and including (i) follow by analogy with the properties of the function p. Assertion (ii) follows from the fact that we may regard p̄(x) as being of the form b ⊗ a, where p(x) is a ⊗ b (a, b ∈ E₁); it easily follows from the basic properties of blogs (in particular from Proposition 4-3) that a ⊗ b and b ⊗ a both equal −∞, or are both finite, or both equal +∞, for any a, b ∈ E₁. Assertion (iii) follows from the fact that a ⊗ b = φ if and only if b = a⁻¹ if and only if b ⊗ a = φ. ∎
Theorem 19-6. Let E₁ be a linear blog and for given integers m, n ≥ 1 let J_mn denote the set {A | A ∈ E_mn and p(A) < +∞}, where p denotes the range seminorm on E_mn; J_mn may equivalently be characterised as the set {A | A ∈ E_mn and p̄(A) < +∞}, where p̄ denotes the co-range seminorm on E_mn. Then the (⊕, ⊗, E₁)-homrep statements are both true.

Proof. Suppose A, B ∈ J_mn. Since E₁ is linear, p(A) ⊕ p(B) equals either p(A) or p(B), whence, using the sublinearity of p and Theorem 19-4, p(A ⊕ B) and p(A ⊗ B) are both < +∞. The rest of the proof of the (⊕, ⊗, E₁)-homrep statement is standard, and the (E₁, ⊕, ⊗)-statement follows by the appropriate "left-right variant" of the foregoing argument, using p̄ in the place of p. ∎
We now introduce another seminorm - the row-range p_R, defined for a given matrix A ∈ E_mn as follows:

p_R(A) = Σ⊕_{i=1,...,m} p̄(r(i))

where [r(1), ..., r(m)] is the transpose of the matrix A, and p̄ is the co-range seminorm defined on E_n (= E_1n). For example, in the principal interpretation, suppose:

A = (a certain 3×4 real matrix)        (19-5)
The ranges of the columns of A are 5, 9, 14, 0 respectively, so p_C(A) = 14. The (co-)ranges of the rows of A are 11, 12, 9 respectively, so p_R(A) = 12.
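Since the entries of (19-5) are not legible in this copy, the sketch below substitutes an assumed 3×4 matrix chosen to reproduce exactly the stated column ranges (5, 9, 14, 0) and row ranges (11, 12, 9); the helper names are our own.

```python
def ranges(rows):
    """Range (max - min) of each tuple in `rows`, principal interpretation."""
    return [max(r) - min(r) for r in rows]

# Assumed matrix consistent with the text's stated ranges (not the book's (19-5)).
A = [[4, 0, 11, 5],
     [9, 2, 14, 5],
     [5, 9, 0, 5]]

p_C = max(ranges(list(zip(*A))))   # column-range seminorm
p_R = max(ranges(A))               # row-range seminorm
print(p_C, p_R)                    # 14 12

# Lemma 19-9 below: conjugation (here, negated transpose) swaps the two seminorms.
A_conj = [[-a for a in col] for col in zip(*A)]
print(max(ranges(list(zip(*A_conj)))) == p_R)   # True
```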
By analogy with Proposition 19-1 and Theorems 19-2 and 19-3, we record the following.

Theorem 19-8. Let E₁ be a linear blog and μ a given element of E₁. For any given integers m, n ≥ 1, define the sets:

T_mn = {A | A ∈ E_mn and p_C(A) ≤ μ},   U_mn = {A | A ∈ E_mn and p_R(A) ≤ μ}

where p_C, p_R are respectively the column-range and row-range seminorms defined on E_mn. Then the (⊕, ⊗, E₁)-homrep statements both hold.

Proof. T_mn always contains the matrix with all elements equal to φ, and so is non-empty. That (T_mn, ⊕) is a right bandspace over E₁ follows from Theorem 18-9; that (T_mm, ⊕, ⊗) is a belt over which (T_mn, ⊕) is a left bandspace follows from Theorem 19-3. The rest of the proof of the (⊕, ⊗, E₁)-homrep statement is standard, and the argument for U_mn is similar. ∎
If A is as in (19-5), we may form its conjugate A* (in the principal interpretation, the negated transpose, a 4×3 matrix). We find p(A*) = 14 = p(A). Furthermore, we find the ranges of the columns of A* to be 11, 12, 9 respectively, and the (co-)ranges of the rows of A* to be 5, 9, 14, 0 respectively. This illustrates the following result.
Lemma 19-9. Let E₁ be a linear blog and let p_C, p_R, p, p̄ have their usual significance. Then p_C(A*) = p_R(A) and p_C(A) = p_R(A*); and if E₁ is commutative, p(A*) = p(A).

Proof. We have:

p_C(A*) = Σ⊕_{i=1,...,m} ((Σ⊕_{j=1,...,n} ({A}_ij)*) ⊗ (Σ⊕_{j=1,...,n} {A}_ij))
        = Σ⊕_{i=1,...,m} p̄(r(i))

where p̄ is the co-range seminorm and [r(1), ..., r(m)] the transpose of A. Hence p_C(A*) = p_R(A) and similarly p_C(A) = p_R(A*). Now:

p(A*) = (Σ⊕_{s=1,...,n; r=1,...,m} {A*}_sr) ⊗ (Σ⊕'_{j=1,...,n; i=1,...,m} {A*}_ji)*
      = (Σ⊕'_{r=1,...,m; s=1,...,n} {A}_rs)* ⊗ (Σ⊕_{i=1,...,m; j=1,...,n} {A}_ij)
      = (Σ⊕_{i=1,...,m; j=1,...,n} {A}_ij) ⊗ (Σ⊕'_{r=1,...,m; s=1,...,n} {A}_rs)*        (if E₁ is commutative)
      = p(A)  ∎
Lemma 19-9 allows us to prove the perhaps surprising result that our seminorms p_C, p_R and p enjoy certain properties dual to those which we have already established, as set out in our next results. First let us note that, by use of the dual operations ⊕' and ⊗' in place of ⊕ and ⊗ respectively, we may introduce the terms dually sublinear, dually submultiplicative and dual E₁-seminorm; and further, the terms left and right dually scale-free and left and right dually factor-free.
Theorem 19-10. Let E₁ be a linear blog and let p_C, p_R, p have their usual significance. Then:

(i) p_C is (right) dually scale-free and dually factor-free.
(ii) p_R is left dually scale-free and left dually factor-free.
(iii) If E₁ is commutative then p is dually submultiplicative.

Proof. For (i):

p_C(A ⊗' X) = p_C((X* ⊗ A*)*) = p_R(X* ⊗ A*)        (by Lemma 19-9)

and the right-hand side does not exceed p_R(A*) = p_C(A) by the factor-free property of p_R; the remaining assertions follow similarly by conjugation. ∎
We remarked earlier that our matrices of bounded range form closed algebraic systems, and illustrated this fact in Theorems 19-6 and 19-8, which are analogous to Theorem 12-3 (for G-astic matrices) and Theorem 12-4 (for finite matrices). We now add further results, in the spirit of Theorems 13-17 and 13-19.
Theorem 19-11. Let E₁ be a linear blog and for any given integers m, n ≥ 1 let J_mn denote {A | A ∈ E_mn and p(A) < +∞}, where p denotes the range seminorm on E_mn. Then (J_mn, ⊕, ⊕') and (J_nm, ⊕, ⊕') are commutative bands with duality, which are conjugate; and for any given integer p ≥ 1, we have:

(19-6)

Proof. The mapping T : A ↦ g_A (in the usual notation) gives, according to Theorem 5-10, a bijection between J_mn and a subset of Hom(E_np, E_mp). However, our theorem asserts something a little stronger, namely that T is bijective if we replace each g_A by its restriction g_A|J_np. This follows, in fact, by the same reasoning as leads to Proposition 10-13. The rest follows on taking n = m. ∎
Let J_mn denote {A | A ∈ E_mn and p(A) < +∞} and J_nm denote {A | A ∈ E_nm and p(A) < +∞}. Then (J_mn, ⊕, ⊕') and (J_nm, ⊕, ⊕') are both commutative bands with duality, which are conjugate; and (J_mm, ⊕, ⊗) and (J_mm, ⊕', ⊗') are both belts with duality, which are conjugate. ∎
20. THE ZERO-LATENESS PROBLEM
Lemma 20-1. Let E₁ be a pre-residuated belt satisfying axiom X12 and let A ∈ E_mn, b ∈ E_m. If u ∈ E_n satisfies A ⊗ u ≤ b, then u ≤ A* ⊗' b and:

A ⊗ u ≤ A ⊗ (A* ⊗' b) ≤ b        (20-1)

Proof. By Lemma 9-1, if A ⊗ u ≤ b then u ≤ A* ⊗' b, so (matrix multiplications being isotone) A ⊗ u ≤ A ⊗ (A* ⊗' b); and A ⊗ (A* ⊗' b) ≤ b by the residuation property. ∎
If we subtract A ⊗ u componentwise from b, we get a measure of how far the approximation falls below its target. In more general terms, let E₁ be a self-conjugate belt, and let x, b ∈ E_m with x ≤ b. Define Δ(x, b) ∈ E_m by:

{Δ(x, b)}_i = ({b}_i)* ⊗ {x}_i        (i = 1, ..., m)        (20-2)

and apply to Δ(x, b) some function T ∈ F(E₁), regarding T(Δ(x, b)) as measuring how badly x approximates b. The zero-lateness problem is then:

given A ∈ E_mn and b ∈ E_m, minimise T(Δ(A ⊗ u, b)) over all u ∈ E_n satisfying A ⊗ u ≤ b        (20-3)
Theorem 20-2. Let E₁ be a pre-residuated belt satisfying axiom X12 and let T be the maximum-seminorm (18-4) defined on E_m. Then problem (20-3) is solved by taking:

x = A* ⊗' b        (20-4)

because

T(Δ(z, b)) = Σ⊕_{i=1,...,m} (({b}_i)* ⊗ {z}_i)

and, for any u satisfying A ⊗ u ≤ b, Lemma 20-1 gives:

A ⊗ u ≤ A ⊗ (A* ⊗' b) ≤ b        (20-5)

But from (20-4), relation (20-5) is just A ⊗ u ≤ A ⊗ x ≤ b, so no feasible u gives a smaller value of T(Δ(A ⊗ u, b)) than x does. ∎
Theorem 20-2 thus gives, for the principal interpretation, the solution to the Chebychev problem posed under (11-2). In fact, because it makes statements only about the "worst component", Theorem 20-2 is a good deal weaker than (20-1), which makes statements about all components. We can take account of this by introducing, for the principal interpretation, the norm defined by:

T(x) = Σ_{i=1,...,m} |{x}_i|        (20-6)
20-2. Cases of Equality

Since A ⊗ (A* ⊗' b) gives us the Chebychev-best under-approximation to b lying in the column-space of A, it is intuitive that A ⊗ (A* ⊗' b) must equal b in at least one coordinate position, otherwise we could take x "a little greater" than A* ⊗' b and obtain a "better solution" to problem (20-3). The following result gives conditions which are sufficient to guarantee this intuitive result.
Theorem 20-5. Let E₁ be a linear blog and let A ∈ E_mn, b ∈ E_m. If at least one component of A* ⊗' b is finite then A ⊗ (A* ⊗' b) has the same value as b in at least one coordinate position, the common value being finite.

Proof. Suppose component J of A* ⊗' b is finite:

{A* ⊗' b}_J = Σ⊕'_{k=1,...,m} ({A*}_Jk ⊗' {b}_k) is finite.        (20-7)

Since E₁ is linear, the right-hand side equals {A*}_JK ⊗' {b}_K for some index K, and it follows from this that both {A*}_JK and {b}_K are finite. Since multiplication is self-dual for finite elements, we may rewrite (20-7) accordingly, and compute:

{A ⊗ (A* ⊗' b)}_K = Σ⊕_{j=1,...,n} ({A}_Kj ⊗ {A* ⊗' b}_j)        (by (3-15))
                  ≥ {A}_KJ ⊗ {A* ⊗' b}_J = {A}_KJ ⊗ ({A}_KJ)* ⊗ {b}_K = {b}_K        (20-9)

Hence A ⊗ (A* ⊗' b) at least equals b at its Kth coordinate position, which coupled with (20-1) shows that equality occurs at the Kth coordinate position. ∎
Given A ∈ E_mn and b ∈ E_m, find x ∈ E_n such that:

max_{i=1,...,m} |{b}_i − {A ⊗ x}_i| is minimised.        (20-10)
We simply find the principal solution A ⊗ (A* ⊗' b) and then increase it in all components by one-half of the maximum deviation. An example will make this clear. We take a 3×2 real matrix A and a 3-tuple b.
Since p_C(A) = 5 whilst p_C(b) = 7, Theorem 19-3 tells us that b cannot lie in the column-space of A. Let us solve problem (20-10) for these data. We find that A ⊗ (A* ⊗' b) agrees with b in components two and three but underestimates by 8 in component one. If we add 4 = ½(8) to all components, we have as our solution the 3-tuple A ⊗ (A* ⊗' b) increased by 4 throughout.
This 3-tuple lies in the column-space of A and has a maximum absolute deviation from b equal to 4 (in component one). No other element v of the column-space of A can do better, else by subtracting a constant (i.e. making a scalar multiplication in max algebra) we should be able to derive an element w of the column-space of A violating Lemma 20-1.
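In the principal interpretation this two-step recipe is mechanical: compute the principal solution, then shift by half the worst deviation. The sketch below uses assumed data (the text's own A and b are illegible in this copy), and the helper names are ours.

```python
def mp_mul(A, x):
    """Max-plus product A ⊗ x."""
    return [max(a + xj for a, xj in zip(row, x)) for row in A]

def principal_solution(A, b):
    """x̂ = A* ⊗' b: the greatest x with A ⊗ x <= b (min-plus residual)."""
    n = len(A[0])
    return [min(b[i] - A[i][j] for i in range(len(A))) for j in range(n)]

A = [[0, 3], [2, 1], [5, 4]]        # assumed 3x2 data
b = [10, 7, 9]

xh = principal_solution(A, b)       # x̂ = [4, 5]
under = mp_mul(A, xh)               # [8, 6, 9]: <= b, equal somewhere (Thm 20-5)
dev = max(bi - ui for bi, ui in zip(b, under))   # worst under-estimate = 2
x = [xj + dev / 2 for xj in xh]     # Chebychev solution for problem (20-10)
print(max(abs(bi - vi) for bi, vi in zip(b, mp_mul(A, x))))   # 1.0
```

Shifting a column-space element by a constant is just a max-plus scalar multiplication, so the shifted tuple stays in the column-space.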
Of course, the procedure generalises to more abstract blogs, but extra axiomatisation
is necessary and this would take us too far afield in the context of the present
memorandum.
20-3. Critical Paths. Let us reconsider the multiple-cycle example of Section 9-3 in the light of Theorem 20-5. Taking A and z as in that example, we now choose x ≥ A⁴* ⊗' z.
As discussed in Chapter 9, the 3-tuple x gives start-times consistent with the target completion times z. Suppose in fact we adopt the start-times x and compute successively A ⊗ x, A² ⊗ x, A³ ⊗ x and A⁴ ⊗ x. Fig 20-1 sets out all the relevant information.
[Fig 20-1: for r = 0, 1, 2, 3, 4, the 3-tuples A^r ⊗ x (top) are tabulated against A^(4−r)* ⊗' z (bottom), components of equal value being ringed.]
Let us now examine Fig 20-1 to discover whether there are activities which have zero float; as discussed in Chapter 9, these can be identified by components which have equal value in the two 3-tuples in a particular column. Thus in the column r = 4 the middle component of the two 3-tuples has the common value 11. For convenience we have ringed all such components in the top row of 3-tuples in Fig 20-1; in all other cases, of course, elements in the top row are strictly less than corresponding elements in the bottom row.

We shall call a table constructed in the manner of Fig 20-1 a double orbit table. The following result is relevant to the properties of such tables.
Theorem 20-6. Let E₁ be a linear blog, let A ∈ E_mn be doubly G-astic, let b ∈ E_m be finite and let x ∈ E_n satisfy A ⊗ x ≤ b. If for some i (1 ≤ i ≤ m) there holds {A ⊗ x}_i = {b}_i, then for some j (1 ≤ j ≤ n) there hold:

{A}_ij ⊗ {x}_j = {b}_i

with {A}_ij and {x}_j finite, having finite product. Hence, using the fact that multiplication is self-dual for finite elements:

{A}_ij ⊗' {x}_j = {b}_i
Proof. Theorem 20-6 implies that if some element may be ringed in column r ≥ 1 of the table then some element may be ringed in column (r − 1). And the dual of Theorem 20-6 implies that if some element may be ringed in column (r − 1), (r − 1) ≤ (N − 1), then some element may be ringed in column r. Hence ringing is possible in all columns, or in none. ∎
If element {A^r ⊗ x}_i (r ≥ 1) of a double orbit table is ringed, then according to Theorem 20-6 we can find some ringed element {A^(r−1) ⊗ x}_j in column (r − 1) such that:

{A}_ij ⊗ {A^(r−1) ⊗ x}_j = {A^r ⊗ x}_i

Let us join all such pairs of elements with a line, as shown in Fig 20-1. Then starting from any ringed element in column N, we may trace a continuous path along such lines, moving left until we arrive at a ringed element in column 0. Such a path is known as a critical path. In terms of project planning, we may say that no activity on the critical path may be delayed without causing the activity at the right-hand end of the path to be late, thus violating the zero-lateness condition. Hence in a practical situation the activities on the critical path would attract great management attention.

There may be more than one critical path. For example, if we choose x = A⁴* ⊗' z in the preceding example we get the double orbit table shown in Fig 20-2.
[Fig 20-2: the double orbit table for x = A⁴* ⊗' z; here the two 3-tuples agree (and are ringed) in many components.]
Fig 20-2 contains five critical paths. It is easy to see that Theorem 20-6 together with its dual implies that all and only ringed elements in column 0 begin critical paths; that all and only ringed elements in column N end critical paths; and that all and only ringed elements in other columns are intermediate elements of critical paths.
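A double orbit table is itself easy to generate mechanically in the principal interpretation. The sketch below uses an assumed 2×2 example with horizon N = 2 (the book's 3×3 data are illegible in this copy); with x taken as the principal solution, every component is "ringed".

```python
def mp_mul(A, x):
    """Max-plus product A ⊗ x (forward orbit step)."""
    return [max(a + xj for a, xj in zip(row, x)) for row in A]

def mp_dual_residual(A, z):
    """A* ⊗' z: latest starts meeting targets z (min-plus step)."""
    return [min(z[i] - A[i][j] for i in range(len(A))) for j in range(len(A[0]))]

A = [[1, 3], [2, 0]]            # assumed activity-duration matrix
z = [10, 9]                     # assumed target completion times
N = 2

# Backward sweep: column r holds A^(N-r)* ⊗' z.
back = [z]
for _ in range(N):
    back.insert(0, mp_dual_residual(A, back[0]))

# Forward sweep from the principal start-times x = back[0].
fwd = [back[0]]
for _ in range(N):
    fwd.append(mp_mul(A, fwd[-1]))

for r in range(N + 1):
    print(r, fwd[r], back[r])   # every pair agrees: all components ringed
```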
21. PROJECTIONS

21-1. Congruence Classes
Let E₁ be a pre-residuated belt satisfying axiom X12 and let A ∈ E_mn for given integers m, n ≥ 1. As usual, denote by g_A, g*_A the functions:

g_A : E_n → E_m, where g_A(x) = A ⊗ x        (x ∈ E_n)        (21-1)
g*_A : E_m → E_n, where g*_A(x) = A* ⊗' x        (x ∈ E_m)
In the terminology of Sections 8-4 and 18-1, (Ran g_A, ⊕) is the column-space of A and (Ran g*_A, ⊕') the dual column-space of A*. Define the projection operators π_A = g_A ∘ g*_A and π*_A = g*_A ∘ g_A. Then:

g_A ∘ π*_A = π_A ∘ g_A = g_A        (21-3)
π_A ∘ π_A = π_A,   π*_A ∘ π*_A = π*_A
Now we know from Lemma 8-10 that g_A|Ran g*_A and g*_A|Ran g_A are mutually inverse bijections; but we cannot say that they constitute isomorphisms, since it is not at once clear how to endow Ran g*_A and Ran g_A with the structure of commutative belts with duality. It is clear that (Ran g_A, ⊕) is a commutative band and (Ran g*_A, ⊕') a (dual) commutative band, but in general the dualities in (E_n, ⊕, ⊕'), (E_m, ⊕, ⊕') do not extend to Ran g*_A and Ran g_A. Thus, although it is true to say:

(21-4)
It is to this problem that we now address ourselves. First, in the light of (21-4) and its dual, we record the following.

Proposition 21-1. Let E₁ be a pre-residuated belt satisfying axiom X12 and let A ∈ E_mn for given integers m, n ≥ 1. Using the notation of (21-1), define binary relations ≡, ≡* on E_n, E_m respectively by:

(21-5)

Let us call the congruence classes established by (21-5) the g_A-congruence classes and the g*_A-congruence classes respectively.

Theorem 21-2. Let E₁ be a pre-residuated belt satisfying axiom X12 and let A ∈ E_mn for given integers m, n ≥ 1. Then each g_A-congruence class of E_n contains exactly one element of Ran g*_A, and each g*_A-congruence class of E_m contains exactly one element of Ran g_A.
Proof. The element π*_A(u) = g*_A(g_A(u)) belongs to Ran g*_A and, by (21-3), g_A(π*_A(u)) = g_A(u), so each g_A-congruence class contains at least one element of Ran g*_A. Suppose now that u, v ∈ Ran g*_A lie in the same class, i.e.:

g_A(u) = g_A(v)        (21-6)

Then, since u, v ∈ Ran g*_A, Lemma 8-10 gives u = g*_A(g_A(u)) = g*_A(g_A(v)) = v. The argument for the g*_A-congruence classes is dual. ∎
[Fig 21-1: the g_A-congruence classes of E_n and the g*_A-congruence classes of E_m, paired off by the mutually inverse bijections g_A, g*_A.]

For u, v ∈ Ran g_A, define:

u ⊕̇' v = π_A(u ⊕' v)        (21-7)
Theorem 21-4. Let E₁ be a pre-residuated belt satisfying axiom X12 and let A ∈ E_mn for given integers m, n ≥ 1. Then (Ran g_A, ⊕, ⊕̇') is a commutative band with duality, when ⊕̇' is as in (21-7).

We now prove the associativity of the operation ⊕̇'. From (21-3) and (21-7):

(u ⊕̇' v) ⊕̇' w = π_A(π_A(u ⊕' v) ⊕' w) = π_A(u ⊕' v ⊕' w)        (21-10)

(by the dual linearity of g*_A), and symmetrically u ⊕̇' (v ⊕̇' w) = π_A(u ⊕' v ⊕' w); whilst for idempotency, u ⊕̇' u = π_A(u ⊕' u) = π_A(u) = u (by (21-7) and (21-3)). The proof is now complete. ∎
Obviously (Ran g_A, ⊕) admits the same right scalar multiplication as (E_m, ⊕) and is therefore a subspace of (E_m, ⊕). Now let us define a right dual scalar multiplication on Ran g_A:

u ⊙' λ = π_A(u ⊗' λ)        (21-11)

Theorem 21-5. Let E₁ be a pre-residuated belt satisfying axiom X12 and let A ∈ E_mn for given integers m, n ≥ 1. Then (Ran g_A, ⊕̇') is a (dual) right bandspace over E₁.

Proof. Let u, v ∈ Ran g_A; λ, μ ∈ E₁. We confirm (appropriate duals of) S2, S3, S4 of Section 5-1, as follows.
Each verification proceeds in the same way: expand the left-hand side by (21-11), use the dual linearity of g*_A together with the relations (21-3) and the appropriate dual of (5-1), and contract again by (21-11). ∎
The following theorem now establishes the isomorphism discussed in Section 21-1.

Theorem 21-6. Let E₁ be a pre-residuated belt satisfying axiom X12 and let A ∈ E_mn for given integers m, n ≥ 1. Then (Ran g_A, ⊕, ⊕̇') and (Ran g*_A, ⊕̇, ⊕') are isomorphic as right spaces with duality over E₁, when ⊕̇', ⊕̇ are as in (21-7) and its dual respectively, and the dual scalar multiplication for Ran g_A and the scalar multiplication for Ran g*_A are as in (21-11) and its dual.

Proof. That (Ran g_A, ⊕, ⊕̇') and (Ran g*_A, ⊕̇, ⊕') are spaces with duality is established in the preceding results. Moreover, g_A and g*_A provide mutually inverse bijections between them. That these bijections respect the operations follows, for all u, v ∈ Ran g*_A, by expanding with (21-3), the dual linearity of g*_A, (21-7) and its dual, and (21-11). ∎
21-3. Projection matrices. In Section 21-1 we introduced the projection operators π_A, π*_A for a given matrix A ∈ E_mn. If n = 1, consider the projection operators π_ξ, π*_ξ for given ξ ∈ E_m. For arbitrary x ∈ E_m there holds:

π_ξ(x) = ξ ⊗ (ξ* ⊗' x)        (21-13)

Accordingly, write P(ξ) = ξ ⊗' ξ* and P*(ξ) = ξ ⊗ ξ*. We shall call P(ξ) the projection matrix associated with ξ and P*(ξ) the dual projection matrix associated with ξ. (These definitions may appear to be the wrong way round, but the following result motivates the choice of names.)

Theorem 21-7. If E₁ is a blog and ξ ∈ E_m is finite (for given integer m ≥ 1), then for any x ∈ E_m there hold:

π_ξ(x) = P(ξ) ⊗' x  and dually  ξ ⊗' (ξ* ⊗ x) = P*(ξ) ⊗ x        (21-14)
The following result is the exact analogue of a classical result for projection matrices - namely that they are just the idempotent self-conjugate square matrices.

Theorem 21-8. Let E₁ be a blog, and let P ∈ E_mm (for given integer m ≥ 1) be finite. Then necessary and sufficient conditions that P be the projection matrix associated with some ξ ∈ E_m are:

(i) P = P* and
(ii) P² = P (so P²* = P*)

Proof. Suppose firstly that P = P(ξ). Then ξ is finite, since each product {ξ}_i ⊗' ({ξ}_j)* is finite. Hence:

(i) {P*}_ij = ({P}_ji)* = ({ξ}_j ⊗' ({ξ}_i)*)* = {ξ}_i ⊗' ({ξ}_j)*        (because ξ is finite)
           = {P}_ij

(ii) {P²}_ij = Σ⊕_{k=1,...,m} (({ξ}_i ⊗' ({ξ}_k)*) ⊗ ({ξ}_k ⊗' ({ξ}_j)*)) = {ξ}_i ⊗' ({ξ}_j)* = {P}_ij

since ({ξ}_k)* ⊗ {ξ}_k = φ for finite ξ.

Conversely, suppose (i) and (ii) hold, so that:

P ⊗ P* = P² = P = P* = P²* = P ⊗' P*

Now for given i, j (1 ≤ i ≤ m; 1 ≤ j ≤ m) we may expand {P ⊗' P*}_ij, and, taking ξ to be a suitable column of P, finiteness and the relations above give P = ξ ⊗' ξ* = P(ξ). ∎
We consider now some properties of the matrix P(ξ) when ξ is chosen from the column-space of a given matrix.

Lemma 21-9. Let E₁ be a pre-residuated belt satisfying axiom X12, let A ∈ E_mn (for given integers m, n ≥ 1) have columns a(1), ..., a(n), and let ξ ∈ Ran g_A. Then P(ξ) ≤ A ⊗' A*.

Proof. From Theorem 8-8, with ξ and A in the roles of A and B respectively:

{Σ⊕'_{k=1,...,n} (a(k) ⊗' (a(k))*)}_ij ≤ {A ⊗' A*}_ij

so that

P(ξ) ≤ Σ⊕ P(a(k)) ≤ A ⊗' A*

But if ξ, a(k) are finite then by Theorem 21-8, P*(ξ) = P(ξ) and P*(a(k)) = P(a(k)), and the result follows. ∎
We shall make use of the properties of projection matrices later, in Chapter 26.
22. DEFINITE AND METRIC MATRICES

A path in a graph (N, F) is a sequence:

(i₀, i₁, ..., i_t)        (22-1)

of nodes i_k ∈ N such that i_k ∈ F(i_{k−1}), (k = 1, 2, ..., t). The nodes i₀, ..., i_t will be said to be on the path (22-1), and the path will be said to pass through them. The length of the path (22-1) is t.
The path (22-1) is said to be an elementary path if the nodes i₀, ..., i_t are pairwise distinct. The length of an elementary path is thus at most n − 1.

A path (22-1) is called a circuit if i₀ = i_t, and it is an elementary circuit if i₀ = i_t and the path (i₀, i₁, ..., i_{t−1}) is elementary. If the circuit consists of a single arc, then it is called a loop. The length of a circuit is its length as a path; in particular, loops have length one. Elementary circuits have length at most n. Notice that the definition of a circuit does not regard e.g. (i₀, i₁, i₂, i₀) as being the same circuit as e.g. (i₁, i₂, i₀, i₁).
To each path (i₀, i₁, i₂, ..., i_t) of a weighted graph (N, F), we may associate a (path) product (if i₀ = i_t, then a (circuit) product) defined as:

a_{i₀i₁} ⊗ ... ⊗ a_{i_{t−1}i_t}        (22-2)

If E₁ is a belt with duality then path dual product and circuit dual product are defined in the obvious way.

For example, in Fig 22-1, where an abstract graph is given its usual visual presentation and the weights are from the principal interpretation: (1, 2, 3) is an elementary path from 1 to 3. The path product is a₁₂ ⊗ a₂₃ = (−2) + (−4) = −6. (1, 2, 3, 1) is an elementary circuit. The circuit product p is given by a₁₂ ⊗ a₂₃ ⊗ a₃₁ = (−2) + (−4) + 0 = −6. The length of this circuit is t = 3.
A graph (N, F) is called strongly complete if for every pair of nodes i, j ∈ N there holds j ∈ F(i). For a strongly complete weighted graph of order n we can define the associated matrix A ∈ E_nn by:

{A}_ij = a_ij        (i, j = 1, ..., n)        (22-3)

Conversely, given such a square matrix over E₁, we can obviously define the associated (strongly complete, weighted) graph by first constructing a strongly complete graph of order n, setting N = {1, 2, ..., n} and F(i) = N for all i ∈ N, and then introducing the weights a_ij as in (22-3).

As a standard notation, we shall use Δ(A) to denote the graph associated with a given matrix A.
In the sequel, we shall make frequent use of the following simple considerations. If A is a given matrix then:

{A²}_ij = Σ⊕_{k=1,...,n} ({A}_ik ⊗ {A}_kj)        (i, j = 1, ..., n)

so each element of A² is a summation of path products of paths of length two in Δ(A). Similarly:

{A^r}_ij = Σ⊕_k p_k

where each p_k is a path product for a path of length r from i to j in Δ(A), and the summation is over all possible such paths. Hence also {A^r}_ij ≥ p_k for every such path.
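In the principal interpretation this "powers enumerate paths" reading can be tested directly; the brute-force enumeration below is our own illustrative check, not from the text.

```python
from itertools import product

NEG = float("-inf")

def mp_matmul(A, B):
    """Max-plus matrix product."""
    return [[max(A[i][k] + B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def best_path_product(A, i, j, r):
    """Greatest path product over all paths of length r from i to j in Δ(A)."""
    n = len(A)
    best = NEG
    for mid in product(range(n), repeat=r - 1):
        seq = (i,) + mid + (j,)
        best = max(best, sum(A[u][v] for u, v in zip(seq, seq[1:])))
    return best

A = [[0, -2, 5], [3, 1, -1], [2, 4, 0]]   # assumed weights
A3 = mp_matmul(mp_matmul(A, A), A)
assert all(A3[i][j] == best_path_product(A, i, j, 3)
           for i in range(3) for j in range(3))
```

Every entry of A³ is exactly the best path product over all length-3 paths between the corresponding nodes.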
22-2. Definite Matrices. Suppose now that E₁ is a blog and B ∈ E_nn. Adapting a term from Carré [17], we shall say that B is definite if it satisfies the following condition:

Every circuit σ of Δ(B) has circuit product p(σ) ≤ φ, and at least one circuit of Δ(B) has circuit product equal to φ.        (22-4)

An equivalent condition (22-5) is obtained on restricting the circuits in (22-4) to be elementary. To see the equivalence, we show by induction the hypothesis (22-7): for every circuit T of length t(T) ≤ k there exists an elementary circuit σ of Δ(B) with p(T) ≤ p(σ).

Certainly, (22-7) is true for k = 1, since if t(T) = 1 then T is a loop and so elementary, whence p(T) ≤ p(σ) holds with σ = T.

Suppose hypothesis (22-7) then true for a particular value of k ≥ 1 and let α be any non-elementary circuit having t(α) = k + 1. (Such circuits certainly exist - e.g. (i₀, i₀, ..., i₀).)

Suppose α is (i₀, i₁, ..., i_k, i₀). Now, as we trace α starting from i₀ and visiting in turn i₁, ..., i_k, let j be the first node to be encountered for the second time. Then α contains an elementary subcircuit η which runs from the first occurrence of j to the second occurrence of j. Let T be the circuit obtained by tracing α from i₀ to the first occurrence of j, and then from the nodes following the second occurrence of j to the end.

Then p(η) ≤ φ by (22-6). Now p(α) is calculated as a product of terms consisting of those making up p(T), interrupted by (or preceded by or followed by) those making up p(η). So by isotonicity, p(α) ≤ p(T). But evidently t(T) ≤ k, so by (22-7) there exists an elementary circuit δ of Δ(B) such that p(T) ≤ p(δ). Hence also p(α) ≤ p(δ).        (22-8)

In the light of (22-8) it is clear that (22-4) and (22-5) are equivalent.
Theorem 22-3. Let E₁ be a blog and let B ∈ E_nn (for given integer n ≥ 1) be either row-φ-astic or column-φ-astic. Then B is definite.

Proof. Suppose B is row-φ-astic. Obviously p(σ) ≤ φ for each circuit σ in Δ(B), since p(σ) is a product of elements {B}_ij ≤ φ. Now for each index i = 1, ..., n, let c(i) be the lowest index (1 ≤ c(i) ≤ n) such that {B}_{i c(i)} = φ, and consider the path:

(1, c(1), c²(1), ..., cⁿ(1))        (22-9)

where c²(1) = c(c(1)), etc. Since the path (22-9) contains (n + 1) terms drawn from a set of n indices, it must contain a repeated term, and thus a circuit σ (say). Evidently p(σ) = φ. The argument when B is column-φ-astic is similar. ∎
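In the principal interpretation (φ = 0), the pointer-chasing argument of the proof is easy to run: follow, from each node, the first arc of weight 0 until a node repeats. The matrix below is an assumed row-0-astic example; the helper names are ours.

```python
def zero_circuit(B):
    """Follow c(i) = first j with B[i][j] == 0 until a node repeats;
    return the resulting circuit of zero-weight arcs."""
    c = [row.index(0) for row in B]          # row-0-astic: each row has a 0
    path, seen = [0], {0}
    while True:
        nxt = c[path[-1]]
        if nxt in seen:
            return path[path.index(nxt):] + [nxt]
        path.append(nxt)
        seen.add(nxt)

B = [[-1, 0, -2], [0, -3, -4], [-2, -5, 0]]  # entries <= 0, each row max = 0
circ = zero_circuit(B)
weight = sum(B[u][v] for u, v in zip(circ, circ[1:]))
print(circ, weight)   # a circuit with circuit product 0, so B is definite
```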
Theorem 22-4. Let E₁ be a blog and let B ∈ E_nn for given integer n ≥ 1. If B is definite then so is B^r for any integer r > 0.

Proof. By convention, B⁰ is I_n, the (n×n) identity matrix whose diagonal elements are all φ, and whose off-diagonal elements are all −∞. Evidently, I_n is definite. So assume r ≥ 1.

The circuit product p of a given circuit σ of length t in Δ(B^r) has the form:

p(σ) = {B^r}_{i₀i₁} ⊗ ... ⊗ {B^r}_{i_{t−1}i_t}   with i_t = i₀        (22-10)

Each factor {B^r}_{i_{k−1}i_k} has the form Σ⊕_h p_{kh}, where each p_{kh} is a path product of a path from i_{k−1} to i_k of length r in Δ(B), the summation with respect to the dummy variable h being over all such paths. Hence, for all h₁, ..., h_t, the product p_{1h₁} ⊗ ... ⊗ p_{th_t} is a circuit product in Δ(B), and so does not exceed φ. Closing over all choices of h₁, ..., h_t:

p(σ) ≤ φ        (22-11)

On the other hand the graph Δ(B) contains, by hypothesis, a circuit τ of length s (say) and of circuit product p(τ) = φ. Suppose τ is (j₀, j₁, ..., j_s), where j_s = j₀, and consider a product of r identical terms:

p(τ) ⊗ p(τ) ⊗ ... ⊗ p(τ) = φ        (22-12)

This is the product along the closed walk which traces τ r times; regrouping its rs factors into s consecutive blocks of r each, we obtain the sequence:

(j₀, j_r, j_{2r}, ..., j_{rs})   (subscripts reduced modulo s)        (22-13)

Obviously (22-13) exhibits a circuit in Δ(B^r), having circuit product at least equal to φ. Together with (22-11), this shows that B^r is definite. ∎
Given B ∈ E_nn, we define the metric matrix generated by B as:

Γ(B) = B ⊕ B² ⊕ ... ⊕ Bⁿ        (22-14)

And if E₁ has a duality, we define the dual metric matrix generated by B analogously, using ⊕' in place of ⊕. The metric matrix satisfies Γ(B) = B ⊗ (I ⊕ B)^(n−1).

Proof. If n = 1 then (I ⊕ B)^(n−1) is by convention I and the result follows. Otherwise, since I, B commute, we may carry out the iterated multiplication (I ⊕ B) ⊗ ... ⊗ (I ⊕ B) to obtain I ⊕ ... ⊕ (powers of B). Each power of B occurs at least once, up to and including B^(n−1), and since B^r ⊕ B^r = B^r, we have:

B ⊗ (I ⊕ B)^(n−1) = B ⊕ B² ⊕ ... ⊕ Bⁿ = Γ(B)  ∎

Theorem 22-6. Let B be definite. Then B^r ≤ Γ(B) for every integer r ≥ 1.
Proof. Fix i, j (1 ≤ i ≤ n; 1 ≤ j ≤ n). Each path η of length n + 1 from i to j in Δ(B), the graph associated with B, must contain a circuit σ, say, since some node must recur. The path product p(η) is therefore a product of the circuit product p(σ) for this circuit, together with factors making up a path product p(τ) for a path τ of length t(τ) ≤ n. But p(σ) ≤ φ by hypothesis, so p(η) ≤ p(τ) by isotonicity.

Now p(τ) ≤ {B^t(τ)}_ij        (by Proposition 22-1)
          ≤ {Γ(B)}_ij

Hence p(η) ≤ {Γ(B)}_ij. Closing w.r.t. η (regarded as indexing all paths of length (n + 1) from i to j):

{B^(n+1)}_ij ≤ {Γ(B)}_ij

Hence B^(n+1) ≤ Γ(B). Assume now that B^(n+s) ≤ Γ(B) for some integer s ≥ 1. Then by isotonicity, B^(n+s+1) = B ⊗ B^(n+s) ≤ B ⊗ Γ(B) = B² ⊕ ... ⊕ B^(n+1) ≤ Γ(B). Hence the result holds by induction for r > n, and is clearly trivial for 1 ≤ r ≤ n. ∎
Corollary 22-7. If B is definite then:

(Γ(B))^r ≤ Γ(B)  and  B ⊗ (I ⊕ B)^r ≤ Γ(B)        (r ≥ 1)

Proof. (Γ(B))² = (B ⊕ ... ⊕ Bⁿ)² is a sum ⊕ of powers B^s with 2 ≤ s ≤ 2n, each of which is ≤ Γ(B) by Theorem 22-6; the general case follows by induction on r. Moreover:

B ⊗ (I ⊕ B)^r = B ⊕ ... ⊕ B^(r+1) ≤ Γ(B)        (by Theorem 22-6)

from (22-14). Now we can readily confirm that the dual metric matrix Γ'(B*) is just (Γ(B))*, so (22-18) may be written equivalently as the corresponding statement about (Γ(B))*, which is the dual of the first result in Corollary 22-7, for r = 2. ∎
For example, in the principal interpretation with n = 5:

B* (= (I ⊕ B)*) = [  0    7    3    5   +∞ ]
                  [  7    0    3   +∞    1 ]
                  [  3    3    0   +∞    6 ]
                  [  5   +∞   +∞    0    3 ]
                  [ +∞    1    6    3    0 ]

(I ⊕ B)²* = [ 0  6  3  5  8 ]
            [ 6  0  3  4  1 ]
            [ 3  3  0  8  4 ]
            [ 5  4  8  0  3 ]
            [ 8  1  4  3  0 ]

(I ⊕ B)⁴* = [ 0  6  3  5  7 ]
            [ 6  0  3  4  1 ]
            [ 3  3  0  7  4 ]
            [ 5  4  7  0  3 ]
            [ 7  1  4  3  0 ]

(Γ(B))* = B* ⊗' (I ⊕ B)⁴* = [ 0  6  3  5  7 ]
                            [ 6  0  3  4  1 ]
                            [ 3  3  0  7  4 ]
                            [ 5  4  7  0  3 ]
                            [ 7  1  4  3  0 ]

Thus:

Γ'(B*) = (Γ(B))*        (22-19)
The preceding discussion leaves out any consideration of computational efficiency, since our present aim is the logically prior one of establishing algebraic structure. But see further [17] and [39].
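The dual (min-plus) arithmetic of this example is easy to check mechanically. The sketch below treats B* as a min-plus matrix and verifies that squaring (I ⊕ B)* twice reaches the closure displayed above; the helper names are our own.

```python
INF = float("inf")

def minplus(A, B):
    """Min-plus (dual) matrix product: {A ⊗' B}_ij = min_k (A_ik + B_kj)."""
    return [[min(A[i][k] + B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

Bstar = [[0, 7, 3, 5, INF],
         [7, 0, 3, INF, 1],
         [3, 3, 0, INF, 6],
         [5, INF, INF, 0, 3],
         [INF, 1, 6, 3, 0]]

M2 = minplus(Bstar, Bstar)      # (I ⊕ B)^2* : best paths of length <= 2
M4 = minplus(M2, M2)            # (I ⊕ B)^4* : best paths of length <= 4
print(M4 == minplus(M4, M4))    # True: the closure (Γ(B))* has been reached
print(M4[0][4], M4[2][3])       # 7 7, as in the tables above
```

With n = 5 nodes and non-negative weights, best paths are elementary (at most 4 arcs), so the fourth power is already idempotent.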
23. FUNDAMENTAL EIGENVECTORS

The eigenproblem for a given matrix is that of finding eigenvectors x and eigenvalues λ such that:

A ⊗ x = λ ⊗ x        (23-1)

Proposition 23-1. Let E₁ be a blog, and let A ∈ E_mn for given integers m, n ≥ 1. Then there exists a matrix B ∈ E_mm for which every m-tuple in the column-space of A is an eigenvector with corresponding eigenvalue φ. Namely, B = A ⊗' A*. ∎
In conventional linear algebra, it is necessary to stipulate that an eigenvector be non-zero by definition, in order to avoid trivialities. Analogously, we shall say that (23-1) is finitely soluble (when E₁ is a blog) if we can find finite λ and x satisfying (23-1).
Lemma 23-2. Let E₁ be a blog and let B ∈ E_nn, for given integer n ≥ 1. A necessary condition that the eigenproblem for B be finitely soluble is that B be row-G-astic; a sufficient condition is that B be row-φ-astic.

Proof. If B ⊗ x = λ ⊗ x with λ, x finite, then no {B}_ij can be +∞ and no row of B can consist entirely of −∞'s. Hence B is row-G-astic. Conversely, if B is row-φ-astic, take λ = φ and x with all components equal to φ; then for each i:

{B ⊗ x}_i = Σ⊕_{j=1,...,n} ({B}_ij ⊗ φ) = φ        (by (12-1))
          = {λ ⊗ x}_i  ∎
The following lemma will be useful in subsequent arguments.

Lemma 23-3. Let E₁ be a blog, and let B ∈ E_nn for given integer n ≥ 1. Let σ be a circuit in Δ(B), of circuit product p(σ), and let τ, of circuit product p(τ), be any circuit obtained from σ by cyclic permutation of the nodes on σ. Then p(τ) ≈ φ if and only if p(σ) ≈ φ, where ≈ is any given one of the symbols ≤, =, ≥.

Proof. Suppose σ is (i₀, i₁, ..., i_{t−1}, i₀). If p(σ) is not finite then at least one of {B}_{i₀i₁}, ..., {B}_{i_{t−1}i₀} is infinite, and rearrangement of the factors of p(σ) will produce the same (infinite) value for p(τ).

Now suppose all factors of p(σ) are finite and that:

{B}_{i₀i₁} ⊗ ... ⊗ {B}_{i_{t−1}i₀} ≤ φ

Multiplying on the left by ({B}_{i₀i₁})* and on the right by {B}_{i₀i₁} gives:

{B}_{i₁i₂} ⊗ ... ⊗ {B}_{i_{t−1}i₀} ⊗ {B}_{i₀i₁} ≤ φ

which is the assertion for the cyclic permutation beginning at i₁. And evidently we may proceed to generate all cyclic permutations in this way. Moreover, the argument goes through similarly when ≈ is = or ≥. ∎
Our study of (23-1) will extend over several chapters. First we shall establish some results for definite matrices. These will then enable us to prove some fundamental results regarding the eigenproblem for row-φ-astic matrices. Finally, we shall extend these results to all matrices for which (23-1) is finitely soluble, by finding a transformation of such matrices into row-φ-astic matrices.

So suppose a certain matrix B over a blog E₁ is definite. Then Δ(B) contains at least one circuit with circuit product equal to φ. An eigen-node of Δ(B) is any node on such a circuit. Two eigen-nodes are equivalent if they are both on any one such circuit. Lemma 23-3 implies that this defines an equivalence relation.
Lemma 23-4. Let E₁ be a blog and let B ∈ E_nn, for given integer n ≥ 1, be definite. If j is an eigen-node of Δ(B) then:

{Γ(B)}_jj = φ

Conversely, if E₁ is linear, and {Γ(B)}_jj = φ for some index j (1 ≤ j ≤ n), then j is an eigen-node.

Proof. If k (1 ≤ k ≤ n) is any index, then each circuit from k to k of length r ≥ 1 in Δ(B) has circuit product ≤ φ by hypothesis. Closing w.r.t. that class of circuits we have by Proposition 22-1 that {B^r}_kk ≤ φ, and hence by definition (22-14): {Γ(B)}_kk ≤ φ.

On the other hand, j is on some circuit in Δ(B) of length r (say), with circuit product φ, by hypothesis. By Lemma 23-3 we can assume this circuit to run from j to j. Hence by Proposition 22-1, {B^r}_jj ≥ φ, so by Theorem 22-6:

{Γ(B)}_jj ≥ φ

Hence {Γ(B)}_jj = φ. Now if k (1 ≤ k ≤ n) is any index, then any circuit σ of length r ≥ 1 from k to k in Δ(Γ(B)) has a circuit product p(σ) which by Proposition 22-1 satisfies p(σ) ≤ {(Γ(B))^r}_kk ≤ {Γ(B)}_kk (by Corollary 22-7).
Theorem 23-5. Let E₁ be a blog and let B ∈ E_nn, for given integer n ≥ 1, be definite. If j is an eigen-node of Δ(B), then B ⊗ ξ(j) = ξ(j), where ξ(j) is the jth column of Γ(B).

Proof. By Corollary 22-7 and (22-14), B ⊗ Γ(B) = B² ⊕ ... ⊕ B^(n+1) ≤ Γ(B), so that:

{B ⊗ ξ(j)}_i ≤ {ξ(j)}_i        (i = 1, ..., n)        (23-3)

On the other hand, since j lies on a circuit of circuit product φ, the reverse inequalities also hold, which together with (23-3) implies the required result. ∎
In other words, Theorem 23-5 states that columns of Γ(B) which correspond to eigen-nodes furnish eigenvectors for B, with corresponding eigenvalue φ. We call such columns of Γ(B) the fundamental eigenvectors of B. Such eigenvectors need not be finite, even for a row-φ-astic matrix. Consider for example:

B = [  φ   −∞ ]
    [ −∞    φ ]

B is row-φ-astic, and Γ(B) = B. Both columns of B are fundamental eigenvectors, but neither is finite.
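For a finite illustration in the principal interpretation (φ = 0), the sketch below takes an assumed definite 2×2 matrix, forms Γ(B) = B ⊕ B², and checks that a column is an eigenvector with eigenvalue 0; the helper names are ours.

```python
def mp_matmul(A, B):
    """Max-plus matrix product."""
    return [[max(A[i][k] + B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mp_mul(A, x):
    """Max-plus matrix-vector product."""
    return [max(a + xj for a, xj in zip(row, x)) for row in A]

B = [[0, -3], [-1, 0]]          # assumed: loops of weight 0, no positive circuit
B2 = mp_matmul(B, B)
Gamma = [[max(a, b) for a, b in zip(r1, r2)] for r1, r2 in zip(B, B2)]

# Both diagonal entries of Gamma are 0, so both nodes are eigen-nodes.
xi1 = [row[0] for row in Gamma]  # fundamental eigenvector ξ(1) = [0, -1]
print(mp_mul(B, xi1) == xi1)     # True: B ⊗ ξ(1) = 0 ⊗ ξ(1)
```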
Theorem 23-6. Let E₁ be a blog and let B ∈ E_nn, for given integer n ≥ 1, be definite. If ξ(h), ξ(k) are fundamental eigenvectors corresponding to equivalent eigen-nodes h and k respectively, then ξ(h), ξ(k) are finite scalar multiples of one another.
Proof. Since Γ(B) ⊗ Γ(B) ≤ Γ(B) (Corollary 22-7), we have for each i:

{ξ(k)}_i = {Γ(B)}_ik ≥ {Γ(B)}_ih ⊗ {Γ(B)}_hk = {ξ(h)}_i ⊗ {Γ(B)}_hk

Now, by hypothesis, nodes h and k are both on some circuit σ in Δ(B) of circuit product p(σ) = φ. By Lemma 23-3 we may assume that σ consists of a (not necessarily elementary) path α from h to k, of length t(α) and path product p(α) (say), followed by a (not necessarily elementary) path β, of length t(β) and path product p(β) (say), from k to h. Evidently:

p(α) ⊗ p(β) = φ        (23-10)

so that p(α) and p(β) are finite, and by Proposition 22-1, {Γ(B)}_hk ≥ p(α) and {Γ(B)}_kh ≥ p(β). Hence:

ξ(k) ≥ ξ(h) ⊗ p(α)   and   ξ(h) ≥ ξ(k) ⊗ p(β)

whence ξ(h) ≥ ξ(h) ⊗ p(α) ⊗ p(β) = ξ(h); equality must hold throughout, and ξ(k) = ξ(h) ⊗ p(α) is a finite scalar multiple of ξ(h). ∎
Theorem 23-6 shows that two equivalent eigen-nodes of Δ(B) determine essentially the same eigenvectors of B. We call two fundamental eigenvectors ξ(h), ξ(k) equivalent if nodes h and k are equivalent; otherwise we say they are non-equivalent.

In future arguments it will sometimes be convenient to assume that the indices 1, ..., n (and therefore the nodes in Δ(B)) have been (re)allocated in such a way that equivalent nodes are numbered consecutively. Specifically, if there are q eigen-nodes, which fall into r equivalence classes containing s₁, s₂, ..., s_r eigen-nodes each, with s₁ + ... + s_r = q ≤ n, then we shall say that B is blocked if indices are (re)allocated in such a way that nodes 1, ..., s₁ are equivalent eigen-nodes, ..., nodes (s₁ + ... + s_{r−1} + 1), ..., (s₁ + ... + s_r) are equivalent eigen-nodes, and nodes (s₁ + ... + s_r + 1) to n (if any) are not eigen-nodes.
Consider, for example, a certain matrix B over the principal interpretation, displayed together with its metric matrix Γ(B).
There are two circuits in Δ(B) having circuit product φ, namely:

(1, 2, 1) and (3, 3)

Hence the eigen-nodes are 1, 2 and 3, with 1 equivalent to 2. So B is blocked. If we ring the elements in B which contribute to circuit products of value φ, we see that they fall within blocks around the principal diagonal, corresponding to the equivalence classes of eigen-nodes.
Evidently Γ(B) may be partitioned in the same way. We note en passant that the first column of Γ(B) may be obtained from the second column by adding 3 to each component, thus illustrating Theorem 23-6.

[...] making up the circuit all equal φ. Hence the elements of B form a φ-astic set. The converse is trivial, and the rest follows from Theorem 22-3. ∎
Theorem 23-8. Let E₁ be a blog and let B ∈ E_nn, for given integer n ≥ 1, be φ-astic definite. Then:

(i) Γ(B) is φ-astic definite, and B^r is φ-astic definite for each integer r ≥ 1.
(ii) Equivalent fundamental eigenvectors of B are equal.
(iii) Each fundamental eigenvector ξ(j) of B is φ-astic, with {ξ(j)}_j = φ.

Proof. (i) {B²}_ij ≤ φ (i, j = 1, ..., n), since {B}_ik, {B}_kj ≤ φ. Similarly {B^r}_ij ≤ φ (i, j = 1, ..., n) and so {Γ(B)}_ij ≤ φ (i, j = 1, ..., n). But B^r is definite by Theorem 22-4, and Γ(B) is definite by Lemma 23-4.

(ii) From (i), {Γ(B)}_kh ≤ φ, so from (23-10) in the proof of Theorem 23-6, ξ(h) ≥ ξ(k). Similarly ξ(k) ≥ ξ(h), since equivalence is a symmetric relation.

(iii) {ξ(j)}_j = {Γ(B)}_jj = φ by Lemma 23-4, and all components are ≤ φ by (i). ∎
Lemma 23-9. Let E₁ be a linear blog and let B ∈ E_nn be φ-astic definite. If j and k are eigen-nodes with:

{Γ(B)}_jk = φ = {Γ(B)}_kj        (23-13)

then j and k are equivalent.

Proof. In the light of (23-13) and the linearity of E₁, Δ(B) contains a path of path product φ from j to k and one from k to j; these concatenate to a circuit of circuit product φ in Δ(B) on which nodes j and k both occur, and this implies the equivalence of j and k. ∎
Theorem 23-10. Let E₁ be a linear blog and let B ∈ M_nn, for given integer n ≥ 1, be
φ-astic definite. If ξ(j₁), ..., ξ(j_s) are fundamental eigenvectors corresponding to
    (23-14)
    φ = {ξ(j_h)}_{j_s} ⊗ α_h
Hence α_h is finite, and α_h ⊗ {ξ(j_h)}_{j_s} ≤ φ (since Γ(B) is φ-astic definite)   (23-15)
However, φ ≥ {ξ(j_s)}_{j_h} (since Γ(B) is φ-astic definite)   (23-16)
Evidently from (23-15) and (23-16) the only possibility is α_h = φ, giving also
{ξ(j_s)}_{j_h} = φ = {ξ(j_h)}_{j_s} (from the previous working).
But by Lemma 23-9 this contradicts the assumption that j_h and j_s are not equivalent. •
We conclude with the following lemma, which we shall require later.
Lemma 23-11. Let E₁ be a blog and let B ∈ M_nn, for given integer n ≥ 1, be row-φ-astic.
Let ξ(j₁), ..., ξ(j_s) be a maximal set of non-equivalent fundamental eigenvectors for
B, and let F(B) ∈ M_ns be the matrix whose columns are ξ(j₁), ..., ξ(j_s) in that order.
Then F(B) is doubly φ-astic.
Proof. It follows from Theorem 22-3 and Theorem 23-8 (iii) that Γ(B) is column-φ-astic.
So:
    φ ≤ {B^t}_{jd} ≤ {Γ(B)}_{jd}   (23-19)
    {F(B)}_{jh} = {ξ(j_h)}_j ≤ φ   (23-20)
Summarising, we see that (23-17) and (23-18) imply that row j of F(B) is
φ-astic when j is an eigennode, whilst (23-17) and (23-20) imply the same when
j is a non-eigennode. Hence F(B) is row-φ-astic, and so doubly φ-astic. •
24. ASPECTS OF THE EIGENPROBLEM
Lemma 24-1. Let E₁ be a blog and let B ∈ M_nn, for given integer n ≥ 1, be definite. Then
each element of the eigenspace of B is an eigenvector of B with corresponding eigenvalue
equal to φ. Any maximal set of non-equivalent fundamental eigenvectors ξ(j₁), ..., ξ(j_s) defines the same eigenspace.
Proof. Any element u of the eigenspace may be written: u = Σ⊕_{k=1}^{s} (ξ(j_k) ⊗ α_k)  (α_k ∈ E₁, k = 1, ..., s);
Theorem 23-6 shows that each maximal non-equivalent set ξ(j₁), ..., ξ(j_s) gives the same eigenspace.
Now, B ⊗ u = B ⊗ (Σ⊕_{k=1}^{s} (ξ(j_k) ⊗ α_k)) = Σ⊕_{k=1}^{s} ((B ⊗ ξ(j_k)) ⊗ α_k) = Σ⊕_{k=1}^{s} (ξ(j_k) ⊗ α_k) = u.
•
It is easy to see that the total set of eigenvectors of B which have corresponding
eigenvalue φ is a space, containing the eigenspace of B. Our next aim is to derive
sufficient conditions for all finite elements of this space actually to lie in the
eigenspace of B.
Lemma 24-2. Let E₁ be a blog and let B ∈ M_nn, for given integer n ≥ 1, be φ-astic definite.
Let u ∈ E_n be an eigenvector of B having corresponding eigenvalue φ. If h, k are (distinct)
Proof. By hypothesis and Lemma 23-3, there is some circuit starting at node h and
passing through node k, with circuit product φ. Let this circuit be (h, h₁, ..., h_{t-1}, h).
Then
    (24-1)
and since {B}_{hh₁}, ..., {B}_{h_{t-1}h} ≤ φ but (by (24-1)) are all finite, Corollary 4-12 shows
that {B}_{hh₁} = ... = {B}_{h_{t-1}h} = φ. Then, since u = B ⊗ u, we have by iteration:
    {u}_h = Σ⊕_{j=1}^{n} ({B}_{hj} ⊗ {u}_j)
          ≥ {B}_{hh₁} ⊗ {u}_{h₁}   (by (3-15))
          = {u}_{h₁}
          ≥ ...
          ≥ {B}_{h_{t-1}h} ⊗ {u}_h
          = {u}_h
Hence equality holds throughout; in particular {u}_h = {u}_k.  •
If we assume the matrix B is blocked, we can display the situation as in Fig 24-1.
    (24-3)
But this would indicate a cycle composed exclusively of non-eigennodes and having
cycle product φ, a contradiction.
Hence some eigennode d occurs in (24-4), and there holds, by repeated use of (24-2):
    (24-6)
Hence {u}_j = Σ⊕_{k=1}^{n} ({Γ(B)}_{jk} ⊗ {u}_k)   (from (24-6))   (24-7)
           ≥ {Γ(B)}_{j c(j)} ⊗ {u}_{c(j)}   (by (3-15))
           ...
           = {u}_j   (24-10)
•
Taking Theorem 24-4 with Theorem 23-10, we see that every finite eigenvector of B
having corresponding eigenvalue φ is a linear combination of certain linearly
independent fundamental eigenvectors.
Theorem 24-4 is the essential key to the eigenproblem for more general square
matrices. The following discussion establishes the connection. We shall say that
two matrices A, B ∈ M_nn over a blog are directly similar if there are n finite
elements η₁, ..., η_n ∈ E₁ such that {B}_ij = η_i^{-1} ⊗ {A}_ij ⊗ η_j. We shall write:
A ≈ B
Lemma 24-5. Let E₁ be a blog. Then ≈ is an equivalence relation on M_nn for any
given integer n ≥ 1. If A, B ∈ M_nn are such that
    B = P ⊗ A ⊗ Q and P ⊗ Q = Q ⊗ P = I, the (n×n) identity matrix,
then B^r = P ⊗ A^r ⊗ Q for all integers r ≥ 0 and Γ(B) = P ⊗ Γ(A) ⊗ Q.
If P = diag(η₁^{-1}, ..., η_n^{-1}) and Q = diag(η₁, ..., η_n), we readily verify that
    Γ(B) = Σ⊕_{r=1}^{n} B^r = Σ⊕_{r=1}^{n} (P ⊗ A^r ⊗ Q)
         = P ⊗ (Σ⊕_{r=1}^{n} A^r) ⊗ Q = P ⊗ Γ(A) ⊗ Q.  •
Lemma 24-6. Let E₁ be a blog and let A, B ∈ M_nn, for given integer n ≥ 1, be directly
similar. Then A is definite if and only if B is definite; and then a set of indices
j₁, ..., j_s gives a set of non-equivalent eigennodes in Δ(A) if and only if they give
a set of non-equivalent eigennodes in Δ(B).
with corresponding circuit products p(σ) and p(τ). We have, for suitable finite
elements η₁, ..., η_n ∈ E₁:
    p(τ) = (η_{i₀})^{-1} ⊗ {A}_{i₀i₁} ⊗ η_{i₁} ⊗ (η_{i₁})^{-1} ⊗ {A}_{i₁i₂} ⊗ η_{i₂} ⊗ ...
         = (η_{i₀})^{-1} ⊗ p(σ) ⊗ η_{i₀}
It follows that p(τ) ρ φ if and only if p(σ) ρ φ, where ρ is any of the relations
≤, =, ≥. Hence A is definite if and only if B is definite. Hence also, if A and B are
definite, then the same indices are eigennodes in Δ(A) and in Δ(B), and two indices
give equivalent eigennodes in Δ(A) if and only if they give equivalent eigennodes
in Δ(B). •
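In the principal interpretation direct similarity is conjugation by a diagonal scaling, {B}_ij = -η_i + {A}_ij + η_j, and the η's telescope around any circuit. A minimal sketch confirming that circuit products (hence circuit means, eigennodes and definiteness) are preserved; the matrix and the η's are illustrative:

```python
def directly_similar(A, eta):
    # {B}_ij = eta_i^{-1} ⊗ {A}_ij ⊗ eta_j, which in max-plus arithmetic
    # reads -eta[i] + A[i][j] + eta[j]
    n = len(A)
    return [[-eta[i] + A[i][j] + eta[j] for j in range(n)] for i in range(n)]

A = [[5, 2, 1], [3, 6, 4], [0, -2, 7]]   # illustrative
B = directly_similar(A, [1, -2, 4])

# Along any circuit the eta's cancel, so circuit products agree:
assert all(B[i][i] == A[i][i] for i in range(3))                    # loops
assert B[0][1] + B[1][0] == A[0][1] + A[1][0]                       # (1,2,1)
assert B[0][1] + B[1][2] + B[2][0] == A[0][1] + A[1][2] + A[2][0]   # (1,2,3)
```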
Lemma 24-7. Let E₁ be a blog and let A, B ∈ M_nn, for given integer n ≥ 1, be such that
A ≈ B. Let j₁, ..., j_{s+1} (s ≥ 1) be indices (1 ≤ j_k ≤ n; k = 1, ..., s+1). Then column j_{s+1}
of A is linearly dependent on columns j₁, ..., j_s of A if and only if column j_{s+1} of B is
linearly dependent on columns j₁, ..., j_s of B; similarly for the rows.
But by hypothesis, {B}_ij = η_i^{-1} ⊗ {A}_ij ⊗ η_j (i,j = 1, ..., n) for certain finite scalars
η_i (i = 1, ..., n). Hence
    {B}_{i j_{s+1}} = η_i^{-1} ⊗ {A}_{i j_{s+1}} ⊗ η_{j_{s+1}}
                    = η_i^{-1} ⊗ (Σ⊕_{k=1}^{s} ({A}_{i j_k} ⊗ α_k)) ⊗ η_{j_{s+1}}
where α'_k = η_{j_k}^{-1} ⊗ α_k ⊗ η_{j_{s+1}}. Pre- and post-multiplying, we obtain the desired result.
The converse holds by the symmetry of the relation ≈. The argument for the rows is
similar.
Lemma 24-8. Let E₁ be a linear blog and let A ∈ M_nn for given integer n ≥ 1. If α, β ∈ E₁ are such that α⊗A and β⊗A are both definite, then α = β and both are finite.
Proof. If σ is a circuit in Δ(α⊗A) with circuit product p(σ) = φ, let τ be the circuit
in Δ(β⊗A) determined by the same nodes as σ. If α ≠ β, suppose without loss of generality
that α < β. Since the product p(τ) is obtained by replacing α by β in the expression for
p(σ), it follows from Corollary 4-12 that p(τ) > p(σ) = φ. But this contradicts the
definiteness of β⊗A. Hence α = β. And evidently α is finite, since p(σ) = φ contains α as
a factor.  •
Theorem 24-9. Let E₁ be a linear blog and let A ∈ M_nn for given integer n ≥ 1. If the
eigenproblem for A is finitely soluble then every finite eigenvector has the same
unique corresponding finite eigenvalue λ. The matrix λ^{-1} ⊗ A is definite, and all
finite eigenvectors of A lie in the eigenspace of λ^{-1} ⊗ A. The non-equivalent fundamental
eigenvectors which generate this space have the property that no one of them is linearly
dependent on (any subset of) the others.
Proof. Suppose we can find λ, μ ∈ E₁ and x, y ∈ E_n, all finite, such that
    A ⊗ x = λ ⊗ x and A ⊗ y = μ ⊗ y.
Now Σ⊕_{j=1}^{n} ({A}_ij ⊗ {x}_j) = λ ⊗ {x}_i   (i = 1, ..., n)
Hence, by the linearity of E₁, there is for each index i = 1, ..., n a (least) index
c(i) (1 ≤ c(i) ≤ n) such that:
    {A}_{i c(i)} ⊗ {x}_{c(i)} = λ ⊗ {x}_i   (i = 1, ..., n)
(by (24-14)). Continuing the argument, we have
    {x}_i = Σ⊕_{k=1}^{s} ({Γ(λ^{-1}⊗A)}_{i j_k} ⊗ β_k)   (i = 1, ..., n)   (24-18)
But (24-18) says that x is a linear combination of columns in Γ(λ^{-1}⊗A) having the same
indices as those used in (24-17).
Since B ≈ λ^{-1}⊗A and both B and λ^{-1}⊗A are definite, Lemma 24-6 then implies that
x is a linear combination of non-equivalent fundamental eigenvectors of λ^{-1}⊗A, i.e.
that x lies in the eigenspace of λ^{-1}⊗A. Moreover, Theorem 23-10 and Lemma 24-7 imply
the required independence of the non-equivalent fundamental eigenvectors of λ^{-1}⊗A. •
We shall call the unique scalar λ (when it exists) in Theorem 24-9 the principal
eigenvalue of A.
25. SOLVING THE EIGENPROBLEM
Proof. Since E₁ is linear, either x > y or x ≤ y. In the latter case, x² ≤ x⊗y ≤ y² and by
iteration, xⁿ ≤ yⁿ. So a ≤ b, whence a = b. But then x = y by the uniqueness of x and y
entailed in (25-1). So x ≥ y in all cases.
Now suppose a ≥ φ. On putting b = y, we deduce x ≥ φ. And if b ≤ φ we deduce y ≤ φ on
putting x = a. So on interchanging the roles of x and y, a and b, we infer that
where the summation is taken over all elementary cycles in Δ(A). This notation will
be standard in the sequel.
For example, if A is the matrix associated with the graph of Fig 22-1, we may
compute the following data:
Elementary circuit   Circuit product   Circuit length   Circuit mean
(1,1) 5 1 5
(2,2) 6 1 6
(3,3) 7 1 7
(1,2,1) 5 2 5/2
(2,3,2) 2 2 1
(3,1,3) 1 2 1/2
(1,2,3) 6 3 2
(3,2,1) 0 3 0
In the above table, we work in the principal interpretation. Thus for circuit (1,2,3)
the circuit product is:
    (2) ⊗ (4) ⊗ (0) = (2) + (4) + 0 = 6
And to get the corresponding circuit mean we must solve:
    μ ⊗ μ ⊗ μ = 6
i.e. (in conventional arithmetic): 3μ = 6, or μ = 2.
Since we are working with a commutative belt, we have not included in the table any
cyclic permutations of the circuits already recorded.
From the table, we see that λ(A) = 7. It could happen in general, of course, that
λ(A) occurred in association with more than one circuit.
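In the principal interpretation this computation is easy to mechanise for small matrices by direct enumeration of the elementary circuits. A minimal sketch; the matrix is illustrative, not the matrix of Fig 22-1:

```python
import itertools
import math

def elementary_circuits(n):
    # one representative per cyclic class: smallest node first
    for length in range(1, n + 1):
        for start in range(n):
            for rest in itertools.permutations(range(start + 1, n), length - 1):
                yield (start,) + rest

def max_cycle_mean(A):
    # lambda(A): the greatest circuit mean over the elementary circuits of
    # Delta(A); the circuit product x ⊗ y ⊗ ... is x + y + ... here
    best = -math.inf
    for cyc in elementary_circuits(len(A)):
        t = len(cyc)
        product = sum(A[cyc[k]][cyc[(k + 1) % t]] for k in range(t))
        best = max(best, product / t)
    return best

A = [[5, 2, 1], [3, 6, 4], [0, -2, 7]]   # illustrative 3x3 matrix
assert max_cycle_mean(A) == 7            # attained by the loop (3,3)
```

For an n×n matrix this enumerates all elementary circuits (eight of them when n = 3, as in the table above), so it is only practical for small n; the linear-programming method below scales much better.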
Lemma 25-2. Let E₁ be a radicable blog and let A ∈ M_nn, for given integer n ≥ 1. If A
is either row- or column-G-astic then λ(A) is finite.
Proof. From the linearity of E₁, there is an elementary circuit σ in Δ(α⊗A) for which
the maximum is attained in (25-3), i.e.:
    (25-4)
If σ is (i₀, i₁, ..., i_{t-1}, i₀) (of length t) then the circuit product for σ is:
    p(σ) = {α⊗A}_{i₀i₁} ⊗ ... ⊗ {α⊗A}_{i_{t-1}i₀} = α^t ⊗ p(τ)   (25-5)
where τ is the cycle in Δ(A) determined by the same nodes as determine σ, having circuit
product p(τ) (and length t).
From (25-3), λ(A) ≥ circuit mean for τ,
whence (λ(A))^t ≥ (circuit mean for τ)^t
                = p(τ)   (25-6)
Now let τ be a circuit in Δ(A) whose circuit mean attains λ(A). Hence
    (μ(σ))^t = p(σ)   (by definition)
             = α^t ⊗ p(τ)
where σ is the (now not necessarily maximal) circuit in Δ(α⊗A) determined by the same
nodes as determine τ. Hence:
    μ(σ) = α ⊗ λ(A)
whence from (25-3): λ(α⊗A) ≥ α ⊗ λ(A).
Combining this with (25-7) yields the required result.  •
Corollary 25-4. Let E₁ be a commutative linear radicable blog and let A ∈ M_nn for
given integer n ≥ 1. If λ(A) is finite then the matrix (λ(A))^{-1} ⊗ A is definite.
    λ(λ^{-1} ⊗ A) = φ
Hence λ(A) = λ (by Lemma 25-3).
And this shows in particular that λ(A) is finite.  •
Theorem 25-5 shows that the finiteness of λ(A) is a necessary condition for
finite solubility of the eigenproblem for A. It is not in general a sufficient
condition. For example, if:
Hence every 2-tuple in the eigenspace has its second coordinate equal to -∞. The
following result establishes necessary and sufficient conditions for finite solubility.
Theorem 25-6. Let E₁ be a commutative linear radicable blog, and let A ∈ M_nn for
given integer n ≥ 1. Then the eigenproblem for A is finitely soluble if and only if:
λ(A) is finite and F((λ(A))^{-1}⊗A) is doubly G-astic, where F((λ(A))^{-1}⊗A) is any
matrix whose columns form a maximal set of non-equivalent fundamental eigenvectors
for the definite matrix (λ(A))^{-1}⊗A.
Proof. If the eigenproblem for A is finitely soluble, then by Theorem 25-5, λ(A)
is finite, and from the proof of Theorem 24-7 the matrix (λ(A))^{-1}⊗A is directly
similar to a certain row-φ-astic matrix B, and hence Γ((λ(A))^{-1}⊗A) is directly
similar to Γ(B) (Lemma 24-5). Now the matrix F(B), which takes the same columns from
Γ(B) as F((λ(A))^{-1}⊗A) takes from Γ((λ(A))^{-1}⊗A), is, by Lemma 24-6 and Lemma 23-11,
doubly φ-astic, and so certainly doubly G-astic. Since we can derive F((λ(A))^{-1}⊗A) from
F(B) by multiplying matrix elements by finite scalars (by virtue of the direct
similarity), it follows that F((λ(A))^{-1}⊗A) is also doubly G-astic.
    u = F((λ(A))^{-1}⊗A) ⊗ x
(by Corollary 12-7). So u is a finite element of the eigenspace of (λ(A))^{-1}⊗A.
Thus, ((λ(A))^{-1}⊗A) ⊗ u = u   (by Lemma 24-1)
Hence A ⊗ u = λ(A) ⊗ u,
showing that the eigenproblem for A is finitely soluble.  •
The foregoing corollary shows that the eigenproblem is finitely soluble for a
substantial class of matrices of practical importance, namely finite (square)
matrices under the principal interpretation for E₁: for the extended real numbers
form a commutative linear radicable blog, as may easily be verified.
The next corollary identifies another class of matrices, which we shall discuss
again later, for which the eigenproblem is always finitely soluble.
Corollary 25-8. Let E₁ be a commutative linear radicable blog and let A ∈ M_nn,
for given integer n ≥ 1, be row-G-astic. If Δ((λ(A))^{-1}⊗A) has n eigennodes (not
necessarily non-equivalent) then A is doubly G-astic and the eigenproblem for A is
finitely soluble.
Proof. Since A is row-G-astic, evidently λ(A) is finite by Lemma 25-2 and so
(λ(A))^{-1}⊗A is definite by Corollary 25-4. Hence the term 'eigennodes' is permissible
in connection with Δ((λ(A))^{-1}⊗A). If in fact Δ((λ(A))^{-1}⊗A) has n eigennodes, then for
each index i (i = 1, ..., n) there is an index h (1 ≤ h ≤ n) such that {(λ(A))^{-1}⊗A}_{hi}
contributes to a circuit product equal to φ. Hence (λ(A))^{-1}⊗A has a finite element in each
column and so is doubly G-astic, and then so is any sum of its powers, by Theorem 12-3.
Hence from (22-14), Γ((λ(A))^{-1}⊗A) is doubly G-astic, and by hypothesis all its
columns are (not necessarily non-equivalent) fundamental eigenvectors. Let these
columns be ξ(1), ..., ξ(n), and define u = Σ⊕_{j=1}^{n} ξ(j). Since Γ((λ(A))^{-1}⊗A) is
row-G-astic, u is finite. We have:
Hence ((λ(A))^{-1}⊗A) ⊗ u = u, and so
    A ⊗ u = λ(A) ⊗ u  •
25-4.
The following two sets (if non-empty) are spaces over G, the group of E₁:
If the first of these spaces is non-empty, then the two spaces are identical.
Proof. The verification that the sets are spaces is routine. Evidently, set (i) may
equivalently be characterised as set (iii): the set of finite eigenvectors of
(λ(A))^{-1}⊗A having φ as corresponding eigenvalue. Now, if set (i) is non-empty then A
has a finitely soluble eigenproblem, so F((λ(A))^{-1}⊗A) is doubly G-astic, by Theorem 25-6.
So set (ii) consists of finite n-tuples by Corollary 12-7, and by Lemma 24-1 is a subset
of set (iii) = set (i). Conversely, if b is in set (i) (and so finite) then the equation
F((λ(A))^{-1}⊗A) ⊗ x = b is soluble because, by Theorem 24-9, b lies in the eigenspace of
(λ(A))^{-1}⊗A, which is generated by the columns of F((λ(A))^{-1}⊗A).
But since b is finite and F((λ(A))^{-1}⊗A) doubly G-astic, the equation F((λ(A))^{-1}⊗A)⊗x = b
has a principal solution which is finite, by Theorem 14-4, so b lies in set (ii).  •
We shall refer to set (i) in the above theorem as the finite eigenspace of A;
it yields all finite solutions to the eigenproblem for A. (Notice that the concept
finite eigenspace is applicable to matrices in general, whereas the concept eigenspace
applies only to definite matrices.)
With the aid of the foregoing theorems, together with the results of previous
chapters, we can now lay down a programme for the complete (finite) solution of the
eigenproblem for a given matrix A (for the principal interpretation, say). First we
calculate λ(A), which is a well-defined function of the elements {A}_ij. Then we
evaluate Γ((λ(A))^{-1}⊗A) and select the columns having {Γ}_jj = φ, which are the fundamental
eigenvectors. By pairwise comparisons we can discover and eliminate equivalent
fundamental eigenvectors. If the remaining columns do not form a doubly G-astic matrix,
then the finite eigenproblem for A is insoluble. If they do, then we have the entire
finite eigenspace at our disposal by taking finite linear combinations of these columns.
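A minimal max-plus sketch of this programme on an illustrative 3×3 matrix (for which λ(A) = 7, attained by the loop on node 3; the matrix is not one of the book's examples):

```python
def mp_mul(A, B):
    # max-plus matrix product: {A ⊗ B}_ij = max_k (A_ik + B_kj)
    return [[max(a + b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def gamma(B):
    # Γ(B) = B ⊕ B² ⊕ ... ⊕ Bⁿ, where ⊕ is the elementwise maximum
    n = len(B)
    power, total = B, [row[:] for row in B]
    for _ in range(n - 1):
        power = mp_mul(power, B)
        total = [[max(t, p) for t, p in zip(tr, pr)]
                 for tr, pr in zip(total, power)]
    return total

A = [[5, 2, 1], [3, 6, 4], [0, -2, 7]]      # illustrative; lambda(A) = 7
B = [[a - 7 for a in row] for row in A]     # (lambda(A))^-1 ⊗ A, definite
G = gamma(B)
eigennodes = [j for j in range(3) if G[j][j] == 0]
assert eigennodes == [2]                    # one fundamental eigenvector
xi = [G[i][2] for i in range(3)]            # column 3 of Gamma(B)
assert xi == [-6, -3, 0]
# xi is an eigenvector of A with eigenvalue 7: A ⊗ xi = 7 ⊗ xi
Axi = [max(a + x for a, x in zip(row, xi)) for row in A]
assert Axi == [x + 7 for x in xi]
```

Here only one fundamental eigenvector survives, so the finite eigenspace consists of its finite scalar multiples α ⊗ ξ.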
All but one of these computational tasks are very straightforward to organise, are
easily carried out by hand for matrices of low order, and take only modest amounts of
computer time to execute for larger matrices. The exception is the evaluation of λ(A)
if one proceeds by direct examination of all elementary cycles. The following theorem,
however, makes the task much more manageable.
    Minimise λ
    subject to λ + x_i - x_j ≥ {A}_ij   (25-8)
where the inequality constraint is taken over all pairs i, j for which {A}_ij is finite.
are clearly feasible for the above linear programming problem, and give the objective
function λ the value λ(A).
Now the dual of the above linear programming problem can be written in terms
of real variables w_ij, one for each finite {A}_ij, as:
    Maximise Σ w_ij {A}_ij
    subject to Σ_j w_ij - Σ_j w_ji = 0   (i = 1, ..., n)
               Σ w_ij = 1
               all w_ij ≥ 0   (25-10)
where summations are over all pairs i, j for which {A}_ij is finite.
Let (i₀, i₁, ..., i_{t-1}, i_t) (with i_t = i₀) be a circuit in Δ(A) with circuit mean equal
to λ(A) and length t. Choose values for the w_ij as follows:
    w_{i_{k-1} i_k} = 1/t   (k = 1, ..., t)
    w_ij = 0   otherwise   (25-11)
(Obviously {A}_{i_{k-1} i_k} is finite (k = 1, ..., t) since it contributes to λ(A), so each
of these w_ij corresponds to a finite {A}_ij.)
It is easily confirmed that (25-11) gives a set of values feasible for the dual
problem (25-10), giving its objective function the value:
    (1/t) Σ_{k=1}^{t} {A}_{i_{k-1} i_k} = λ(A)
Hence we have found feasible solutions for a linear program and for its dual, giving
equal values to the two objective functions. So by the theory of linear programming, this
common objective function value is in fact the optimal value for both problems.  •
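The primal constraints λ + x_i − x_j ≥ {A}_ij are difference constraints, so for λ = λ(A) a feasible x can be computed by Bellman–Ford relaxation over the arc weights λ − {A}_ij, which have non-negative total weight around every circuit precisely when λ ≥ λ(A). A minimal sketch on an illustrative matrix with λ(A) = 3; this is not the book's procedure, just a check of primal feasibility at the optimum:

```python
import math

def feasible_potentials(A, lam):
    # Shortest-path potentials from a virtual source: the constraint
    # lam + x_i - x_j >= A[i][j] is exactly x_j <= x_i + (lam - A[i][j]),
    # solvable by Bellman-Ford relaxation whenever no circuit has negative
    # total weight, i.e. whenever lam >= lambda(A).
    n = len(A)
    x = [0.0] * n
    for _ in range(n):                      # n relaxation rounds suffice
        for i in range(n):
            for j in range(n):
                if math.isfinite(A[i][j]):
                    x[j] = min(x[j], x[i] + lam - A[i][j])
    return x

A = [[0, 8], [-2, 1]]      # illustrative; lambda(A) = 3 via circuit (1,2,1)
x = feasible_potentials(A, 3)
assert all(3 + x[i] - x[j] >= A[i][j] for i in range(2) for j in range(2))
```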
Even if the eigenproblem for A is not finitely soluble, the linear program in
Theorem 25-10 will always yield a finite solution if A is row-G-astic, for the following
reason. If A is row-G-astic, then by Lemma 25-2, λ(A) is finite. Now the dual objective
function (25-10) can take at least the value λ(A), as the proof of Theorem 25-10 shows,
but is not unbounded, because the primal problem is always feasible (we take
x_i = 0 (i = 1, ..., n) and λ ≥ max_{i,j} {A}_ij). Hence the linear program in Theorem 25-10
has a solution, with optimal value of λ given by λ ≥ λ(A).
Now, if the linear program yields λ > λ(A), we can always detect this fact, because
(in max-algebra notation) the matrix λ^{-1}⊗A then has all circuit means < φ, which is
easily seen to entail that Γ(λ^{-1}⊗A) has all diagonal elements < φ. Hence we may draw
the flow-diagram of Fig 25-1 so as to cover all contingencies.
However, the marketing department has given delivery promises which entail that the
next following cycles must be completed not later than the times b.
In order not to have perishable finished goods ready too early, the maximum earliness
relative to b must be minimised for these next following cycles, and after that the
regular pattern of activity must ensue.
We calculate as follows:
By linear programming, λ(A) = 3.
Hence (λ(A))^{-1} ⊗ A = B (say)   (25-13)
[Displayed: the 4×4 matrix B and its powers B², B⁴, together with Γ(B).]   (25-14)
All columns of Γ(B) have diagonal element equal to φ, so all are fundamental
eigenvectors. We accept the first column of Γ(B), and by comparison we observe
that columns three and four are scalar multiples of column one and so are equivalent.
(This assertion is the converse of Theorem 23-6 and is easy to prove. For if
{Γ(B)}_hh = {Γ(B)}_kk = φ, and column k is a scalar multiple α of column h, then
{Γ(B)}_kh ⊗ α = φ and φ ⊗ α = {Γ(B)}_hk. So {Γ(B)}_kh ⊗ {Γ(B)}_hk = φ, showing, by
linearity of E₁, the existence of a circuit passing through both nodes h and k
having circuit product φ.)
[Displayed: the vectors c and d, computed from the accepted columns of Γ(B).]   (25-15)
Since A ⊗ d = λ(A) ⊗ d = c, the 4-tuple d gives the required start-times for the next
following cycle.
Fig 25-1
[Flow diagram: A ∈ M_nn is given; if λ(A) is not finite, the eigenproblem is not finitely soluble; otherwise λ(A) is found by linear programming and the columns of Γ((λ(A))^{-1}⊗A) are examined.]
26. SPECTRAL INEQUALITIES
26-1. Preliminary Inequalities. One of the more famous results of conventional linear
algebra is the spectral theorem, which relates a self-conjugate linear operator A to a
sum of projection operators associated with the eigenvectors of A. In minimax algebra
we can, somewhat analogously, prove certain spectral inequalities, for which in fact
an assumption of self-conjugacy is not necessary.
We shall develop these spectral inequalities from a set of more general inequalities,
which we prove after the following introductory remarks.
There are eight different formal products which we can derive by inserting one ⊗
symbol and one ⊗' symbol in the triple product A A* x and then bracketing to give a
well-formed formula. The associative law reduces the number of algebraically distinct
products to six, of which two particular ones are the projections:
Theorem 26-1. Let E₁ be a pre-residuated belt satisfying axiom X₁₂, let A ∈ M_mn,
x, y ∈ E_n and b ∈ E_m for given integers m, n ≥ 1. Let g = A ⊗ x and h = A ⊗' y. Consider the
following sequence of relations:
[sequence of relations not reproduced]
Then all inequalities to the right of the dotted line are valid. Moreover, if E₁ is
a blog and x, y are finite then the relations to the left of the dotted line are also
valid.
Proof. We consider the second row of these inequalities, starting from the right. The
first inequality follows from Lemma 8-3 and the second follows from Theorem 6-3 (axiom
X₁₂ for matrices). The next follows from Lemma 20-1, noting that A ⊗ (A* ⊗' b) ≤ b from
Theorem 8-8.
Corollary 26-2. Let E₁ be a pre-residuated belt satisfying axiom X₁₂, let A ∈ M_mn,
x, y ∈ E_n and b ∈ E_m for given integers m, n ≥ 1. Let the columns of A be a(1), ...,
a(n) ∈ E_m in that order. Let g = A ⊗ x and h = A ⊗' y. Consider the following sequence of
relations:
[Display: the sequence of relations, including the equality π_A = Σ⊕_j π_{a(j)}.]
Then all relations to the right of the dotted line are valid. Moreover, if E₁ is a blog
and x, y are finite then the inequalities to the left of the dotted line are also valid.
Proof. In the second row of given relations, everything follows by notational change
in Theorem 26-1 except for the equality π_A = Σ⊕_j π_{a(j)}, which is proved as follows.
For arbitrary b ∈ E_m and for i = 1, ..., m we have:
    {π_A(b)}_i = ... = {Σ⊕_{j=1}^{n} π_{a(j)}(b)}_i  •
Theorem 26-3. Let E₁ be a pre-residuated belt satisfying axiom X₁₂, and let A ∈ M_nn
for given integer n ≥ 1. If ξ(j_k) ∈ E_n (k = 1, ..., s) are eigenvectors of A with
    = λ ⊗ (x ⊗ (x* ⊗' b))
Hence   (26-1)
If we write (26-1) with x = ξ(j_k) and close with respect to k, we have the
required result.  •
Theorem 8-5 furnishes an example of the foregoing result. For since (A ⊗' A*) ⊗ A = A,
each column a(j) (j = 1, ..., n) of A is an eigenvector of A ⊗' A* with corresponding
eigenvalue φ. Hence by Theorem 26-3:
    π_{A⊗'A*} ≥ Σ⊕_{j=1}^{n} π_{a(j)}
a result already contained in Corollary 26-2.
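In the principal interpretation the conjugate A* is the negated transpose, ⊗ is the max-plus product and ⊗' its min-plus dual, so the defining inequality of the projection, A ⊗ (A* ⊗' b) ≤ b, can be checked numerically. A minimal sketch; the matrix and vector are illustrative:

```python
def mp_max(A, B):   # ⊗ : max-plus product
    return [[max(a + b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def mp_min(A, B):   # ⊗' : min-plus (dual) product
    return [[min(a + b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def conj(A):        # A* : negated transpose
    return [[-A[i][j] for i in range(len(A))] for j in range(len(A[0]))]

A = [[5, 2, 1], [3, 6, 4], [0, -2, 7]]     # illustrative
b = [[1], [4], [0]]                        # illustrative column vector
x = mp_min(conj(A), b)                     # principal solution of A ⊗ x <= b
Ab = mp_max(A, x)                          # the projection A ⊗ (A* ⊗' b)
assert all(Ab[i][0] <= b[i][0] for i in range(3))
```

The projection never overshoots b, and it reproduces b exactly when b lies in the column space of A.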
Suppose now with the notation of Theorem 26-3 that we have another eigenvector
ξ(j_{s+1}) ∈ E_n of A with corresponding eigenvalue λ_{s+1}. Evidently Theorem 26-3 implies:
    π_A ≥ Σ⊕_{k=1}^{s+1} (λ_k ⊗ π_{ξ(j_k)})
Hence on taking this extra eigenvector ξ(j_{s+1}) we may get a "better" spectral
inequality. However, the following result shows that under particular circumstances
a certain set of eigenvectors gives a "best" spectral inequality which cannot be
improved by taking yet another eigenvector into consideration.
given integer n ≥ 1. If ξ(j_k) ∈ E_n (k = 1, ..., s) are eigenvectors of A with a common
corresponding eigenvalue λ ∈ E₁, and x ∈ E_n lies in the space generated by ξ(j₁), ...,
ξ(j_s). In particular, this holds if E₁ is a linear blog, x and λ are finite, and ξ(j₁), ...,
ξ(j_s) are a maximal set of non-equivalent fundamental eigenvectors of λ^{-1} ⊗ A.
•
26-3. The Other Eigenproblems. Let us now agree that the title the eigenproblem
for A (problem (23-1)) is an abbreviation of the right eigenproblem for A. Then we
may define the (right) dual eigenproblem, the left eigenproblem and the left dual
eigenproblem for A ∈ M_nn as the following three problems respectively:
We shall refer to (23-1), (26-2), (26-3) and (26-4) for short as the four
eigenproblems for A.
We may also define left and right eigenproblems and dual eigenproblems for A*,
but this leads to nothing new since e.g. the left dual eigenproblem for A* is
essentially just the right eigenproblem for A. We shall call the solutions of (26-2), (26-3)
and (26-4) a (right) dual eigenvector, a left eigenvector and a left dual eigenvector
respectively of A, with corresponding eigenvalues μ, ν and θ respectively. The epithet
'right' will be suppressed unless needed for emphasis. Finite solubility of an
eigenproblem means solubility with both eigenvector and corresponding eigenvalue finite.
It is evident that appropriate left and dual variants of the material of Chapters
21 to 25 may be established, as in the following propositions.
Proposition 26-5. Let E₁ be a blog and let B ∈ M_nn, for given integer n ≥ 1, be definite.
If j is an eigennode in Δ(B) then r(j) ⊗ B = r(j), where r(j) is the jth row of Γ(B). If
k is an eigennode in Δ(B) equivalent to j, and r(k) is the kth row of Γ(B), then r(j),
r(k) are finite scalar multiples of one another. If E₁ is actually a linear blog and
B is φ-astic definite and j₁, ..., j_s are pairwise non-equivalent eigennodes in Δ(B),
and r(j₁), ..., r(j_s) are the corresponding rows of Γ(B), then no one of r(j₁), ...,
r(j_s) is left linearly dependent on the others. Finally, if E₁ is a linear blog and B
is φ-astic definite and r is a finite left eigenvector of B with corresponding
eigenvalue φ, then r lies in the (left) space generated by the rows of the matrix
whose rows 1, ..., s are rows j₁, ..., j_s respectively of Γ(B), where j₁, ..., j_s
constitute a maximal set of non-equivalent eigennodes in Δ(B).
•
Δ(A ⊗ λ^{-1}). The rows of Γ(A ⊗ λ^{-1}) have the property that no one of them is left linearly
dependent on the others.
We shall call the unique scalar λ (when it exists) in Proposition 26-6 the
principal left eigenvalue of A. The notation Γ(A ⊗ λ^{-1}), with the significance
explained in Proposition 26-6, will be standard throughout the sequel.
We observe that the Γ(B) mentioned in Proposition 26-5 is the same as the Γ(B)
in the corresponding results of Chapters 23 and 24, showing that the left and right
eigenproblems for B are by no means unrelated.
In fact the notions of circuit, Δ(A) and Γ(A) are essentially row-column symmetric,
and so therefore are the notions definite, φ-astic definite, λ(A), eigennode and so
on.
This relationship between the left and right eigenproblems is brought out in the
following result and its corollary.
Lemma 26-7. Let E₁ be a linear blog and let A ∈ M_nn for given integer n ≥ 1. If
α, β ∈ E₁ are such that α ⊗ A and A ⊗ β are both definite then α = β (and both are finite).
Proof. Since both α and β are contained as factors in some circuit product equal to
φ, both are finite. Now, for i, j = 1, ..., n:
    {A ⊗ β}_ij = β^{-1} ⊗ {β ⊗ A}_ij ⊗ β
Hence A ⊗ β ≈ β ⊗ A and so, by Lemma 24-6, β ⊗ A is definite. But so is α ⊗ A. Hence, by
Lemma 24-8, α = β.  •
Corollary 26-8. Let E₁ be a linear blog and let A ∈ M_nn for given integer n ≥ 1. Then
the principal eigenvalue of A and the principal left eigenvalue of A, if they both
exist, are equal.
If A is a matrix for which both the eigenproblem and the left eigenproblem are
finitely soluble, then both λ^{-1} ⊗ A and A ⊗ λ^{-1} are definite, and in fact λ^{-1} ⊗ A ≈ A ⊗ λ^{-1},
so Γ(λ^{-1} ⊗ A) ≈ Γ(A ⊗ λ^{-1}), and any linear dependencies among the rows (or columns) of
Γ(λ^{-1} ⊗ A) are duplicated among the rows (or columns) of Γ(A ⊗ λ^{-1}), and vice versa, by
Lemma 24-7. Moreover, if we assume (in the terminology of Section 23-2) that λ^{-1} ⊗ A
is blocked then, by Lemma 24-6, A ⊗ λ^{-1} is also blocked in the same way.
An example will make this clear. Taking A as in (25-12) we obtain the definite
matrix B = λ^{-1} ⊗ A as in (25-13), and then Γ(B) as in (25-14). We have already observed
that columns three and four of Γ(B) are scalar multiples of column one but that
columns two and one are not (right) linearly dependent. We can now confirm that
exactly the same pattern of dependencies holds among the rows of Γ(B), namely that
rows three and four are scalar multiples of row one but that rows two and one are
not (left) linearly dependent.
The same would hold if we took B instead as A ⊗ λ^{-1}, but this would lead to nothing
new in the present example, since scalar multiplication is commutative in the principal
interpretation.
In view of Corollary 26-8, we shall simply use the expression principal eigenvalue
when discussing either the left or the right eigenproblem for a matrix over a linear
blog.
26-4. More Spectral Inequalities. The spectral inequality of Theorem 26-3 related
the operator π_A to the projection operators associated with the eigenvectors of A, and
followed from the identities of Chapter 8. Now if we assume that the eigenproblem
for A is finitely soluble then we can obtain other spectral inequalities by quite
different arguments. These inequalities relate the matrix A to the projection
matrices associated with the eigenvectors of A, although we can recast them in
operator form.
It is convenient to introduce the following notation:
    {A}_ij ≤ {B}_ij (i = 1, ..., m; j = 1, ..., n), but for each i = 1, ..., m
    there is at least one j (1 ≤ j ≤ n) such that {A}_ij = {B}_ij   (26-5)
We can paraphrase (26-5) by "A ≤ B with equality at least once per row", written
A « B. We shall equivalently say that the inequality is row-tight.   (26-6)
By interchanging the roles of i and j in (26-5) we can in the obvious way define a relation
«', of column-tight inequality, i.e. A «' B if and only if A ≤ B with equality at least
once per column. Extension of the notation to operators follows evidently.
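The row-tight and column-tight relations are easy to state as predicates. A minimal sketch with illustrative matrices:

```python
def leq_rowtight(A, B):
    # A << B : {A}_ij <= {B}_ij everywhere, with equality at least once per row
    return all(all(a <= b for a, b in zip(ra, rb)) and
               any(a == b for a, b in zip(ra, rb))
               for ra, rb in zip(A, B))

def leq_coltight(A, B):
    # A <<' B : the same with the roles of rows and columns interchanged
    cols = lambda M: list(zip(*M))
    return leq_rowtight(cols(A), cols(B))

A = [[0, 3], [1, 2]]
B = [[0, 4], [1, 5]]
assert leq_rowtight(A, B)        # equality once in each row (first column)
assert not leq_coltight(A, B)    # the second column has no equality
```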
We now prove some spectral inequalities, for which we recall the notations P(·), Q(·)
for projection matrices, introduced in Chapter 21.
Theorem 26-9. Let E₁ be a linear blog and let A ∈ M_nn for given integer n ≥ 1. If the
eigenproblem for A is finitely soluble, and ξ(j_k) (k = 1, ..., s) are any finite
eigenvectors of A, then:
    A « λ ⊗' Σ⊕'_{k=1}^{s} P(ξ(j_k))   (26-7)
And by the linearity of E₁, equality holds in (26-8) (and so in (26-9)) for at least
one index j per i = 1, ..., n (the attained maximum). In other words, (26-9) may
be written
    (26-10)
    (26-11)
If we write (26-11) with ξ = ξ(j_k) and close with respect to index k, we have the
required result.  •
Theorem 26-10. Let E₁ be a linear blog and let A ∈ M_nn for given integer n ≥ 1. If the
eigenproblem for A is finitely soluble and ξ(j_k) (k = 1, ..., s) are any finite
eigenvectors of A, then:
    (26-12)
Proof. From (26-10) we have, for arbitrary x ∈ E_n and arbitrary finite eigenvector ξ:
    (26-13)
    (26-14)
If we write (26-14) with ξ = ξ(j_k) and close with respect to index k, we have the required
result.  •
It is evident that versions of the preceding inequalities hold for all four
eigenproblems for A. In particular, there will be a principal dual eigenvalue λ' for A, by
obvious analogy with λ.
We define   (26-16)
where the summation is taken over all elementary cycles of Δ(A). This notation will
be standard in the sequel.
Intuitively, λ'(A) is the least circuit (dual) mean in Δ(A), just as λ(A) is the
greatest circuit mean. Thus, if A is the matrix associated with the graph of
Fig 22-1, we infer from the data in the table in Section 25-1 that λ'(A) = 5/2.
•
for given integer n ≥ 1. If the left or right dual eigenproblem for A is finitely soluble, then:
Proof. The finite solubility of all four eigenproblems for A (and for A*) follows from
Corollary 25-7 and its variants. And since z ⊗ A = λ(A) ⊗ z for some finite left eigenvector
z, we have A* ⊗' z* = (λ(A))^{-1} ⊗' z*, so (λ(A))^{-1} (i.e. (λ(A))*) must be λ'(A*) by uniqueness
of the principal dual eigenvalue for A*. Similarly λ(A*) = (λ'(A))^{-1}.
Now if ξ ∈ D((λ(A))^{-1}⊗A) then by definition there is an eigennode j in Δ((λ(A))^{-1}⊗A)
such that either ξ is column j of Γ((λ(A))^{-1}⊗A) or ξ* is row j of Γ((λ(A))^{-1}⊗A). In
the first case, ξ is an eigenvector of A which is finite because Γ((λ(A))^{-1}⊗A) is finite.
By (26-11):
    A « λ(A) ⊗' P(ξ)   (26-18)
In the second case, ξ is a finite left eigenvector of A by Proposition 26-5, and
the appropriate variant of the proof of Theorem 26-9 leads to:
    A «' P(ξ) ⊗ λ(A) = λ(A) ⊗' P(ξ)   (26-19)
(since E₁ is commutative and λ(A) is finite).
Combining (26-18) and (26-19) and closing with respect to the index j:
    A ≤ λ(A) ⊗' Σ⊕'_ξ P(ξ)   (26-20)
And since (26-18) is row-tight and (26-19) is column-tight, evidently (26-20) is both
row- and column-tight. Summation is over D((λ(A))^{-1}⊗A).
Now relation (26-20) for the matrix A* is:
    (26-22)
Now P(ξ*) = (P(ξ))* by Theorem 21-8 and (λ(A*))* = λ'(A). Hence, using the
commutativity of E₁:
    (26-23)
where summation is over D((λ(A*))^{-1}⊗A*) and the inequality is both row-tight and
column-tight. Evidently (26-20) and (26-23) yield the required conclusion.  •
We remark that inequality (26-17) cannot be improved by using further eigenvectors
belonging to finite solutions of any of the four eigenproblems for A, because of
Theorem 26-4.
Evidently Theorem 26-12 covers an important set of cases for the principal
interpretation, and we illustrate the theorem with the following example.
We obtain Γ((λ(A))^{-1} ⊗ A) [matrix not reproduced].
There are two non-equivalent eigennodes, namely 2 and 3. Hence we construct four
projection matrices from column 2, column 3, row 2 and row 3 of this matrix [displays not reproduced].
So we confirm that A ≤ λ(A) ⊗' Q (the dual sum of these projection matrices), the
inequality being both row-tight and column-tight.
Now A* and λ(A*) = (λ'(A))^{-1} are formed [displays not reproduced]. We obtain
Γ((λ(A*))^{-1} ⊗ A*) [matrix not reproduced].
There are two eigennodes, but we observe that column three is obtained by subtracting 2
from column one. Hence there is just one non-equivalent eigennode, namely 1. We
construct two projection matrices from column 1 and row 1 [displays, together with
λ'(A) ⊗ Q, not reproduced].
So we confirm that λ'(A) ⊗ Q ≤ A, the inequality being both row-tight and column-tight.
27. THE ORBIT