

Lecture Notes
in Economics and
Mathematical Systems

Managing Editors: M. Beckmann, Providence, and H. P. Künzi, Zürich

Operations Research

95

M. Zeleny

Linear Multiobjective
Programming

Springer-Verlag
Berlin · Heidelberg · New York

Editorial Board
H. Albach · A. V. Balakrishnan · P. Dhrymes · J. Green · W. Hildenbrand
W. Krelle · K. Ritter · R. Sato · P. Schönfeld

Dr. Milan Zeleny
Graduate School of Business
Uris Hall
Columbia University
New York, NY 10027/USA

AMS Subject Classifications (1970): 90-04, 90A05, 90C05, 62C25

ISBN-13: 978-3-540-06639-2    e-ISBN-13: 978-3-642-80808-1

DOI: 10.1007/978-3-642-80808-1

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned,
specifically those of translation, reprinting, re-use of illustrations, broadcasting, reproduction by photocopying machine
or similar means, and storage in data banks.
Under § 54 of the German Copyright Law, where copies are made for other than private use, a fee is payable to the
publisher, the amount of the fee to be determined by agreement with the publisher.
© by Springer-Verlag Berlin · Heidelberg 1974. Library of Congress Catalog Card Number 73-22577.
Softcover reprint of the hardcover 1st edition 1974.
Offsetprinting and bookbinding: Julius Beltz, Hemsbach/Bergstr.
Acknowledgment

Parts of this study derive from a thesis defended at the


University of Rochester in 1972. I am immensely indebted to my
advisor, Dr. P. L. Yu, now at the University of Texas, for his
continuous encouragement, patience and reassurance. He has generated
the persistence and faith which have proven to be absolutely essential
for completing the work.

Most of the research and technical efforts were carried out
at the University of South Carolina, which provided the computing
facilities. My thanks go to Dr. J. Eatman for his programming
assistance.
I dedicate the work to my wife, Dr. Betka Zeleny, who helped
to transform countless discouragements into strong motivational
impulses.

Milan Zeleny
Columbia University,
New York, Autumn 1972
ABSTRACT

One of the more persistent criticisms of current decision-making
theory and practice is directed against the traditional approximation
of the multiple goal behavior of men and organizations by a single,
technically convenient criterion.

This criticism extends also to traditional decision-making tools
like mathematical programming, game theory, etc., where a single
objective function is often used to approximate essentially
multiobjective situations.

In this research we develop theory and algorithms which may be
applied to linear programming problems involving multiple,
noncommensurable objective functions. Such a multiplicity of
objectives induces the substitution of a single optimal solution
(which can no longer be safely determined) by the whole set of
nondominated solutions.

In Part I of this paper we explore the first method for locating
all nondominated extreme points, which is based on multiparametric
programming. The ℓ-dimensional parametric space, which can be
interpreted as the set providing all possible weighted combinations
of ℓ objectives, is decomposed into a finite number of subspaces.
These subspaces provide the sets of optimal weights imputed to each
objective function by the corresponding extreme point solution. The
decomposition also provides a criterion upon which the decision about
the nondominance of any particular extreme point can be based.

In Part II, A Multicriteria Simplex Method, we introduce a second
method for locating all nondominated extreme points, independent of
any parametric considerations. It represents a simple generalization
of the conventional single-objective simplex method. It appears that
the decomposition of the parametric space can be viewed as a
significant byproduct of this method.

In Part III, Generating All Nondominated Solutions, we develop a
technique for generating all nondominated solutions from a given set
of nondominated extreme points. In multiobjective situations the
superiority of an extreme point solution over one which is non-extreme
no longer applies.

In Section 5 some important topics indicating future developments
are treated. Alternative approaches, allowing potentially faster and
simpler generation of nondominated extreme points, are discussed.
Problems of nonlinearity and "nondominance gaps" are analyzed and
their general resolution suggested. The important problem of choosing
the final solution among a possibly large number of nondominated
solutions is analyzed extensively.

The appendices contain a short note on redundancy among linear
constraints, a printout of the FORTRAN code for the multicriteria
simplex method, and some examples of outputs.
Table of Contents

1. Introduction 1
1.1. The Origin of the Multiobjective Problem and a Short Historical Review 1
1.2. Linear Multiobjective Programming 4
1.3. Comment on Notation 5

LINEAR MULTIOBJECTIVE PROGRAMMING I.

2. Basic Theory and Decomposition of the Parametric Space 8
2.1. Basic Theory - Linear Case 10
2.2. Reduction of the Dimensionality of the Parametric Space 14
2.3. Decomposition of the Parametric Space as a Method to Find Nondominated Extreme Points of X 15
2.4. Algorithmic Possibilities 29
2.5. Discussion of Difficulties Connected with the Decomposition Method 40
2.5.1. Some Numerical Examples of the Difficulties 42

LINEAR MULTIOBJECTIVE PROGRAMMING II.

3. Finding Nondominated Extreme Points - A Second Approach (Multicriteria Simplex Method) 63
3.1. Basic Theorems 63
3.2. Methods for Generating Adjacent Extreme Points 80
3.3. Computerized Procedure - An Example 93
3.4. Computer Analysis 114

LINEAR MULTIOBJECTIVE PROGRAMMING III.

4. A Method for Generating All Nondominated Solutions of X 122
4.1. Some Basic Theorems on Properties of N 122
4.2. An Algorithm for Generating N from Known N_ex 126
4.3. Numerical Examples 136
4.3.1. An Example of Matrix Reduction 136
4.3.2. An Example of Nondominance Subroutine 141

5. Additional Topics and Extensions 146
5.1. Alternative Approach to Finding N_ex 146
5.1.1. The Concept of Cutting Hyperplane 149
5.1.2. Nondominance in Lower Dimensions 158
5.2. Some Notes on Nonlinearity 162
5.3. A Selection of the Final Solution 167
5.3.1. Direct Assessment of Weights 169
5.3.2. The Ideal Solution 171
5.3.3. Entropy as a Measure of Importance 176
5.3.4. A Method of Displaced Ideal 180

Bibliography 183

Appendix 186
A.1. A Note on Elimination of Redundant Constraints 187
A.2. Examples of Output Printouts 197
A.3. The Program Description and FORTRAN Printout 202
List of Figures

2.1.1. Relation Between Λ≥ and Λ̄ 12
2.3.1. Two-Dimensional Demonstration of Theorem 2.3.6. 26
2.3.2. Three-Dimensional Demonstration of Basic Decomposition Theory 26
2.4.1. Connectedness of a Λ̄ Space 31
2.4.2. Problem of Adjacent Polyhedra 32
2.4.3. Relation Between Faces and Extreme Points 34
2.4.4. Block Diagram for STRATEGY I 35
2.4.5. Block Diagram for STRATEGY II 38
2.5.1. Problem of Degeneracy 41
2.5.2. Problem of Alternative Solutions 42
2.5.1.1. Decomposition of Λ̄ for Example 2.5.1. 59
2.5.1.2. Problem of Empty Decomposition Polyhedron 49
2.5.1.3. Demonstration of Remark 2.3.12. 54
2.5.1.4. Redundant Constraint Leads to Nondominated Solution 62
3.1.1. Construction of Λ̄(x¹) 78
3.2.1. Block Diagram of Approach A 85
3.2.2. Block Diagram of Approach B 88
3.2.3. Graph of Numerical Example 90
3.3.1. Block Diagram of Nondominance Subroutine 96
3.3.2. Block Diagram of Multicriteria Simplex Method 99
4.2.1. Block Diagram of Matrix Reduction 134
4.3.1. Situation for Example 4.3.1. 136
5.1.1. Noncomparability and Nondominance 147
5.1.2. Geometry of Cutting Hyperplane 150
5.1.3. Cutting Hyperplane in Two Dimensions 151, 153
5.1.4. Degenerate Image of θ[X] 155
5.1.5. Multiple Cutting Hyperplanes 157
5.2.1. A Gap 162
5.2.2. Resolution of a "Gap Problem" 163
5.3.1. Fuzzy Reduction of Λ 170
5.3.2. A Compromise Set 174
5.3.3. A Method of Displaced Ideal 182
A1.1. Redundant Constraints Classification 190
A1.2. Dominant Constraints Classification 193


1. Introduction

1.1. The Origin of the Multiobjective Problem and a Short Historical Review

The continuing search for a discovery of theories, tools and concepts
applicable to decision-making processes has increased the complexity of
problems eligible for analytical treatment. One of the more pertinent
criticisms of current decision-making theory and practice is directed
against the traditional approximation of the multiple goal behavior of
men and organizations by a single, technically convenient criterion.
Reinstatement of the role of human judgment in more realistic, multiple
goal settings has been one of the major recent developments in the
literature.

Consider the following simplified problem. There is a large number
of people to be transported daily between two industrial areas and their
adjacent residential areas. Given some budgetary and technological con-
straints we would like to determine optimal transportation modes as well
as the number of units of each to be scheduled for service. What is the
optimal solution? Are we interested in the cheapest transportation? Do
we want the fastest, the safest, the cleanest, the most profitable, the
most durable? There are many criteria which are to be considered:
travel times, consumer's cost, construction cost, operating cost, expected
fatalities and injuries, probability of delays, etc.

It might be an impossible task to tie all these criteria into a
single unifying trade-off function which could serve as the objective
function for the associated mathematical programming problem. It may
be better for us to look at such a problem as one with multiple
objectives which are all to be at their "best" possible values under
the given conditions.

Given a vector-valued objective function θ(x) = (θ¹(x), ..., θ^ℓ(x))
and a set of feasible solutions X ⊆ Eⁿ, the vector maximum problem

v-Max θ(x)  subject to  x ∈ X

is the problem of finding all solutions that are nondominated. Instead
of a single optimal solution we seek a set of "nondominated" solutions
(efficient set, admissible set, Pareto-optimal set). We prefer the
term "nondominated" because of its unambiguity.

The main property of the set of nondominated solutions is that for
every solution outside the set we can find a nondominated solution at
which all objective functions are unchanged or improved and at least
one is strictly improved.
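For a finite set of criterion vectors, this defining property can be checked directly. The following minimal sketch (illustrative data only; both criteria are maximized, and the function names are ours, not the author's) filters a list of alternatives down to its nondominated subset:

```python
def dominates(a, b):
    """a dominates b: a is componentwise >= b and a != b (maximization)."""
    return all(ai >= bi for ai, bi in zip(a, b)) and a != b

def nondominated(points):
    """Return the nondominated subset of a finite list of criterion vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Five hypothetical alternatives scored on (speed, safety); both maximized.
alternatives = [(3, 1), (2, 2), (1, 3), (2, 1), (1, 1)]
print(nondominated(alternatives))   # -> [(3, 1), (2, 2), (1, 3)]
```

Here (2, 1) and (1, 1) are discarded because (3, 1) improves at least one criterion without worsening any other.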

It might be worthwhile to support these statements with two quota-
tions related to our subject:

Howard Raiffa 1): "Personally I feel that this quest for a "scientific"
and "mathematically objective" rule [decision criterion,
objective function, M.Z.] is all wrong! ...; we should
limit formal analysis to the characterization and determi-
nation of the efficient set [set of nondominated solutions,
M.Z.] and let unaided, intuitive judgment take over from
there."

1) Raiffa, Howard: "Decision Analysis", Addison-Wesley, 1970, pp. 155-156.



John von Neumann 2): "... This [multiple objective situation, M.Z.]
is certainly no maximum problem, but a peculiar and discon-
certing mixture of several conflicting maximum problems. ...
This kind of problem is nowhere dealt with in classical
mathematics. We emphasize at the risk of being pedantic
that this is no conditional maximum problem, no problem of
the calculus of variations, of functional analysis, etc. It
arises in full clarity, even in the most "elementary" situa-
tions, e.g., when all variables can assume only a finite
number of values."

2) von Neumann, J., and Morgenstern, O.: "Theory of Games and Economic
Behavior", 3rd ed., Princeton University Press, Princeton, 1953,
pp. 10-11.

The problem of the formation of a single optimality criterion from
a number of essentially noncomparable elementary criteria first appeared
in [Pareto, 1896].

The concept of "Pareto optimality" (here nondominated solutions)
found its way into operations research in the pioneering work of
[Koopmans, 1951]. This was, of course, in connection with the activity
analysis of production and allocation. A more general approach, viewed
as a vector function maximization problem of mathematical programming,
can be found in [Kuhn, Tucker, 1951]. We should also mention the work
of [Markowitz, 1956], who applied the concept of a nondominated set in
portfolio selection problems.

After a decade of "nontechnical" discussions of multiple versus
single objective functions on the pages of Operations Research
[represented, for example, by Klahr, 1958], the direct extensions of
Koopmans' ideas

appeared in [Charnes, Cooper, 1961]. Here we can find the first algorithmic

approach, the "spiral method", still heavily influenced by the activity

analysis framework.

In recent years the problem of vector function maximization has been

approached from the more general viewpoint of Kuhn and Tucker. Most

authors deal with general optimality conditions. For example, [Zadeh,

1963], [Klinger, 1964], [DaCunha, Polak, 1967], [Geoffrion, 1968].

An algorithm for maximizing two objective functions via parametric

linear programming is presented in [Geoffrion, 1967].


1.2. Linear Multiobjective Programming

In this work we try to extend the theory of vector function

maximization, especially in the direction of algorithmic developments.

We will concentrate our efforts on linear structures only. The reasons

are essentially twofold. First, we have found that the linear case is

sufficiently complex to merit concentrated attention. Though the

interest in the linear case has increased substantially in recent

years, satisfactory algorithms have not been published yet. Also, the

possibility of utilizing the existing concepts of the simplex method

and parametric linear programming is very attractive and, from a prac-

tical viewpoint, most applicable.

Second, nonlinear cases are incomparably more difficult to solve.

All difficulties of the nonlinear programming theory are compounded here

together with complex characterization of the nondominated set in non-

linear cases. Before the nonlinear case may be approached efficiently,


more experience is needed, not only with multiobjective programming but
also with interpretations of nondominated solutions. It is the
persuasion of the author that the complete nondominated set will not be
of primary interest from the practical viewpoint. A concept of "repre-
sentative" nondominated solutions might be technically, as well as
practically, more attractive.

1.3. Comment on Notation

It is important to distinguish the following notation: Let x ∈ Eⁿ
and y ∈ Eⁿ, where x = (x₁,...,xₙ) and y = (y₁,...,yₙ). Then

(i) x > y if and only if x_j > y_j, j = 1,...,n;

(ii) x ≥ y if and only if x_j ≥ y_j, j = 1,...,n, and x ≠ y;

(iii) x ≧ y if and only if x_j ≧ y_j, j = 1,...,n;

(iv) x ∼ y if and only if x ≱ y and y ≱ x;

(v) x = y if and only if x_j = y_j, j = 1,...,n.
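The componentwise orderings (i)-(iii) translate directly into predicates. A minimal sketch, with tuples standing in for vectors of Eⁿ and our own (hypothetical) function names:

```python
# Predicates for the vector orderings (i)-(iii); tuples stand in for E^n.
def gt(x, y):    # (i)   x > y : strict inequality in every component
    return all(a > b for a, b in zip(x, y))

def geq(x, y):   # (iii) x >= y in every component (the relation written ≧)
    return all(a >= b for a, b in zip(x, y))

def semi(x, y):  # (ii)  x ≥ y : componentwise >= together with x != y
    return geq(x, y) and x != y

x, y = (2, 3), (1, 3)
print(gt(x, y), geq(x, y), semi(x, y))   # -> False True True
```

The example shows why three relations are needed: (2, 3) is not strictly greater than (1, 3) in every component, yet it dominates it in the sense of (ii).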
We shall denote a set by a capital character and use superscripts
to indicate the index of a set of vectors. For example, a set of k
vectors will be denoted as

X = {x¹, x², ..., x^k},

where x^i ∈ Eⁿ, i = 1,...,k.

Also, given {x¹, x², ..., x^k} we shall define the convex hull of
x¹,...,x^k as

[x¹, x², ..., x^k] = {x | x = Σ_{i=1}^k λ_i x^i; λ_i ≥ 0; Σ_{i=1}^k λ_i = 1},

or, in abbreviated form, as C[X] or C[x¹,...,x^k].

Similarly,

[x¹, x^i] = {x¹ + λ(x^i − x¹); 0 ≤ λ ≤ 1}

indicates a closed line segment or a convex combination of x¹ and x^i.
An open line segment will be denoted as (x¹, x^i) = {x¹ + λ(x^i − x¹);
0 < λ < 1}.

Given a set X, its interior will be denoted by Int X and its
boundary by ∂X. We will need to distinguish between the interior points
and interior points with respect to the relative topology. A fine dis-
cussion of these concepts is in [Stoer, Witzgall, 1970]. Considering a


convex polyhedron X ⊆ Eⁿ, a point is called a relative interior point
of X if it is an interior point of X with respect to the relative
topology induced by the minimal manifold containing X, i.e., with
respect to the relative topology induced in

M(X) = {x | x = Σ_i λ_i x^i; x^i ∈ X; Σ_i λ_i = 1},

the manifold generated by X. We will denote the relative interior of
X as X^I.

For simplicity, we shall often denote a linear combination in
vector notation as follows:

λ·θ(x).

Other more specialized notations will be introduced in the text when
needed.
LINEAR MULTIOBJECTIVE PROGRAMMING I.

2. Basic Theory and Decomposition of the Parametric Space

One of the theoretical approaches which appears repeatedly in the
literature is based on reduction of a vector-valued objective function
to a family of scalar-valued objective functions [see, e.g., Geoffrion,
1968, and DaCunha, Polak, 1967].

The approach works quite well for bicriterial cases [see Geoffrion,
1967]. However, its multicriterial extension has not been successfully
analyzed for an algorithmic development.

In this section we shall explore multiobjective linear programming
problems through multiparametric programs.

Let us denote a set of ℓ (possibly incommensurate) objective func-
tions as

θ(x) = (θ¹(x), ..., θ^ℓ(x)),

where θ(x) is a vector-valued objective function, x ∈ Eⁿ is the decision
variable, and let X ⊂ Eⁿ be the set of all feasible solutions.

The vector maximum problem

v-Max θ(x)  subject to  x ∈ X   (2-1)

is the problem of finding all points x̄ ∈ X which are nondominated,
that is, for which there exists no other x ∈ X such that θ(x) ≧ θ(x̄)
and θ(x) ≠ θ(x̄).

We may denote the set of all nondominated solutions as N and define
V = X − N as the set of all dominated solutions. It is seen that

x̄ ∈ N ⟺ (θ(x) ≧ θ(x̄) ⟹ θ(x) = θ(x̄), for all x ∈ X).   (2-2)

Definition 2.1. Let Λ be the set of all vectors λ defined as follows:

Λ = {λ | λ ∈ E^ℓ; λ_i ≥ 0, i = 1,...,ℓ; Σ_{i=1}^ℓ λ_i = 1}.   (2-3)

Definition 2.2. Given λ ∈ Λ, let P(λ) denote the following problem:

Max_{x∈X} Σ_{i=1}^ℓ λ_i θ^i(x),

or: find a point x̄ ∈ X such that

λ·θ(x) ≤ λ·θ(x̄) for all x ∈ X.

Definition 2.3. Let

L = {x | x ∈ X, x solves P(λ) for some λ ∈ Λ},

(L) = {x | x ∈ X, x solves P(λ) for some λ ∈ Int Λ}.

Remark 2.4. The conclusion (L) ⊆ N ⊆ L allows one to solve (2-1) by
means of P(λ). It has been proven repeatedly in the literature:

(a) (L) ⊆ N [e.g., Geoffrion, 1968, DaCunha, Polak, 1967].

(b) θ[X] convex ⟹ N ⊆ L [Arrow, Barankin, Blackwell, 1953].

(c) X closed and convex, θ(x) concave for all x ∈ X, one component
of θ(x) strictly concave for x ∈ X, then N ⊆ L [DaCunha, Polak,
1967].
The conclusion of Remark 2.4. is explored and derived in complete
generality by [Yu, 1971, 1972]. Some of his results are discussed in
the next section, 2.1. Notice that θ[X] = {θ(x) | x ∈ X} is the image
of X under θ. Observe that when X is a polyhedron and θ is linear, then
θ[X] is also a polyhedron. Thus (L) ⊆ N ⊆ L. By solving P(λ) for all
λ ∈ Λ, we can get a set of solutions containing the entire set N.

Remark 2.5. N may not be equal to L. This can be easily resolved by
considering the following lemma.

Lemma 2.6. If for some λ ∈ Λ, x̄ is the unique solution of P(λ), then
x̄ ∈ N ∩ L.

Proof. Suppose x̄ ∈ V. Then there exists x ∈ X such that θ(x) ≥ θ(x̄).
Since λ ≧ 0, λ·θ(x) ≥ λ·θ(x̄). Thus x̄ cannot uniquely solve P(λ), a
contradiction. Q.E.D.

The dominated solutions contained in L occur only when there are
multiple alternate optimal solutions of P(λ) for some λ ∈ Λ. By
comparison in the objective space θ[X], the dominated solutions in L
can be discarded.
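As an illustration of solving P(λ) for varying λ: since a linear P(λ) attains its maximum at an extreme point, a toy sketch may simply scan an explicitly listed (hypothetical) set of extreme points with θ(x) = x; in practice one would solve the linear program P(λ) itself. Three sampled λ ∈ Int Λ pick out three different nondominated extreme points:

```python
# Toy sketch: solving P(lambda) over a hypothetical, explicitly listed set
# of extreme points, with theta(x) = x.  Real use would solve the LP P(lambda).
extreme_points = [(0, 0), (4, 0), (3, 3), (0, 4)]

def solve_P(lam):
    """Maximize lam . theta(x) over the listed extreme points."""
    return max(extreme_points, key=lambda p: lam[0]*p[0] + lam[1]*p[1])

for lam in [(0.9, 0.1), (0.5, 0.5), (0.1, 0.9)]:
    print(lam, "->", solve_P(lam))   # (4, 0), (3, 3), (0, 4) in turn
```

The dominated vertex (0, 0) is never returned for any λ ∈ Int Λ, consistent with (L) ⊆ N.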

2.1. Basic Theory - Linear Case

The following is based on some results described in [Yu, 1971, 1972].
In particular, the following theorem will be useful:

Theorem 2.1.1. Let Cx = (c¹·x, ..., c^ℓ·x) be an ℓ-dimensional linear
vector function, let X ⊂ Eⁿ be a convex polyhedron (the set of feasible
solutions), let λ ∈ E^ℓ, and let the set of x ∈ X which maximizes the
function λ·Cx over X be denoted by X*(λ). Then

(a) ∪_{λ>0} X*(λ) ⊆ N ⊆ ∪_{λ≥0} X*(λ);

(b) if C[X*(λ)] is a singleton for every λ ≥ 0 which is not strictly
greater than 0, then N = ∪_{λ≥0} X*(λ).

Remark 2.1.2. Notice C[X] = θ[X] for θ(x) = Cx.

Remark 2.1.3. Let N> = ∪_{λ>0} X*(λ) and N≥ = ∪_{λ≥0} X*(λ). Then the
(a) part of Theorem 2.1.1. is reduced to

N> ⊆ N ⊆ N≥.

Because of the above inclusions, we could call N> and N≥ the inner and
outer approximations of N. Notice that the statements (L) ⊆ N ⊆ L of
Remark 2.4. and N> ⊆ N ⊆ N≥ of this Remark are essentially equivalent.
In N> and N≥ we use the positive cone Λ> and the non-negative (non-zero)
cone Λ≥. In (L) and L we use a hyperplane of Λ> and Λ≥ to represent
them. Under the maximization, both approaches yield the same solution.
The approach using (L) and L allows us to reduce the dimensionality of
Λ by one, as discussed in section 2.2.

The following Figure 2.1.1. should clarify Remark 2.1.3.: any
λ ∈ Λ≥ may be represented in Λ as in the figure; the Λ≥-space itself
may be reduced to the shaded two-dimensional polyhedron.

Figure 2.1.1.
Remark 2.1.4. In general, the nonempty polyhedron X ⊂ Eⁿ is defined by

X = {x | Ax ≦ b},   (2-1-1)

where b is a fixed vector in E^{m+n} and A is a given fixed (m+n)×n
matrix.

Given λ ≥ 0, a point x* ∈ X is a solution of Max_{x∈X} λ·Cx if and
only if there exists a multiplier μ ∈ E^{m+n}, μ ≧ 0, such that

Ax* ≦ b,

μ·A − λ·C = 0,   (2-1-2)

μ·(Ax* − b) = 0.

Notice that the definition (2-1-1) differs from that in (2-2-3). Also,
the non-negativity constraints are incorporated in the general
inequality constraints.

Let the set of active constraints at x* be denoted by

R(x*) = {r | A_r·x* = b_r, r ∈ R},   (2-1-3)

where A_r is the r-th row of A, and R = {1,...,m+n}.

We may distinguish two cases:

(1) R(x*) = ∅. Then x* ∈ Int X, μ = 0, λ·C = 0.

(2) R(x*) ≠ ∅. Let μ_R(x*) and A_R(x*) be the vector and matrix
derived from μ and A by deleting all components (rows) not in R(x*).
Then from (2-1-2), we get

μ_R(x*)·A_R(x*) − λ·C = 0,

or

λ·C = μ_R(x*)·A_R(x*),   (2-1-4)

where μ_R(x*) ≧ 0.

Let us define

H> = {h | h = λ·C, λ > 0},

H≥ = {h | h = λ·C, λ ≥ 0},   (2-1-5)

G(x*) = {h | h = μ_R(x*)·A_R(x*), μ_R(x*) ≧ 0}.

From (2-1-1), (2-1-2), and (2-1-4) we have:

Theorem 2.1.5. Given the conditions stated in Theorem 2.1.1., we have

(a) x* ∈ N> if and only if 0 ∈ H> or H> ∩ G(x*) ≠ ∅;

(b) x* ∈ N≥ if and only if 0 ∈ H≥ or H≥ ∩ G(x*) ≠ ∅.

Remark 2.1.6. H> and H≥ are the positive and semipositive cones
generated by the gradients {c¹,...,c^ℓ}, and G(x*) is the non-negative
cone defined by the gradients {A_r | r ∈ R(x*)}, i.e., by A_R(x*).
Given N> and N≥ it is then easy to identify N from the definition of
nondominance or from Theorem 2.1.1.(b).

Remark 2.1.7. If 0 ∈ H>, then x ∈ N for all x ∈ X.
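Theorem 2.1.5. suggests a computational test: x* belongs to N> when, for some λ > 0, the vector λ·C is a non-negative combination of the active constraint rows at x*. A toy sketch for a 2-dimensional box (all data hypothetical; the membership test uses a crude grid search over the multipliers purely for illustration — properly it is a linear feasibility problem):

```python
# Illustration of Theorem 2.1.5(a) on a toy instance (all data hypothetical):
# X = {x | Ax <= b} is the box [0, 2]^2; objective gradients are rows of C.
A = [(1, 0), (0, 1), (-1, 0), (0, -1)]       # rows A_r of A
b = [2, 2, 0, 0]
C = [(1, 0), (0, 1)]                         # gradients c^1, c^2

def active_rows(x):
    """A_R(x): the rows of A binding at x."""
    return [a for a, br in zip(A, b) if a[0]*x[0] + a[1]*x[1] == br]

def in_cone(h, rays):
    """Crude grid search for h = mu1*rays[0] + mu2*rays[1] with mu >= 0.
    (Illustration only; properly an LP feasibility problem.)"""
    grid = [k * 0.25 for k in range(9)]
    return any((m1*rays[0][0] + m2*rays[1][0],
                m1*rays[0][1] + m2*rays[1][1]) == h
               for m1 in grid for m2 in grid)

lam = (1.0, 1.0)                             # a strictly positive lambda
h = (lam[0]*C[0][0] + lam[1]*C[1][0],        # h = lambda . C
     lam[0]*C[0][1] + lam[1]*C[1][1])
print(in_cone(h, active_rows((2, 2))))       # vertex (2, 2), nondominated: True
print(in_cone(h, active_rows((0, 0))))       # vertex (0, 0), dominated: False
```

At (2, 2) the active rows span the non-negative orthant, which contains λ·C; at (0, 0) they span the opposite orthant, so the intersection H> ∩ G(x*) is empty.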

2.2. Reduction of the Dimensionality of the Parametric Space

By fixing (ℓ−1) parameters λ_i in Λ (Definition 2.1.), the ℓ-th
parameter is automatically determined because of the normalization
condition. So we can decrease the dimension of the parametric space Λ
by one. For simplicity, let us consider (ℓ+1) objective functions to
begin with.

Let

θ^i(x) = c^i·x,  i = 1,...,(ℓ+1).

Then

λ·Cx = Σ_{i=1}^{ℓ+1} λ_i c^i·x = (1 − Σ_{i=1}^ℓ λ_i) c^{ℓ+1}·x + Σ_{i=1}^ℓ λ_i c^i·x

     = [(1 − Σ_{i=1}^ℓ λ_i) c^{ℓ+1} + Σ_{i=1}^ℓ λ_i c^i]·x = [c^{ℓ+1} + Σ_{i=1}^ℓ λ_i (c^i − c^{ℓ+1})]·x.

To simplify our notation, let (c^i − c^{ℓ+1}) = c̄^i for i = 1,2,...,ℓ.
Then

(c^{ℓ+1} + Σ_{i=1}^ℓ λ_i c̄^i)·x = Σ_{j=1}^n (c_j^{ℓ+1} + Σ_{i=1}^ℓ λ_i c̄_j^i) x_j = P_λ.   (2-2-1)

Set c(λ) = c^{ℓ+1} + Σ_{i=1}^ℓ λ_i c̄^i. Then

c(λ)·x = Σ_{j=1}^n c_j(λ) x_j.   (2-2-2)

Observe that P_λ is a bilinear function defined on X × Λ. By fixing
λ* ∈ Λ we obtain a linear function P_λ* to be maximized over X.

Let X be defined by

X = {x | x ∈ Eⁿ; Σ_{j=1}^n a_rj x_j = b_r, r = 1,...,m; x_j ≥ 0, j = 1,...,n}.   (2-2-3)

We shall assume 1 ≤ m < n, X ≠ ∅, and that the rank of [a_rj] is m.
Thus X is a convex polyhedron. Notice the equality constraints in X.
(All slack and surplus variables are included.)

Convention 2.2.1. To avoid a separate treatment of degeneracies, let us
agree that a different basic feasible solution means a different extreme
point of X. Let N_ex denote the set of all nondominated extreme points
of X.
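The reduction above is easy to verify numerically. A sketch with ℓ+1 = 3 hypothetical objective gradients checks that c(λ) built from c^{ℓ+1} and the differences c̄^i agrees with the fully weighted combination (weights λ₁, λ₂ and 1 − λ₁ − λ₂):

```python
# Numerical check of (2-2-1)/(2-2-2) for l+1 = 3 hypothetical gradients.
c = [(2, 0), (0, 3), (1, 1)]                 # c^1, c^2, c^3 (c^3 = c^(l+1))
cbar = [tuple(a - b for a, b in zip(ci, c[-1])) for ci in c[:-1]]

def c_of_lambda(lam):
    """c(lambda) = c^(l+1) + sum_i lambda_i * cbar^i."""
    return tuple(c[-1][j] + sum(lam[i] * cbar[i][j] for i in range(len(cbar)))
                 for j in range(len(c[-1])))

lam = (0.5, 0.25)                            # remaining weight 0.25 on c^3
full = tuple(lam[0]*c[0][j] + lam[1]*c[1][j] + (1 - lam[0] - lam[1])*c[2][j]
             for j in range(2))
print(c_of_lambda(lam), full)                # both equal (1.25, 1.0)
```

The two ℓ-dimensional vectors coincide, confirming that only λ₁,...,λ_ℓ need to be carried as parameters.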

2.3. Decomposition of the Parametric Space as a Method to Find Non-


dominated Extreme Points of X.

Let

0; I A. < l}.
i=l 1

For a fixed A* E A, let XO(A*) be a maximal solution of Max PA*,


XEX

where PA* is given by (2-2-1) and by (2-2-2) with A = A*. For simplicity

we shall use only XO to represent XO(A*). Similarly, we shall use c* to


-16-

represent C(A*).

We shall also denote as J the index set of the basic variables,

while J will indicate the index of currently nonbasic variables. Note

J u J fl, ... ,nL


For simplicity, let J {l, ... ,m}. Then a general simplex tableau

for XO is:

c *I ... c*
m
*
cm+l . .. c.*
J
... c*
n
r Basis c* bO xl ... x
m
xm+l .. . x.
J
" . x
n
c* 0
... .. . .
I xl I YI I
° Ylm +l Ylj " YIn

c*
0
... .. . ...
m x
m m Ym
° I Ymm+l Ymj Ymn

* 2* .. . 2.* ... 2*
:4
0
° ...
'
° m+l J n

Table 2.3.1.

If x⁰ is nondegenerate, then x⁰ = (x⁰_1,...,x⁰_m,0,...,0), where

x_j = y⁰_j > 0 for j ∈ J and x_j = 0 for j ∈ J̄.

Looking at the tableau we see that

z̄*_j = z*_j − c*_j   (2-3-1)

where

z*_j = Σ_{r=1}^{m} c*_r y_rj,  j = 1,...,n

and

z*_0 = Σ_{r=1}^{m} c*_r y⁰_r.

Observe that the optimality condition for x⁰ to be a maximal solution

for P_λ* is

z̄*_j = z*_j − c*_j ≥ 0 for j = 1,...,n.   (2-3-2)

Notice if x⁰ is unique or if λ*_i > 0, i = 1,...,ℓ, then x⁰ ∈ N.

Remark 2.3.1. Notice that z̄*_j may be expressed by

z̄_j(λ*) = Σ_{r=1}^{m} c_r(λ*) y_rj − c_j(λ*),

using the notation of section 2.2.

Substituting c(λ*) = (c^(ℓ+1) + Σ_{i=1}^{ℓ} λ*_i c̄^i) we get:

z̄_j(λ*) = z̄*_j = Σ_{r=1}^{m} (c_r^(ℓ+1) + Σ_{i=1}^{ℓ} λ*_i c̄_r^i) y_rj − (c_j^(ℓ+1) + Σ_{i=1}^{ℓ} λ*_i c̄_j^i).

(2-3-3)

Observe: if for each λ ∈ Λ we could find x⁰(λ) or show that P_λ has an

unbounded solution, then, according to Theorem 2.1.1., essentially all the

N-points have been located.

Substituting in (2-3-3)

y_j = Σ_{r=1}^{m} c_r^(ℓ+1) y_rj − c_j^(ℓ+1)

and

δ_j^i = Σ_{r=1}^{m} c̄_r^i y_rj − c̄_j^i

we get

z̄_j(λ*) = z̄*_j = y_j + Σ_{i=1}^{ℓ} λ*_i δ_j^i ≥ 0,  j ∈ J̄   (2-3-4)

for the optimality conditions expressed as a linear function of λ*_i.

Observe that for j ∈ J, the index set associated with basic variables,

z̄_j(λ*) = 0. Notice also that z̄_j(λ*) is a linear function of λ whenever

x is fixed at x⁰. So the set z̄_j(λ*) ≥ 0, j ∈ J̄, generates a closed

convex polyhedron Λ(x⁰) in E^ℓ, such that for each λ ∈ Λ(x⁰), x⁰ is the

maximal solution to P_λ. This observation yields:

Theorem 2.3.2. Let λ* ∈ Λ and let x⁰ solve Max_{x∈X} P_λ*. Let

Λ(x⁰) = {λ | λ ∈ E^ℓ; y_j + Σ_{i=1}^{ℓ} λ_i δ_j^i ≥ 0; j ∈ J̄}.   (2-3-5)

Then λ* ∈ Λ(x⁰), and the extreme point x⁰ solves Max_{x∈X} P_λ for all λ ∈ Λ(x⁰).
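Testing whether a given λ* lies in Λ(x⁰) of (2-3-5) is a finite set of linear checks. A minimal sketch, with the coefficients y_j and δ_j^i taken from the z̄-rows of tableau (2-5-1-2) of the numerical example in section 2.5.1 (columns with all-zero coefficients omitted):

```python
from fractions import Fraction

def in_polyhedron(y, delta, lam):
    """lambda* in Lambda(x0): y_j + sum_i lam_i * delta[i][j] >= 0 for all nonbasic j."""
    return all(
        yj + sum(Fraction(li) * delta[i][j] for i, li in enumerate(lam)) >= 0
        for j, yj in enumerate(y)
    )

# z-bar rows of tableau (2-5-1-2), nonbasic columns x1,x2,x3,x5,x6,x7 and slack y1:
y  = [2, 4, 4, 4, 3, 5, 3]           # gamma_j (the c3 criterial row)
d1 = [0, -1, -3, -3, -2, -1, -1]     # delta^1_j
d2 = [-4, -6, -6, -6, -3, -6, -4]    # delta^2_j
print(in_polyhedron(y, [d1, d2], [0, 0]))  # the starting weight (0,0) lies in Lambda(x0)
print(in_polyhedron(y, [d1, d2], [1, 1]))  # violates 3*lam1 + 6*lam2 <= 4
```

Each `False` identifies at least one nonbasic column whose optimality condition (2-3-4) fails at that λ.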
XEX

Remark 2.3.3. Let us reemphasize that x⁰ represents a basis rather than

an extreme point. So two bases corresponding to the same extreme point

are considered to be different extreme points.

Notice the following properties of Λ(x⁰) which will be useful in

further developments:

1) λ ∈ Int Λ(x⁰) and δ_j^i ≠ 0 for all j ∈ J̄ and all i = 1,...,ℓ

imply that x⁰ is the unique solution of P_λ.

2) λ ∈ ∂Λ(x⁰) => there is at least one alternative optimal solution

(other than x⁰) to P_λ.

3) If y_rj ≤ 0 for all r and z̄_j(λ̄) = 0 for λ̄ ∈ ∂Λ(x⁰), then

a polyhedron adjacent to Λ(x⁰) may be constructed such that

it corresponds to an unbounded solution.

The following Theorem 2.3.4. can clarify (3) of Remark 2.3.3.

Theorem 2.3.4. Let x⁰ ∈ X be an extreme point of X such that Λ(x⁰) ∩ Λ ≠ ∅

and Λ ⊄ Λ(x⁰). Let λ̄ ∈ ∂Λ(x⁰) and k ∈ J̄ be such that z̄_k(λ̄) = 0 and z̄_k(λ) is

a nonzero function of λ. If all the elements of the pivot column k satisfy y_rk ≤ 0,

r ∈ R, where R = {1,2,...,m}, then there exists in E^ℓ an unbounded polyhedron

Λ_k such that

(i) Λ(x⁰) ∩ Int Λ_k = ∅,

(ii) Λ(x⁰) ∩ Λ_k ≠ ∅,

(iii) for λ ∈ Int Λ_k the solution to Max_{x∈X} P_λ is unbounded.

Proof. Since λ̄ ∈ Λ(x⁰), x⁰ is a maximal solution to P_λ̄, which is defined

by (2-2-2) for λ = λ̄. The corresponding optimality conditions (2-3-4)

can then be written as

y_j + Σ_{i=1}^{ℓ} λ̄_i δ_j^i ≥ 0,  j ∈ J̄.   (2-3-6)

Let us consider the kth column. Recall that all the elements of the

pivot column satisfy y_rk ≤ 0, r ∈ R, where R = {1,2,...,m}. Let

Λ_k = {λ | λ ∈ E^ℓ; y_k + Σ_{i=1}^{ℓ} λ_i δ_k^i ≤ 0}.

Then for each λ ∈ Int Λ_k there is no bounded solution to P_λ

on X. Note Λ_k is an unbounded halfspace. Since for all λ ∈ Λ(x⁰)

y_k + Σ_{i=1}^{ℓ} λ_i δ_k^i ≥ 0,

and for all λ ∈ Int Λ_k

y_k + Σ_{i=1}^{ℓ} λ_i δ_k^i < 0,

we have Λ(x⁰) ∩ Int Λ_k = ∅.

Note λ̄ ∈ Λ_k. Thus

λ̄ ∈ Λ(x⁰) ∩ Λ_k, i.e., Λ(x⁰) ∩ Λ_k ≠ ∅. Q.E.D.

Given x⁰ ∈ X, let Λ(x⁰) be constructed in the sense of Theorem 2.3.2.

Let J_0 = {j_1,j_2,...,j_m} be the index set of basic columns and let the index

set of nonbasic columns with respect to J_0 be denoted by

J̄_0 = {j_m+1, j_m+2,...,j_n}. Then

Λ(x⁰) = {λ | λ ∈ E^ℓ; y_j + Σ_{i=1}^{ℓ} λ_i δ_j^i ≥ 0; j ∈ J̄_0}.   (2-3-7)

Let the kth column represent the pivot column, and let y_pk be the pivot element

chosen. The following simplified simplex tableau describes the situation:

          Basic Cols. |        Nonbasic Columns
              J_0     |              J̄_0
          pth column  |   jth   ...   kth        b⁰
  --  ------0-------  |  --y_1j------y_1k----   y⁰_1
  --  ------1-------  |  --y_pj-----(y_pk)---   y⁰_p
  --  ------0-------  |  --y_mj------y_mk----   y⁰_m
  --  ------0-------  |  --z̄_j(λ)---z̄_k(λ)--   z_0

where z_0 is defined by (2-3-1) and z̄_j(λ) by (2-3-4).

Remark 2.3.5. Notice that Λ(x⁰) of (2-3-7) may be rewritten simply using

(2-3-4) as

Λ(x⁰) = {λ | λ ∈ E^ℓ; z̄_j(λ) ≥ 0; j ∈ J̄_0}.   (2-3-8)

After the simplex iteration we get x¹ and Λ(x¹). Notice that p ∈ J_0

is the leaving column vector. We shall denote the new basic and nonbasic

columns by:

J_1 = J_0 ∪ {k} − {p}   (2-3-9)

and J̄_1 = J̄_0 ∪ {p} − {k}.   (2-3-10)

By the simplex method (or by the Gaussian elimination technique) we get

z̄¹_p(λ) = −y_pk^(−1) z̄_k(λ),  p ∈ J̄_1   (2-3-11)

and z̄¹_j(λ) = z̄_j(λ) − (y_pj / y_pk) z̄_k(λ),  j ∈ J̄_1 − {p}.   (2-3-12)

Let x¹ be the solution corresponding to the basis J_1. Then

Λ(x¹) = {λ | λ ∈ E^ℓ; z̄¹_j(λ) ≥ 0; j ∈ J̄_1}.   (2-3-13)

Note y_pk > 0. Then (2-3-11) and (2-3-12) can be rewritten as they appear

in (2-3-13):

z̄_k(λ) ≤ 0 for j = p   (2-3-14)

and z̄_j(λ) − (y_pj / y_pk) z̄_k(λ) ≥ 0 for j ∈ J̄_1 − {p}.   (2-3-15)

Note k ∈ J̄_0. Then (2-3-8) and (2-3-14) imply that

H_k = {λ | λ ∈ E^ℓ; z̄_k(λ) = 0}   (2-3-16)

is the separating hyperplane for Λ(x⁰) and Λ(x¹) respectively, so that

Λ(x⁰) ⊂ {λ | λ ∈ E^ℓ; z̄_k(λ) ≥ 0}   (2-3-17)

and Λ(x¹) ⊂ {λ | λ ∈ E^ℓ; z̄_k(λ) ≤ 0}.   (2-3-18)

Clearly the following is true:

Int Λ(x⁰) ∩ Λ(x¹) = Λ(x⁰) ∩ Int Λ(x¹) = ∅.

If λ ∈ H_k then z̄_k(λ) = 0. For example, for

λ* ∈ ∂Λ(x⁰) and λ* ∈ H_k, z̄_k(λ*) = y_k + Σ_{i=1}^{ℓ} λ*_i δ_k^i = 0.
Then from (2-3-8), (2-3-14) and (2-3-15) we have

H_k ∩ Λ(x⁰) ⊂ Λ(x¹)   (2-3-19)

and H_k ∩ Λ(x¹) ⊂ Λ(x⁰),   (2-3-20)

because J̄_1 = J̄_0 ∪ {p} − {k} (see (2-3-10)).

Then from (2-3-19) and (2-3-20) we have

H_k ∩ Λ(x¹) = H_k ∩ Λ(x⁰) ∩ Λ(x¹).

Thus

H_k ∩ Λ(x⁰) = H_k ∩ Λ(x¹) ⊂ Λ(x⁰) ∩ Λ(x¹).   (2-3-21)

On the other hand, since H_k is the separating hyperplane for Λ(x⁰) and

Λ(x¹) we have

Λ(x⁰) ∩ Λ(x¹) ⊂ H_k ∩ Λ(x¹).   (2-3-22)

From (2-3-21) and (2-3-22) we get

H_k ∩ Λ(x⁰) = H_k ∩ Λ(x¹) = Λ(x⁰) ∩ Λ(x¹).
The above discussion may be summarized by the following theorem:

Theorem 2.3.6. Let x⁰ be a basic feasible solution associated with a

basis J_0 and let Λ(x⁰) be constructed according to Theorem 2.3.2. For

λ* ∈ ∂Λ(x⁰) there is at least one k ∈ J̄_0 for which z̄_k(λ*) = 0. Define

H_k = {λ | λ ∈ E^ℓ; z̄_k(λ) = 0}.

Introducing the kth column into the basis according to Theorem 2.3.4.,

two possibilities can be distinguished:

(1) all y_rk ≤ 0, r ∈ R, and Λ_k is constructed. Then Λ(x⁰) ∩ Λ_k =

H_k ∩ Λ(x⁰).

(2) let y_pk > 0 be the pivot element. Then a new basis

J_1 = J_0 ∪ {k} − {p} is constructed, associated with basic

feasible solution x¹. Also Λ(x¹) is constructed. Then

(a) H_k separates Λ(x⁰) and Λ(x¹) so that

Λ(x⁰) ⊂ {λ | λ ∈ E^ℓ; z̄_k(λ) ≥ 0} and

Λ(x¹) ⊂ {λ | λ ∈ E^ℓ; z̄_k(λ) ≤ 0}.

(b) Λ(x⁰) ∩ Int Λ(x¹) = Int Λ(x⁰) ∩ Λ(x¹) = ∅.

(c) Λ(x⁰) ∩ Λ(x¹) = H_k ∩ Λ(x⁰) = H_k ∩ Λ(x¹).

Proof.

For (1): According to Theorem 2.3.4.,

Λ(x⁰) ∩ Int Λ_k = ∅.

Then

Λ(x⁰) ∩ Λ_k = H_k ∩ Λ(x⁰).

For (2): The parts (a), (b), (c) have been shown by (2-3-17),

(2-3-18), and (2-3-21) and (2-3-22) respectively. Q.E.D.
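The update rules (2-3-11) and (2-3-12) act on the coefficient vectors of z̄_j(λ) exactly like an ordinary simplex pivot acts on reduced costs. A minimal sketch, assuming each nonbasic column carries the affine coefficients [y_j, δ_j^1, ..., δ_j^ℓ] of z̄_j(λ); the concrete numbers below are purely illustrative:

```python
from fractions import Fraction

def pivot_z(zbar, k, p, y_row_p, y_pk):
    """Update affine reduced costs z_j(lambda) after column k enters and the
    row of basic column p leaves, with pivot element y_pk > 0.

    zbar:    maps nonbasic column index -> [gamma_j, delta^1_j, ...].
    y_row_p: maps nonbasic column index -> pivot-row entry y_pj.
    Returns the new map, with p nonbasic and k removed.
    """
    zk = zbar[k]
    new = {}
    for j, coeffs in zbar.items():
        if j == k:
            continue
        ratio = Fraction(y_row_p[j], y_pk)                        # y_pj / y_pk
        new[j] = [cj - ratio * ck for cj, ck in zip(coeffs, zk)]  # (2-3-12)
    new[p] = [Fraction(-ck, y_pk) for ck in zk]                   # (2-3-11)
    return new

# Illustration: z_4(lam) = 2 - lam1 and z_6(lam) = lam1; column 4 enters,
# basic column 1 leaves, pivot y_14 = 2, pivot-row entry y_16 = 1.
print(pivot_z({4: [2, -1], 6: [0, 1]}, k=4, p=1, y_row_p={4: 2, 6: 1}, y_pk=2))
```

Since the coefficients stay exact rationals, the resulting inequality description (2-3-13) of Λ(x¹) is obtained with no rounding.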

Remark 2.3.7. The following Figure 2.3.1. shows the relations of Theorem

2.3.6. in the two-dimensional case.


-26-

Figure 2.3.1. (Case (1) and Case (2))

In Figure 2.3.2. some three-dimensional representations of Theorem

2.3.6. are analyzed.


In Figure 2.3.2. notice that for λ̄ ∈ ∂Λ(x⁰), by choosing k or k̄,

both Λ_k̄ and Λ(x¹) are constructed.

We see that Λ(x⁰) ∩ Λ_k̄ = Λ(x⁰) ∩ H_k̄ and

Λ(x⁰) ∩ Λ(x¹) = Λ(x⁰) ∩ H_k = Λ(x¹) ∩ H_k. Notice also that at λ̃ ∈ Λ(x⁰)

the Λ(x³) cannot be constructed in the adjacent sense because only two

hyperplanes pass through λ̃. However, Λ(x³) can be constructed from Λ(x⁰)

if there would be another effectively bounding hyperplane through λ̃, as

is demonstrated in the numerical example. (Tableau 2-5-1-10)

Definition 2.3.8. Referring to the theory of the simplex method, for j ∈ J̄,

if y_rj > 0 for some r, we define

θ_j = Min_r {y⁰_r / y_rj ; y_rj > 0}.   (2-3-23)

According to the simplex method theory, given λ* ∈ Λ, the problem

of maximization of P_λ* (see (2-2-2)) has two possible outcomes:

(i) there is a maximal solution. In this case there exists a

basis J such that z̄_j(λ*) ≥ 0 for all j ∈ J̄.

(ii) P_λ* has an unbounded solution. Then there exists a basis J

such that there is at least one column k ∈ J̄ such that

y_rk ≤ 0 for all r and z̄_k(λ*) < 0.

In the case (i) we could construct Λ(J) such that λ* ∈ Λ(J) and J

is the optimal basis for all P_λ, λ ∈ Λ(J).

In the case (ii) there is an open halfspace Λ_k defined by z̄_k(λ) < 0

such that λ* ∈ Λ_k, and if λ ∈ Λ_k then P_λ is unbounded.

Thus, it is seen that each λ ∈ Λ is covered by either a polyhedron

Λ(J) or by an open halfspace Λ_k. Since the number of bases is finite,

they produce only a finite number of polyhedra Λ(J) and open halfspaces

Λ_k. This shows that Λ can be covered by a finite number of polyhedra

and open halfspaces. The above discussion may be summarized by:

Theorem 2.3.9. (1) Maximization of PA for all A E A produces a finite

covering of A.

(2) There are no "holes" in the covering of A.

(3) The polyhedra corresponding to a bounded solution


form a convex set.

(4) There is no "barrier" problem which would prevent

us from reaching adjacent polyhedra.

Proof. Parts (1), (2) and (4) follow directly from the previous discussion.

For the part (3):

The set Λ̃ = {λ | λ ∈ E^ℓ; a solution to Max_{x∈X} P_λ is bounded}

may be expressed by

Λ̃ = complement {∪_k Λ_k} = ∩_k {complement Λ_k},

which is a convex polyhedron. Also, Λ ∩ Λ̃ is a convex polyhedron. Q.E.D.
Before discussing the algorithmic implications in section 2.4. we

shall state another theorem which will help us to decide the nondominance

of a given basic solution.

Definition 2.3.10. Let

Int Λ = {λ | λ ∈ E^ℓ; λ_i > 0; Σ_{i=1}^{ℓ} λ_i < 1}.   (2-3-24)

In view of the notation used in Theorem 2.3.6., let us state the following:

Theorem 2.3.11. Let x⁰ ∈ N and H_k ∩ Λ(x⁰) ∩ Int Λ ≠ ∅. Then x¹, constructed by intro-

ducing the kth column into the basis, is a nondominated solution.

Proof. We assume from the statement of the Theorem that x⁰ ∈ N. Since

y_k + Σ_{i=1}^{ℓ} λ_i δ_k^i ≥ 0 is a binding constraint for Λ(x⁰), then

Λ(x¹) ∩ Int Λ ⊇ H_k ∩ Λ(x⁰) ∩ Int Λ ≠ ∅ according to Theorem 2.3.6. Q.E.D.

Remark 2.3.12. If H_k ∩ Λ(x⁰) ∩ Int Λ = ∅ then x¹ may not be a nondominated

solution. An example of Remark 2.3.12. is given in section 2.5.1.

2.4. Algorithmic Possibilities.

We have started with the fact that any nondominated extreme point

x^j ∈ N can be obtained by solving Max_{x∈X} P_λ for some λ ∈ Λ.

To each x^j ∈ N the corresponding polyhedron Λ(x^j) therefore must

have at least one point in common with Λ. (See also Theorem 2.3.11.)

Remark 2.4.1. We would like to be sure that a set of all nondominated


extreme points is a "connected set." If the set of such points is not
connected, then there are at least two different nondominated extreme
points which cannot be reached one from the other through a series of
adjacent nondominated extreme points.

Theorem 2.4.2. A set of nondominated extreme points is a "connected


set."

Proof. The proof follows directly from Theorem 2.3.9. and Theorem 2.3.11.

Let x^v and x^w ∈ Nex. Then each of the corresponding Λ(x^v) and Λ(x^w)

has at least one point in common with Λ.

Choose λ^v ∈ Λ(x^v) ∩ Λ and λ^w ∈ Λ(x^w) ∩ Λ. Because of the convexity

of Λ we can write:

[λ^v, λ^w] ⊂ Λ.

We know also that the line segment [λ^v, λ^w] is contained in

the union of all polyhedra which are associated with bounded solu-

tions. Because of a finite covering of Λ, we can select a finite

sequence of distinct polyhedra {Λ(x^v), Λ(x^v+1), ..., Λ(x^v+k)}, such

that Λ(x^v+k) = Λ(x^w). Furthermore, Λ(x^v+i) and Λ(x^v+i+1) are connect-

ed in the sense of Theorem 2.3.6. Q.E.D.

Figure 2.4.1. provides a graphical interpretation of Theorem 2.4.2.

Figure 2.4.1.

Consider some x⁰ ∈ Nex and the corresponding Λ(x⁰). Let the corresponding

basis of x⁰ be J.

The goal is to construct all polyhedra adjacent to Λ(x⁰). By con-

sidering all k ∈ J̄ (for which z̄_k(λ) = 0 for some λ ∈ Λ(x⁰)) the corre-

sponding columns may be introduced successively and all required polyhedra

constructed.

We would like to limit our choice of k ∈ J̄, however, only to such

nonbasic columns which, being introduced, would not result in

(a) a polyhedron already computed,

(b) a polyhedron which has an empty intersection with Λ (since it

corresponds to a dominated solution).
Graphically, we mean the following (see Figure 2.4 .2.) :

Figure 2.4.2.

Shaded polyhedra are Λ(x⁰). Crossed are the faces (and their corresponding

nonbasic columns) which are not to be introduced in the basis. Notice

that in the second example we could actually move to a nondominated basis,

but this is also reachable without leaving Λ.

To determine whether a particular nonbasic column should go in the

basis we solve the following (see Theorem 2.3.11.):

y_k + Σ_{i=1}^{ℓ} λ_i δ_k^i = 0,  k ∈ J̄

y_j + Σ_{i=1}^{ℓ} λ_i δ_j^i ≥ 0,  j ∈ J̄ − {k}   (2-4-1)

Σ_{i=1}^{ℓ} λ_i < 1,  λ_i > 0, i = 1,...,ℓ.

If the system (2-4-1) has a feasible solution, then the introduction of

the kth column will result in a polyhedron having a nonempty intersection

with Λ.
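For ℓ = 2 the system (2-4-1) can be checked without a general LP: on the line z̄_k(λ) = 0 every other constraint cuts out an interval for λ₁. A sketch under that two-parameter assumption; it tests the closed system (it does not enforce the strict interiority Σλ_i < 1, λ_i > 0), and a general ℓ would need a phase-one LP instead:

```python
from fractions import Fraction

def face_meets_lambda(k_coef, others):
    """Closed-system sketch of (2-4-1) for l = 2.

    k_coef -- (gamma_k, d1_k, d2_k) of the entering column's face z_k = 0,
    others -- (gamma_j, d1_j, d2_j) for the remaining nonbasic columns.
    lambda_1, lambda_2 >= 0 and lambda_1 + lambda_2 <= 1 are added internally.
    """
    g, d1, d2 = (Fraction(v) for v in k_coef)
    swap = False
    if d2 == 0:
        if d1 == 0:
            return False                 # z_k constant: no proper face
        d1, d2, swap = d2, d1, True      # parametrize by lambda_2 instead
    lo, hi = Fraction(0), Fraction(1)    # parameter range inside Lambda
    cons = list(others) + [(0, 0, 1), (1, -1, -1)]  # lam2 >= 0, 1 - lam1 - lam2 >= 0
    for a, b, c in cons:
        a, b, c = Fraction(a), Fraction(b), Fraction(c)
        if swap:
            b, c = c, b
        B = b - c * d1 / d2              # substitute lam2 = -(g + d1*lam1)/d2
        A = a - c * g / d2               # into a + b*lam1 + c*lam2 >= 0
        if B > 0:
            lo = max(lo, -A / B)
        elif B < 0:
            hi = min(hi, -A / B)
        elif A < 0:
            return False
    return lo <= hi

# Entering column 3 of tableau (2-5-1-6): face 2 - 3*lam1 - 2*lam2 = 0,
# the remaining z-bar rows of that tableau as (gamma, delta^1, delta^2):
others = [(0, -1, 2), (-2, 0, 4), (0, -3, 2), (1, -2, 1), (1, -1, 2), (1, -1, 0)]
print(face_meets_lambda((2, -3, -2), others))  # True: feasible at (1/3, 1/2)
```

Applied to constraint 6. of Λ(x²) in section 2.5.1 it returns `False`, matching the observation there that H₆ ∩ Λ(x²) ∩ Λ is empty.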

Using the described multiparametric approach, we have basically

two strategies available to generate a set of nondominated extreme points

(more exactly, nondominated basic solutions). Both strategies assume an

availability of some efficient method to generate adjacent extreme

points.

Simple block diagrams for both strategies follow in Figures 2.4.4.

and 2.4.5.

Remark 2.4.3. Notice that using Strategy II we construct all adjacent

polyhedra to Λ(x^i) for which the intersection with Int Λ is nonempty. For

that we do not need to find all the extreme points of Λ(x^i). We consider

only all faces of the highest dimension which, in turn, correspond to

nonbasic columns of the associated simplex tableau. From these we select

only those which have the required properties (the face intersects with Int Λ,

it does not lead to a basis already considered, etc.). The calculation

of all extreme points might be redundant or insufficient in different

situations. For example, let Λ(x^i) be represented by two polyhedra in

Figure 2.4.3.:

12 extreme points, 8 faces          13 extreme points, 13 faces

Figure 2.4.3.

From the previous discussion we may conclude that the available

algorithmic possibilities would not lead to an efficient method, if

finding all nondominated extreme points is our primary goal.



STRATEGY I

(Start)

1. For λ^i ∈ Λ construct P_λ^i and solve Max_{x∈X} P_λ^i.

2. Find x^i and construct Λ(x^i).

3. Is Λ(x^i) ∩ Int Λ = ∅? If NO: x^i ∈ N; if YES: x^i ∈ V. Set i = i+1.

4. Generate an adjacent basic solution x^i+1.

5. Is ∪_i [Λ(x^i) ∩ Λ] = Λ? If NO, return to step 2; if YES, stop.

Figure 2.4.4.

STRATEGY I
(Comments)

1. It is convenient to start with λ^i = 0. Then λ_ℓ+1 = 1 and we

solve Max_{x∈X} c^(ℓ+1)·x. If the solution is unique, we start with x^i ∈ N.

2. Construct Λ(x^i) in the sense of Theorem 2.3.2. The con-

struction of Λ(x^i) might be simplified by following the

approach in Remark 2.5.1.

3. To show that Λ(x¹) ∩ Int Λ = ∅ => x¹ ∈ V, let us state the

following:

(a) Λ(x¹) ∩ Λ = ∅ clearly implies x¹ ∈ V.

(b) Λ(x¹) ∩ Λ ≠ ∅ and Λ(x¹) ∩ Int Λ = ∅ imply that there

is some λ̄ ∈ ∂Λ at which an alternate solution to x¹,

say x⁰, may be constructed such that (see Theorem

2.4.2.) λ̄ ∈ Λ(x⁰) and Λ(x⁰) ∩ Int Λ ≠ ∅,

i.e., x⁰ ∈ N. Thus, for all λ ∈ Λ(x⁰) we have

λ·Cx⁰ ≥ λ·Cx¹ and λ̄·Cx⁰ = λ̄·Cx¹.

Since Λ(x¹) ∩ Int Λ = ∅ there is no λ ∈ Int Λ such that

λ ∈ Λ(x¹); notice Int Λ(x¹) ∩ Λ = ∅.

We may distinguish four possibilities:

(i) Cx¹ ≥ Cx⁰, Cx¹ ≠ Cx⁰

(ii) Cx¹ ≤ Cx⁰, Cx¹ ≠ Cx⁰

(iii) Cx¹ = Cx⁰

(iv) Cx¹ and Cx⁰ are noncomparable.

The possibility (i) must be excluded by the definition of x⁰ ∈ N.

Possibility (iv): noncomparability implies there is at least one

i = 1,...,ℓ with c^i·x¹ > c^i·x⁰. Choose λ ∈ Λ such that

λ_i > 0 if c^i·x¹ > c^i·x⁰ and λ_i = 0 otherwise. Then

λ·Cx¹ > λ·Cx⁰, a contradiction. So, only Cx¹ ≤ Cx⁰ or

Cx¹ = Cx⁰ are possible. If Cx⁰ = Cx¹, then Λ(x⁰) ⊆ Λ(x¹)

(degeneracy, see Remark 2.5.1.). If Cx⁰ ≥ Cx¹ with Cx⁰ ≠ Cx¹, then x¹ ∈ V.
4. We generate an adjacent basic solution to x^i, say x^i+1. We

must choose the method which would lead us only to an as yet

unexplored basis. Any of the two approaches discussed in

Section 3.3. would be applicable here.

5. We must check whether the Λ space has been completely decomposed by

the Λ(x^i) generated up to this point. Direct application of this

criterion would lead us to nontrivial computational difficul-

ties. However, because of "connectedness" and finiteness of

the decomposition (see Theorems 2.3.9. and 2.4.2.), whenever

we cannot find an unexplored adjacent basis to any x^i gener-

ated up to this point, such that Λ(x^i) ∩ Int Λ ≠ ∅, then the de-

composition is completed. In view of the point (3) above,

the criterion may be

∪_i [Λ(x^i) ∩ Int Λ] = Int Λ

for the linear case.

STRATEGY II

(Start)

1. Find x^i ∈ N by solving Max_{x∈X} P_λ^i for λ^i = (0,...,0).

2. Construct Λ(x^i).

3. Find all k ∈ J̄ which generate a strictly binding face of Λ(x^i)

and for which H_k ∩ Λ(x^i) ∩ Int Λ ≠ ∅.

4. Store all such k's which lead to unexplored bases. Set i = i+1.

5. Is ∪_i [Λ(x^i) ∩ Λ] = Λ? If YES, stop.

6. If NO, find x^i+1 ∈ N by introducing k ∈ J̄ into the basis;

return to step 2.

Figure 2.4.5.

STRATEGY II

(Comments)

The steps (1) and (2) are the same as for STRATEGY I.

3. In this step we use conclusions of Theorem 2.3.11. Now we have

to explore all k ∈ J̄ by solving the problem (2-4-1). Notice

that simultaneously we determine whether the constraint corre-

sponding to k ∈ J̄ is effectively binding Λ(x^i). Thus we make

only the transformations leading to nondominated solutions.

Since the set Nex is connected, we can always make the tra-

versal.

4. We always choose k ∈ J̄ which would lead to an adjacent basis.

If the only unexplored bases are not adjacent to x^i, we choose

the "next closest", etc., in the sense of the distance between

bases defined in section 3.3.

5. The same comments as for (5) of STRATEGY I apply here; i.e., if

the storage in (4) is empty, we stop.

6. Obvious.

2.5. Discussion of difficulties connected with the decomposition method.

In this section we shall discuss and give examples of some diffi-

culties which might result in an inefficiency of the decomposition tech-

nique.

1. Degeneracy. The one-to-one correspondence between an extreme point

x^j and Λ(x^j) can be destroyed. For example, in Figure 2.5.1., to x⁰ both

Λ(x⁰') and Λ(x⁰'') may be computed. This might result in that although

all N-points have already been discovered, the Λ may not be fully decom-

posed.

2. Redundant constraints. Redundant constraints of Λ(x^j) may cause

difficulty. Introduction of the corresponding columns may lead to a V-point

as well as to an N-point. However, an effectively binding constraint of

Λ(x^j), say the kth, for which H_k ∩ Λ(x^j) ∩ Int Λ ≠ ∅, always leads to an N-point

(see Theorem 2.3.11.). To use this fact we should have an efficient

subroutine to identify nonredundant constraints (see, for example,

(2-4-1)). Consult also Appendix A1 on page 187.

3. Alternative solutions. The one-to-one correspondence between a basis

x^j and Λ(x^j) may be destroyed. That is, for all {x^j} such that {x^j}

are optimal solutions to P_λ for all λ ∈ Λ(x^j), the Λ(x^j) are identical,

independent of x^j. This would imply that although Λ has been fully

decomposed, we still may not have all N-points.


Remark 2.5.1. The above difficulties are consistent with Theorem 2.1.5.

If we assume 0 ∉ H^≥, then

x^j ∈ N <=> H^≥ ∩ G(x^j) ≠ ∅.

Then in degenerate cases there are two or more bases associated

with the same extreme point x^j. Notice that G(x^j) is uniquely defined,

independent of different bases. This means that if x^j(1),...,x^j(r)

are the bases associated with x^j, then there is a possibility that

Λ(x^j(k)) ⊂ Λ(x^j), where Λ(x^j) is a representation of H^≥ ∩ G(x^j) in Λ.

See Figure 2.5.1. for graphical interpretation.

Notice Λ(x^j) = ∪_k Λ(x^j(k)), k = 1,...,r.

Figure 2.5.1.

Also, the problem of alternative solutions may be explained in

terms of Theorem 2.1.5. We shall use Figure 2.5.2. to explain.

Figure 2.5.2.

Notice that though X is three-dimensional, H^λ may be two-dimensional.

Then, although G(x¹) ≠ G(x²), we may see that G(x¹) ∩ H^λ = G(x²) ∩ H^λ,

which implies that Λ(x¹) = Λ(x²) ≠ ∅. Also, λ ∈ Λ(x¹) ∩ Λ(x²) implies

that x¹, x² are two alternative solutions of P_λ.

2.5.1. Some numerical examples of the difficulties.

We will introduce a more complex numerical example for the demon-

stration. Let us consider the following problem:

v-Max Cx, where Cx = (c¹·x, c²·x, c³·x), where

c¹·x = x2 + x3 + 2x4 + 3x5 + x6
c²·x = x1 + x3 − x4 − x6 − x7
c³·x = x1 + 2x2 − x3 + 3x4 + 2x5 + x7

subject to

x1 + 2x2 + x3 + x4 + 2x5 + x6 + 2x7 ≤ 16
−2x1 − x2 + x4 + 2x5 + x7 ≤ 16
−x1 + x3 + 2x5 − 2x7 ≤ 16
x2 + 2x3 − x4 + x5 − 2x6 − x7 ≤ 16
x_i ≥ 0, i = 1,...,7.

Let

P_λ = x1 + 2x2 − x3 + 3x4 + 2x5 + x7

+ λ1(−x1 − x2 + 2x3 − x4 + x5 + x6 − x7)

+ λ2(−2x2 + 2x3 − 4x4 − 2x5 − x6 − 2x7).

The objective function coefficients are given by the following

table:

 j       1   2   3   4   5   6   7

 c³_j    1   2  -1   3   2   0   1
 c̄¹_j   -1  -1   2  -1   1   1  -1
 c̄²_j    0  -2   2  -4  -2  -1  -2

Let us start with λ⁰ = 0, i.e.,

P_λ⁰ = x1 + 2x2 − x3 + 3x4 + 2x5 + x7

is to be maximized. The initial tableau is:

       x1  x2  x3  x4  x5  x6  x7  y1  y2  y3  y4
 y1     1   2   1  (1)  2   1   2   1   0   0   0   16
 y2    -2  -1   0   1   2   0   1   0   1   0   0   16
 y3    -1   0   1   0   2   0  -2   0   0   1   0   16
 y4     0   1   2  -1   1  -2  -1   0   0   0   1   16   (2-5-1-1)
       -1  -2   1  -3  -2   0  -1   0   0   0   0    0
        1   1  -2   1  -1  -1   1   0   0   0   0    0
        0   2  -2   4   2   1   2   0   0   0   0    0

Introducing the fourth column we obtain the following solution x⁰:

 x4     1   2   1   1   2   1   2   1   0   0   0   16
 y2    -3  -3  -1   0   0  -1  -1  -1   1   0   0    0
 y3    -1   0   1   0   2   0  -2   0   0   1   0   16
 y4     1   3   3   0   3  -1   1   1   0   0   1   32   (2-5-1-2)
        2   4   4   0   4   3   5   3   0   0   0   48
        0  -1  -3   0  -3  -2  -1  -1   0   0   0  -16
       -4  -6  -6   0  -6  -3  -6  -4   0   0   0  -64

x⁰ = (0,0,0,16,0,0,0)
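The first maximization P_λ⁰ = c³·x can be cross-checked mechanically. Below is a tiny self-contained simplex in exact arithmetic (a sketch for verifying the text's numbers, not a production solver; it assumes b ≥ 0 so the slack basis is feasible, and uses Bland's rule to guarantee termination):

```python
from fractions import Fraction

def simplex_max(c, A, b):
    """Max c.x s.t. A x <= b, x >= 0. Returns (optimal value, solution)."""
    m, n = len(A), len(c)
    T = [[Fraction(A[i][j]) for j in range(n)] +
         [Fraction(int(i == k)) for k in range(m)] + [Fraction(b[i])]
         for i in range(m)]
    z = [Fraction(-cj) for cj in c] + [Fraction(0)] * (m + 1)  # reduced costs, value
    basis = list(range(n, n + m))
    while True:
        try:  # Bland: smallest index with negative reduced cost
            col = next(j for j in range(n + m) if z[j] < 0)
        except StopIteration:
            break
        rows = [i for i in range(m) if T[i][col] > 0]
        assert rows, "unbounded"
        row = min(rows, key=lambda i: (T[i][-1] / T[i][col], basis[i]))
        piv = T[row][col]
        T[row] = [v / piv for v in T[row]]
        for i in range(m):
            if i != row and T[i][col]:
                f = T[i][col]
                T[i] = [v - f * w for v, w in zip(T[i], T[row])]
        f = z[col]
        z = [v - f * w for v, w in zip(z, T[row])]
        basis[row] = col
    x = [Fraction(0)] * n
    for i, bi in enumerate(basis):
        if bi < n:
            x[bi] = T[i][-1]
    return z[-1], x

# P_lambda0 of the example: maximize c3.x over the four <= 16 constraints.
c3 = [1, 2, -1, 3, 2, 0, 1]
A = [[1, 2, 1, 1, 2, 1, 2],
     [-2, -1, 0, 1, 2, 0, 1],
     [-1, 0, 1, 0, 2, 0, -2],
     [0, 1, 2, -1, 1, -2, -1]]
b = [16, 16, 16, 16]
val, x = simplex_max(c3, A, b)
print(val, x)  # 48 at x = (0,0,0,16,0,0,0), matching tableau (2-5-1-2)
```

The run reproduces the optimum 48 with x4 = 16, i.e., x⁰ above.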

Next we calculate the corresponding Λ(x⁰):

           4λ2 ≤ 2          1.
           λ1 + 6λ2 ≤ 4     2.
           3λ1 + 6λ2 ≤ 4    3.  +
 Λ(x⁰):    3λ1 + 6λ2 ≤ 4    5.
           2λ1 + 3λ2 ≤ 3    6.
           λ1 + 6λ2 ≤ 5     7.
           λ1 + 4λ2 ≤ 3     8.

Notice that x⁰ is a degenerate solution. Let us explore the situation when

the cell (2,4) of Tableau (2-5-1-1) is the pivot element. We get:

 y1    (3)  3   1   0   0   1   1   1  -1   0   0    0
 x4    -2  -1   0   1   2   0   1   0   1   0   0   16
 y3    -1   0   1   0   2   0  -2   0   0   1   0   16
 y4    -2   0   2   0   3  -2   0   0   1   0   1   32   (2-5-1-3)
       -7  -5   1   0   4   0   2   0   3   0   0   48
        3   2  -2   0  -3  -1   0   0  -1   0   0  -16
        8   6  -2   0  -6   1  -2   0  -4   0   0  -64

Remark 2.5.1.1. Notice that Tableau (2-5-1-3) represents a degenerate

solution. Because λ1 and λ2 are considered zero, the first criterial

row indicates that we have to introduce the first column before the

corresponding polyhedron can be calculated. We get the following:

 x1     1   1    1/3   0   0   1/3    1/3   1/3  -1/3   0   0    0
 x4     0   1    2/3   1   2   2/3    5/3   2/3   1/3   0   0   16
 y3     0   1    4/3   0   2   1/3   -5/3   1/3  -1/3   1   0   16
 y4     0   2    8/3   0   3  -4/3    2/3   2/3   1/3   0   1   32   (2-5-1-4)
        0   2   10/3   0   4   7/3   13/3   7/3   2/3   0   0   48
        0  -1     -3   0  -3    -2     -1    -1     0   0   0  -16
        0  -2  -14/3   0  -6  -5/3  -14/3  -8/3  -4/3   0   0  -64

The corresponding set of constraints is:

           λ1 + 2λ2 ≤ 2
           3λ1 + (14/3)λ2 ≤ 10/3
           3λ1 + 6λ2 ≤ 4            +
 Λ(x⁰):    2λ1 + (5/3)λ2 ≤ 7/3
           λ1 + (14/3)λ2 ≤ 13/3
           λ1 + (8/3)λ2 ≤ 7/3
           (4/3)λ2 ≤ 2/3            +

Notice that in both sets of constraints Λ(x⁰) is determined by

 3λ1 + 6λ2 ≤ 4
 λ2 ≤ 1/2

which are the only nonredundant constraints. Note, Λ̄(x⁰) = Λ(x⁰) ∩ Λ is

given by

           3λ1 + 6λ2 ≤ 4    3.
           3λ1 + 6λ2 ≤ 4    5.
 Λ̄(x⁰):    λ2 ≤ 1/2         1.
           λ1 + λ2 ≤ 1
           λ1, λ2 ≥ 0

This polyhedron Λ̄(x⁰) is graphically represented in Figure 2.5.1.1.

Looking at the original Tableau (2-5-1-2) we see that the first and the

third or fifth columns may be introduced. Before we do this, let us

make the following remark.

Remark 2.5.1.2. The problem of redundant constraints is a very important

one. Returning to Tableau (2-5-1-2), let the second column be introduced

into the basis. Notice the 2nd column corresponds to the redundant

constraint λ1 + 6λ2 ≤ 4 of Λ(x⁰).

We get tableau (2-5-1-5):

 x2    1/2    1   1/2   1/2   1   1/2    1   1/2   0   0   0    8
 y2   -3/2    0   1/2   3/2   3   1/2    2   1/2   1   0   0   24
 y3     -1    0     1     0   2     0   -2     0   0   1   0   16
 y4   -1/2    0   3/2  -3/2   0  -5/2   -2  -1/2   0   0   1    8   (2-5-1-5)
         0    0     2    -2   0     1    1     1   0   0   0   16
       1/2    0  -5/2   1/2  -2  -3/2    0  -1/2   0   0   0   -8
        -1    0    -3     3   0     0    0    -1   0   0   0  -16

x̃ = (0,8,0,0,0,0,0)

Actually x̃ is a dominated extreme point and the corresponding Λ(x̃) is as

follows:

           (1/2)λ1 − λ2 ≥ 0        1.
           (5/2)λ1 + 3λ2 ≤ 2       2.
           (1/2)λ1 + 3λ2 ≥ 2       3.
 Λ(x̃):     2λ1 ≤ 0                 4.
           (3/2)λ1 ≤ 1             5.
           (1/2)λ1 + λ2 ≤ 1        6.

Notice that Λ(x̃) is an empty set, as can be seen in Figure 2.5.1.2.

Figure 2.5.1.2.

Therefore, no linear combination of c¹·x, c²·x and c³·x can reach its

maximum at x̃.

Let us go back to Tableau (2-5-1-2) and introduce the first column.

We get:

 x1     1   2   1   1   2   1   2   1   0   0   0   16
 y2     0   3   2   3  (6)  2   5   2   1   0   0   48
 y3     0   2   2   1   4   1   0   1   0   1   0   32
 y4     0   1  (2) -1   1  -2  -1   0   0   0   1   16   (2-5-1-6)
        0   0   2  -2   0   1   1   1   0   0   0   16
        0  -1  -3   0  -3  -2  -1  -1   0   0   0  -16
        0   2  -2   4   2   1   2   0   0   0   0    0

x¹ = (16,0,0,0,0,0,0)

Construct Λ̄(x¹) = Λ(x¹) ∩ Λ; after discarding the redundant con-

straints we get:

           3λ1 + 2λ2 ≤ 2     3.
           4λ2 ≥ 2           4.
 Λ̄(x¹):   −3λ1 + 2λ2 ≥ 0     5.
           λ1 + λ2 ≤ 1
           λ1, λ2 ≥ 0

See Figure 2.5.1.1.

Notice at λ1 = 1/3, λ2 = 1/2 the constraints corresponding to the third,

fourth and fifth columns are active and may be introduced into the basis

according to Theorem 2.3.4. However, only the third and the fifth are

eligible since the fourth would lead us back to x⁰. Choose the third

column first:

 x1     1   3/2   0  (3/2) (3/2)   2    5/2   1   0   0  -1/2    8
 y2     0     2   0     4     5    4      6   2   1   0    -1   32
 y3     0     1   0     2     3    3      1   1   0   1    -1   16
 x3     0   1/2   1  -1/2   1/2   -1   -1/2   0   0   0   1/2    8   (2-5-1-7)
        0    -1   0    -1    -1    3      2   1   0   0    -1    0
        0   1/2   0  -3/2  -3/2   -5   -5/2  -1   0   0   3/2    8
        0     3   0     3     3   -1      1   0   0   0     1   16

x² = (8,0,8,0,0,0,0)

Construct Λ̄(x²):

          −(3/2)λ1 + 3λ2 ≥ 1     4.
          −(3/2)λ1 + 3λ2 ≥ 1     5.
 Λ̄(x²):   (3/2)λ1 + λ2 ≥ 1       11.
           λ1 + λ2 ≤ 1
           λ1, λ2 ≥ 0

See Figure 2.5.1.1.

Remark 2.5.1.3. Tableau (2-5-1-7) may be used to demonstrate the 3rd

difficulty discussed in section 2.5. Notice that for the fourth and

fifth columns we have obtained identical constraints. Introducing any

of these two we get identical polyhedra, say Λ(x³) and Λ(x⁴). However,

the associated extreme points will be different, i.e., x³ ≠ x⁴. This

is in agreement with the theory, since no two polyhedra may have a common

interior point, unless they are equal.

Let us demonstrate the Remark 2.5.1.3. Introduce the fourth and

fifth columns subsequently to move from x² to x³ and from x² to x⁴:

 x4    2/3    1   0   1   1    4/3    5/3   2/3   0   0  -1/3   16/3
 y2   -8/3   -2   0   0   1   -4/3   -2/3  -2/3   1   0   4/3   32/3
 y3   -4/3   -1   0   0   1    1/3   -7/3  -1/3   0   1  -1/3   16/3
 x3    1/3    1   1   0   1   -1/3    1/3   1/3   0   0   1/3   32/3   (2-5-1-8)
       2/3    0   0   0   0   13/3   11/3   5/3   0   0  -4/3   16/3
         1    2   0   0   0     -3      0     0   0   0     1     16
        -2    0   0   0   0     -5     -4    -2   0   0     2      0

x³ = (0,0,32/3,16/3,0,0,0)

          −λ1 + 2λ2 ≤ 2/3        1.
           3λ1 + 5λ2 ≤ 13/3      6.
 construct Λ̄(x³):
           λ1 + 2λ2 ≥ 4/3        11.
           λ1 + λ2 ≤ 1
           λ1, λ2 ≥ 0

See Figure 2.5.1.1.

Notice in Figure 2.5.1.1. that we would never introduce the sixth column

since the associated constraint (better, the face) does not have a point

in common with A.

Introducing the fifth column in Tableau (2-5-1-8):

 x5     2/3    1   0   1   1    4/3    5/3   2/3   0   0  -1/3   16/3
 y2   -10/3   -3   0  -1   0   -8/3   -7/3  -4/3   1   0   5/3   16/3
 y3      -2   -2   0  -1   0     -1     -4    -1   0   1     0      0
 x3    -1/3    0   1  -1   0   -5/3   -4/3  -1/3   0   0   2/3   16/3   (2-5-1-9)
        2/3    0   0   0   0   13/3   11/3   5/3   0   0  -4/3   16/3
          1    2   0   0   0     -3      0     0   0   0     1     16
         -2    0   0   0   0     -5     -4    -2   0   0     2      0

x⁴ = (0,0,16/3,0,16/3,0,0)

          −λ1 + 2λ2 ≤ 2/3        1.
           3λ1 + 5λ2 ≤ 13/3      6.
 construct Λ̄(x⁴):
           λ1 + 2λ2 ≥ 4/3        11.
           λ1 + λ2 ≤ 1
           λ1, λ2 ≥ 0

See Figure 2.5.1.1.

Notice Λ̄(x³) = Λ̄(x⁴), x³ ≠ x⁴. This indicates that x³, x⁴ are

alternative solutions for P_λ, λ ∈ Λ̄(x³) = Λ̄(x⁴).

We may use Tableau (2-5-1-7) also to demonstrate Remark 2.3.12 of

section 2.3.

We construct Λ(x²) as:

           (1/2)λ1 + 3λ2 ≥ 1        2.
          −(3/2)λ1 + 3λ2 ≥ 1        4.
          −(3/2)λ1 + 3λ2 ≥ 1        5.
 Λ(x²):    5λ1 + λ2 ≤ 3             6.
           (5/2)λ1 − λ2 ≤ 2         7.
           λ1 ≤ 1                   8.
           (3/2)λ1 + λ2 ≥ 1         11.

Graphically the Λ(x²) is represented in Figure 2.5.1.3.

Figure 2.5.1.3.

Notice that the face corresponding to the sixth column (i.e., constraint

6.) of Λ(x²) satisfies

H6 ∩ Λ(x²) ∩ Int Λ = ∅.

Using the sixth column as our pivot column we get:

 x6    1/2    3/4   0    3/4    3/4   1    5/4   1/2   0   0  -1/4    4
 y2     -2     -1   0      1      2   0      1     0   1   0     0   16
 y3   -3/2   -5/4   0   -1/4    3/4   0  -11/4  -1/2   0   1  -1/4    4
 x3    1/2    5/4   1    1/4    5/4   0    3/4   1/2   0   0   1/4   12
      -3/2  -13/4   0  -13/4  -13/4   0   -7/4  -1/2   0   0  -1/4  -12
       5/2   17/4   0    9/4    9/4   0   15/4   3/2   0   0   1/4   28
       1/2   15/4   0   15/4   15/4   0    9/4   1/2   0   0   3/4   20

x̃ = (0,0,12,0,0,4,0) ∈ V.
Let us return to the Tableau (2-5-1-6) and consider Λ̄(x¹) again.

The constraint −3λ1 + 2λ2 ≥ 0 has only one single point in common with

Λ̄(x¹), specifically λ1 = 1/3, λ2 = 1/2. Then, of course, the adjacent

polyhedron resulting from introducing the fifth column, say Λ(x⁵),

will have only this one point in common with Λ̄(x¹):

 x1      1    1   1/3    0   0   1/3    1/3   1/3  -1/3   0   0    0
 x5      0  1/2   1/3  1/2   1   1/3    5/6   1/3   1/6   0   0    8
 y3      0    0   2/3   -1   0  -1/3  -10/3  -1/3  -2/3   1   0    0
 y4      0  1/2   5/3 -3/2   0  -7/3  -11/6  -1/3  -1/6   0   1    8   (2-5-1-10)
         0    0     2   -2   0     1      1     1     0   0   0   16
         0  1/2    -2  3/2   0    -1    3/2     0   1/2   0   0    8
         0    1  -8/3    3   0   1/3    1/3  -2/3  -1/3   0   0  -16

x⁵ = (0,0,0,0,8,0,0)

           2λ1 + (8/3)λ2 ≤ 2          3.
           (3/2)λ1 + 3λ2 ≥ 2          4.
 construct Λ̄(x⁵):
           (1/2)λ1 − (1/3)λ2 ≥ 0      9.
           λ1 + λ2 ≤ 1
           λ1, λ2 ≥ 0

See Figure 2.5.1.1.
Introducing the ninth column in (2-5-1-10) would lead us back to x¹.

Similarly, introducing the fourth column would lead to x⁰. So, only the

third column might be eligible for an introduction. (We would ultimately

move through the series of degenerate iterations to x³ or x⁴.)

Remark 2.5.1.4. Notice that the point λ1 = 1/3, λ2 = 1/2 belongs to the

boundary of all polyhedra in Figure 2.5.1.1. The space Λ is completely

covered with no "holes" and still not all nondominated extreme points

may be generated, as is demonstrated by x⁵ and Λ̄(x⁵) = {(1/3, 1/2)}.

Let us review all the calculated extreme points. From x⁰ by

introducing the fifth column we would move directly to x⁵, so this

transformation does not have to be performed. Also, from all the remain-

ing extreme points we cannot move to any yet unexplored nondominated

extreme point. The algorithm would end here.

Remark 2.5.1.5. Because of degeneracies we may have two or more different

polyhedra corresponding to the same extreme point (but two or more dif-

ferent bases). However, this can always be resolved through a series

of degenerate iterations. Consider the Tableau (2-5-1-7), introduce the

fifth column but this time choose 3 to be the pivot element. We get:

 x1      1     1   0   1/2   0   1/2     2   1/2   0  -1/2    0      0
 y2      0   1/3   0   2/3   0    -1  13/3   1/3   1  -5/3  2/3   16/3
 x5      0   1/3   0   2/3   1     1   1/3   1/3   0   1/3 -1/3   16/3
 x3      0   1/3   1  -5/6   0  -3/2  -2/3  -1/6   0  -1/6  2/3   16/3   (2-5-1-11)
         0  -2/3   0  -1/3   0     4   7/3   4/3   0   1/3 -4/3   16/3
         0     1   0  -1/2   0  -7/2    -2  -1/2   0   1/2    1     16
         0     2   0     1   0    -4     0    -1   0    -1    2      0

x̄⁴ = (0,0,16/3,0,16/3,0,0),

which is the same as x⁴.

          −(1/2)λ1 + λ2 ≥ 1/3        4.
           (7/2)λ1 + 4λ2 ≤ 4         6.
 construct Λ̄(x̄⁴):
          −(1/2)λ1 + λ2 ≤ 1/3        10.
           λ1 + 2λ2 ≥ 4/3            11.
           λ1 + λ2 ≤ 1
           λ1, λ2 ≥ 0

See Figure 2.5.1.1.

Notice that we can move from x̄⁴ to x⁴ by a series of degenerate

iterations.

Considering only the calculated x⁰, x¹, x², x³, we

see that the union of the corresponding Λ̄(x^i) does not yet cover Λ.

Let us summarize the results of the calculated example. In Figure

2.5.1.1. the decomposition of Λ is represented. Notice mainly the Λ̄(x⁵),

Λ̄(x̄⁴). We list all N-points together with the values of the objective

functions.

         x⁰     x¹     x²      x³      x⁴     x⁵

 c¹·x    32      0      8    21.33   21.33    24
 c²·x   -16     16     16     5.33    5.33     0
 c³·x    48     16      0     5.33    5.33    16
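The listed points can be cross-checked for mutual nondominance directly from their outcome vectors (the rounded 21.33 and 5.33 are exactly 64/3 and 16/3):

```python
from fractions import Fraction

def dominates(a, b):
    """a dominates b if a >= b componentwise and a != b."""
    return all(ai >= bi for ai, bi in zip(a, b)) and a != b

F = Fraction
outcomes = {                    # (c1.x, c2.x, c3.x) from the table above
    "x0": (32, -16, 48),
    "x1": (0, 16, 16),
    "x2": (8, 16, 0),
    "x3": (F(64, 3), F(16, 3), F(16, 3)),
    "x4": (F(64, 3), F(16, 3), F(16, 3)),
    "x5": (24, 0, 16),
}
pairs = [(p, q) for p in outcomes for q in outcomes
         if p != q and dominates(outcomes[p], outcomes[q])]
print(pairs)  # [] -- no listed point dominates another
```

Note x³ and x⁴ have identical outcome vectors, so neither dominates the other; this is the alternative-solutions situation of Remark 2.5.1.3.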
- 59-

Figure 2.5.1.1.

Remark 2.5.1.6. To emphasize the ambiguity of constraint redundancy,

let us make the following remark. We have shown in Remark 2.5.1.2.

that introducing a column corresponding to a redundant constraint does

not have to lead to a meaningful decomposition. See, for example,

Figure 2.5.1.2. We cannot, however, conclude that an introduction of

a column corresponding to a redundant constraint will always lead only

to a dominated solution. The following counterexample can be given:

subject to

    2x₁ + x₂ ≤ 20
    (5/6)x₁ + x₂ ≤ 10
    x₁ + x₂ ≥ 10

We get the following feasible solution, say x⁰:

            1    2    3    4    5
    y₁      0    0    1    0    0    10        x₁ = 0, x₂ = 10, y₁ = 10, y₂ = 0, y₃ = 0
    x₁      1    0    0   -6   -6     0
    x₂      0    1    0    6    5    10
            0    0    0  -18  -19    10        x⁰ = (0, 10)
            0    0    0    6    5    10   ⟹   x⁰ ∈ N

Λ(x⁰) is given by

    -18λ₁ + 6λ₂ ≥ 0      (redundant)
    -19λ₁ + 5λ₂ ≥ 0

Obviously the first constraint is redundant and is not effectively binding Λ(x⁰). Considering λ₁ + λ₂ = 1, i.e., λ₁ = 1 - λ₂, we get

    24λ₂ ≥ 18      (redundant)
    24λ₂ ≥ 19

Let us, however, introduce the column corresponding to the redundant constraint, i.e., the fourth column:

    y₂      0    0   1/6    1   7/6    5/3
    x₁      1    0    1     0    1     10        x = (10, 0)
    x₂      0    1   -1     0   -2      0
            0    0   18     0    2     40        nondominated solution
            0    0   -6     0   -2      0

Introducing the fifth column in x⁰ (nonredundant), we obtain also a nondominated solution:

    y₃      0    0    1/7    6/7    1    10/7
    x₁      1    0    6/7   -6/7    0    60/7        x = (60/7, 20/7), which is
    x₂      0    1   -5/7   12/7    0    20/7        nondominated as can be
            0    0   19/7  -12/7    0   260/7        seen in Figure 2.5.1.4.
            0    0   -5/7   12/7    0    20/7

Figure 2.5.1.4.

LINEAR MULTIOBJECTIVE PROGRAMMING II.


3. Finding Nondominated Extreme Points -- A Second Approach
(Multicriteria Simplex Method)

The discussions of Section 2.5 revealed some of the difficulties which we may encounter in generating N_ex via the decomposition of the parametric space Λ.

In this section we shall discuss a modification of the simplex method where the decision about nondominance of an extreme point is not based on the decomposition of Λ, though this decomposition may be a natural byproduct, if required.

3.1 Basic Theorems

Let the set of feasible solutions X be defined as in (2-2-3) of Part I. Consider the problem of maximizing a single linear objective function, say c¹·x. A general simplex tableau for such a problem may be constructed as:

                            c₁¹ ... c_m¹    c¹_{m+1} ... c¹_j ... c¹_n
    r   BASIS   c¹    x₀    x₁  ... x_m     x_{m+1}  ... x_j  ... x_n
    1   x₁      c₁¹   y₁⁰    1  ...  0      y₁,m+1   ... y₁j  ... y₁n
    .   .       .     .      .       .      .            .        .
    m   x_m     c_m¹  y_m⁰   0  ...  1      y_m,m+1  ... y_mj ... y_mn
                      z₀¹    0  ...  0      z^(1)_{m+1} ... z^(1)_j ... z^(1)_n

    Table 3.1.1.

where z_j^(1) = Σ_{r=1}^{m} c_r¹ y_rj - c_j¹, j = 1,...,n, and z₀¹ = Σ_{r=1}^{m} c_r¹ y_r⁰.

If all z_j^(1) ≥ 0 for all j, then x⁰ = (y₁⁰, y₂⁰, ..., y_m⁰, 0, ..., 0) is a maximal feasible solution with c¹·x⁰ = z₀¹; x⁰ is an extreme point of X.

Remark 3.1.1. Notice x⁰ ∈ N_ex if x⁰ is unique, i.e., all z_j^(1) > 0 for j = m+1,...,n. In the case of alternate solutions, we may discard those which are dominated, as discussed in Part I. (See Lemma 2.6.)

Assume now that ℓ linear objective functions are involved, i.e.,

    cⁱ·x,  i = 1,...,ℓ.

With each basic solution there are now associated ℓ criterial rows of z_j^(i), i = 1,...,ℓ and j = 1,...,n, instead of a single row of z_j^(1). Notice that for all basic columns j, z_j^(i) = 0. Then z_j^(i) = Σ_{r=1}^{m} c_rⁱ y_rj - c_jⁱ, and the corresponding value of the ith objective function is given by z₀ⁱ, i = 1,...,ℓ.

Corresponding to each nonbasic column of Table 3.1.1., there is a column vector

    z_j = (z_j^(1), z_j^(2), ..., z_j^(ℓ))ᵀ        (3-1-1)

With each basic solution (and its tableau), say x⁰, there is associated a vector of values of the ℓ objectives:

    z₀ = (z₀¹, z₀², ..., z₀^ℓ)ᵀ        (3-1-2)

Recall θ_j as it is defined in (2-3-23). If we introduce the jth column into the basis, we get a new basic solution, say x¹, and also a new vector ẑ₀ for which the following relation holds:

    ẑ₀ = z₀ - θ_j z_j,   i.e.,   ẑ₀ⁱ = z₀ⁱ - θ_j z_j^(i),  i = 1,...,ℓ.        (3-1-3)
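Relation (3-1-3) is a componentwise update of the objective-value vector; a minimal numeric sketch (the values of z₀, z_j, and θ_j below are made up for illustration):

```python
import numpy as np

# z_o: objective values at the current basis; z_j: criterial column of the
# entering variable; theta: the usual min-ratio step length of (2-3-23).
z_o = np.array([10.0, 4.0, -3.0])
z_j = np.array([-2.0, 1.0, 0.0])
theta = 3.0

z_new = z_o - theta * z_j   # (3-1-3): objective values after the pivot
print(z_new)                # [16.  1. -3.]
```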

Theorem 3.1.2. Given a basic feasible solution x⁰ and assuming θ_j > 0 for j ∈ J, then

(a) if z_j ≤ 0 (i.e., all z_j^(i) ≤ 0 and at least one z_j^(i) < 0), then x⁰ ∉ N;

(b) if z_j ≥ 0, then introducing the jth column into the basis will lead to a dominated solution.

Proof. For (a): introducing the jth column into the basis, we get a new adjacent extreme point, say x¹, for which ẑ₀ ≥ z₀ because -θ_j z_j ≥ 0.

For (b): introducing the jth column, we get an adjacent extreme point x¹ for which -θ_j z_j ≤ 0. Q.E.D.
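The componentwise orderings used in Theorem 3.1.2. (z_j ≤ 0, z_j ≥ 0, noncomparability with 0) can be made concrete with a small helper; this is an illustrative sketch, not part of the original algorithm:

```python
def dominates(a, b):
    """True if a >= b componentwise with at least one strict inequality."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def noncomparable_with_zero(z):
    """z has both a positive and a negative component (neither z >= 0 nor z <= 0)."""
    return any(x > 0 for x in z) and any(x < 0 for x in z)

# Theorem 3.1.2(a): if z_j <= 0, i.e. the zero vector dominates z_j, the
# adjacent point improves every objective, so x0 is dominated.
print(dominates([0, 0, 0], [-1, 0, -2]))    # True
print(noncomparable_with_zero([1, -1, 0]))  # True
```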

Remark 3.1.3. For the discussion of the degenerate case and that of θ_j = 0, see Remarks 3.1.9. and 3.1.10. Notice that the jth column should never be introduced if z_j ≥ 0 and θ_j > 0 at x⁰.

Theorem 3.1.4. Given a basic feasible solution x⁰, if there are columns j, k such that θ_j z_j ≤ θ_k z_k (i.e., θ_j z_j^(i) ≤ θ_k z_k^(i) for all i, with strict inequality for at least one i), j ≠ k, and j, k ∈ J, then the solution resulting from introducing the kth column is dominated by the solution resulting from introducing the jth column.

Proof. Introducing the kth column, we get ẑ₀; and introducing the jth column, we get z̃₀. Then ẑ₀ = z₀ - θ_k z_k and z̃₀ = z₀ - θ_j z_j. Since -θ_k z_k ≤ -θ_j z_j, then z̃₀ ≥ ẑ₀. Q.E.D.

Remark 3.1.5. Looking at the criterial rows at each iteration, if z_j^(i) ≥ 0 for all j ∈ J, then the ith objective function is at its maximum and the corresponding basic solution is nondominated, provided there is no column k ∈ J with z_k^(i) = 0 (i.e., no alternate optimal solution).
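The check in Remark 3.1.5. amounts to scanning one criterial row over the nonbasic columns; a small sketch (Z holds the rows z^(i), J the nonbasic column indices):

```python
def objective_status(Z, i, J):
    """Return (at_max, unique): the i-th objective is at its maximum iff
    z_j^(i) >= 0 for all nonbasic j; the maximum is unique iff strict."""
    row = Z[i]
    at_max = all(row[j] >= 0 for j in J)
    unique = at_max and all(row[j] > 0 for j in J)
    return at_max, unique

Z = [[2, 0, 3], [1, -1, 2]]   # two criterial rows; nonbasic columns J = [0, 1, 2]
print(objective_status(Z, 0, [0, 1, 2]))   # (True, False): alternate optimum in column 1
print(objective_status(Z, 1, [0, 1, 2]))   # (False, False)
```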

Assume that for a basic feasible solution x there is no column with z_j ≤ 0. Nonbasic columns with z_j ≥ 0, and those columns k ∈ J for which θ_j z_j ≤ θ_k z_k for some j ≠ k, cannot be considered for an introduction. Suppose also there is no criterial row i with z_j^(i) ≥ 0 for all j ∈ J. Then the only columns eligible for an introduction are those which are noncomparable (see notation in 1.3.) with the zero vector, and among these only those not excluded by Theorem 3.1.4. We have to determine whether the corresponding x is dominated or nondominated.
n
Since x̄ ∈ X, i.e., Σ_{j=1}^{n} a_rj x_j = b_r, r = 1,...,m, let us add the following constraints to X:

    cⁱ·x ≥ cⁱ·x̄,  i = 1,...,ℓ,

where cⁱ·x̄ denotes the value of cⁱ·x at x̄ ∈ X. Adding the surplus and artificial variables to these new constraints, we can write

    cⁱ·x - ε_i + y_i = cⁱ·x̄,  i = 1,...,ℓ.        (3-1-4)

Notice that at x = x̄, ε_i = 0 and y_i = 0 for all i.

Consider the following LP problem:

    Max v = Σ_{i=1}^{ℓ} ε_i

subject to

    Ax = b
    cⁱ·x - ε_i + y_i = cⁱ·x̄,  i = 1,...,ℓ
    ε_i ≥ 0,  x ≥ 0.        (3-1-5)

At the initial feasible solution x̄ ∈ X, all ε_i = 0 and y_i = 0 imply Max v ≥ 0. Suppose that Max v > 0; then at least one ε_i > 0. Since cⁱ·x = cⁱ·x̄ + ε_i, we have cⁱ·x > cⁱ·x̄ for at least one i. Thus cx ≥ cx̄ and x̄ ∈ V.

If x̄ is a maximal solution to problem (3-1-5), i.e., Max v = 0 and all ε_i = 0, then there is no feasible x ∈ X such that cx ≥ cx̄, i.e., x̄ ∈ N. The above discussion may be summarized in Theorem 3.1.6.

Theorem 3.1.6. Solve the following LP problem: Max v, v = Σ_{i=1}^{ℓ} ε_i, subject to the set of constraints

    X̄ = {(x, ε) | x ∈ X, ε ≥ 0, Cx - ε ≥ Cx̄}.

Then x̄ ∈ V if and only if Max v > 0, and x̄ ∈ N if and only if Max v = 0.
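Theorem 3.1.6. reduces the nondominance question to one auxiliary LP. A minimal sketch using `scipy.optimize.linprog` (the small feasible set X, the matrix C, and the test points below are made up for illustration):

```python
import numpy as np
from scipy.optimize import linprog

def max_v(A_ub, b_ub, C, x_bar):
    """Solve Max v = sum(eps) s.t. x in X, C x - eps = C x_bar, eps >= 0.
    By Theorem 3.1.6., x_bar is nondominated iff the optimum is 0."""
    m, n = A_ub.shape
    l = C.shape[0]
    cost = np.concatenate([np.zeros(n), -np.ones(l)])   # linprog minimizes
    A1 = np.hstack([A_ub, np.zeros((m, l))])            # A x <= b
    A_eq = np.hstack([C, -np.eye(l)])                   # C x - eps = C x_bar
    res = linprog(cost, A_ub=A1, b_ub=b_ub, A_eq=A_eq, b_eq=C @ x_bar,
                  bounds=[(0, None)] * (n + l))
    return -res.fun

A = np.array([[1.0, 1.0]]); b = np.array([1.0])         # X: x1 + x2 <= 1, x >= 0
C = np.array([[1.0, 0.0], [0.0, 1.0]])                  # maximize x1 and x2
print(max_v(A, b, C, np.array([0.0, 0.0])))             # 1.0 > 0: dominated
print(max_v(A, b, C, np.array([1.0, 0.0])))             # 0.0: nondominated
```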
In order to derive an efficient method for an application of Theorem 3.1.6., let us consider incorporating ℓ additional constraints of the type (3-1-4) in Table 3.1.1. We get the following general simplex tableau (Table 3.1.2.).

At x = x̄ ∈ X, y_i = ε_i = 0; therefore, the artificial variables Y₁, ..., Y_ℓ are in the basis at zero level.

Assume that for some j, y_rj ≠ 0, r = m+1,...,m+ℓ (y_rj can be either positive or negative). Then the corresponding artificial vector may be removed from the basis and replaced by x_j. Since the artificial variable was at a zero level, x_j will enter the basis at a zero level, and the new basic solution will stay feasible, corresponding to the same extreme point x̄. If this
stay feasible, corresponding to the same extreme point x. If this
    r        x₁ ... x_m   x_{m+1} ... x_j ... x_n         ε₁ ... ε_ℓ   Y₁ ... Y_ℓ    x₀
    1    x₁    1 ...  0   y₁(m+1)  ... y₁j ... y₁n         0 ...  0     0 ...  0     y₁⁰
    .          .          .                                .            .            .
    m    x_m   0 ...  1   y_m(m+1) ... y_mj ... y_mn       0 ...  0     0 ...  0     y_m⁰
    m+1  Y₁    0 ...  0   y_(m+1)(m+1)  ... y_(m+1)n      -1 ...  0     1 ...  0     0
    .          .          .                                .            .            .
    m+ℓ  Y_ℓ   0 ...  0   y_(m+ℓ)(m+1)  ... y_(m+ℓ)n       0 ... -1     0 ...  1     0
               0 ...  0   z^(1)_{m+1} ... z^(1)_n          0 ...  0     0 ...  0     z₀¹
               .          .                                .            .            .
               0 ...  0   z^(ℓ)_{m+1} ... z^(ℓ)_n          0 ...  0     0 ...  0     z₀^ℓ

    Table 3.1.2.

process can be continued until all artificial vectors are removed, we obtain a "degenerate" basic feasible solution:

    x = (y₁⁰, y₂⁰, ..., y_m⁰, 0, ..., 0),

where all ε_i, i = 1,...,ℓ, are outside the new basis. If we are able to introduce at least one ε_i into the basis at a positive level, then Σ_{i=1}^{ℓ} ε_i > 0 and x̄ ∈ V.

If the above procedure does not remove all artificial vectors, we must ultimately reach a state where y_rj = 0 for all x_j and all r corresponding to the rows containing artificial variables at a zero level. If there are k artificial variables left in the basis at a zero level, then k of the ℓ added constraints are redundant and do not have to be considered.

In order to simplify our analysis of Table 3.1.2., let us define the following symbols:

    C   - (ℓ × n) matrix of coefficients of the ℓ linear objective functions
    A   - (m × n) matrix of technological coefficients of the m linear constraints
    B_k - (m × m) matrix of basic vectors at the kth iteration
    x   - n-dimensional vector
    b   - m-dimensional vector (corresponding to the right hand sides of the constraints in X)
    C_B - (ℓ × m) matrix of coefficients corresponding to basic vectors in B_k
    I   - identity matrix (of proper order)
    0   - zero matrix (of proper order)

1) The original problem (see Table 3.1.2.)

    V-Max cⁱ·x,  i = 1,...,ℓ
    subject to  Ax = b,  x ≥ 0

may have the associated initial simplex tableau written in matrix notation as follows:

Tableau 1.

    (1)    [  A  |  I_{m×m}  |  b ]
    (2)    [ -C  |  0_{ℓ×m}  |  0_{ℓ×1} ]

Let B_k be a feasible basis corresponding to the extreme point x̄ and B_k⁻¹ its inverse. Then the simplex tableau associated with B_k is given by:

Tableau 2.

    (3)    [ B_k⁻¹A          |  B_k⁻¹      |  B_k⁻¹b ]
    (4)    [ C_B B_k⁻¹A - C  |  C_B B_k⁻¹  |  C_B B_k⁻¹b ]

Row (4) = C_B B_k⁻¹ · Row (1) + Row (2). Also, B_k⁻¹b is an m-dimensional vector constituting the first m elements of x̄, and C_B B_k⁻¹b is an ℓ-dimensional vector of the values of the objective functions at x̄, i.e., Cx̄ = C_B B_k⁻¹b.

Let us now explore the same problem with the constraints (3-1-4) appended to the original constraints in Tableau 1:

Tableau 3.

    (5)    [  A  |  I_{m×m}  |  0_{m×ℓ}   |  0_{m×ℓ}  |  b ]
    (6)    [  C  |  0_{ℓ×m}  |  -I_{ℓ×ℓ}  |  I_{ℓ×ℓ}  |  Cx̄ ]
    (7)    [ -C  |  0_{ℓ×m}  |  0_{ℓ×ℓ}   |  0_{ℓ×ℓ}  |  0_{ℓ×1} ]
With respect to B_k, Tableau 3 could be written as:

Tableau 4.

    (8)    [ B_k⁻¹A          |  B_k⁻¹       |  0_{m×ℓ}   |  0_{m×ℓ}  |  B_k⁻¹b ]
    (9)    [ C - C_B B_k⁻¹A  |  -C_B B_k⁻¹  |  -I_{ℓ×ℓ}  |  I_{ℓ×ℓ}  |  0_{ℓ×1} ]
    (10)   [ C_B B_k⁻¹A - C  |  C_B B_k⁻¹   |  0_{ℓ×ℓ}   |  0_{ℓ×ℓ}  |  C_B B_k⁻¹b ]

Note. Row (8) = B_k⁻¹ · Row (5); Row (9) = Row (6) - C_B B_k⁻¹ · Row (5); Row (10) = Row (7) + C_B B_k⁻¹ · Row (5).

Remark 3.1.7.

(i) Notice the right hand side of Row (9) is 0_{ℓ×1} because Cx̄ = C_B B_k⁻¹b.

(ii) Comparing the Rows (9) and (10), notice that because of the second column, we can write y_rj = -z_j^(r-m) for r = m+1,...,m+ℓ and j = m+1,...,n. (See Table 3.1.2.) Since the values of z_j^(r-m) are already known as the indicators in the criterial rows for the basic solution x̄, we can find y_rj directly without recalculating the tableau.
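Remark 3.1.7.(ii) means the appended artificial rows come for free once the criterial rows are known; a one-line sketch with made-up indicator values:

```python
import numpy as np

# Criterial indicators z_j^(i) for the nonbasic columns of some basis
# (made-up numbers): row i corresponds to objective i.
Z = np.array([[1.0, -2.0, 0.0],
              [3.0,  1.0, -1.0]])

Y_appended = -Z   # Remark 3.1.7(ii): y_rj = -z_j^(r-m), no recomputation needed
print(Y_appended)
```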

2) The new problem, as it is formulated in (3-1-5), may be constructed by replacing Row (10) of Tableau 4 with a new criterial Row (iii):

Tableau 5.

    (i)     [ B_k⁻¹A          |  B_k⁻¹       |  0_{m×ℓ}   |  0_{m×ℓ}  |  B_k⁻¹b ]
    (ii)    [ C - C_B B_k⁻¹A  |  -C_B B_k⁻¹  |  -I_{ℓ×ℓ}  |  I_{ℓ×ℓ}  |  0_{ℓ×1} ]
    (iii)   [ 0_{1×n}         |  0_{1×m}     |  -1_{1×ℓ}  |  1_{1×ℓ}  |  0 ]

Removing the artificial variables from the basis by using the two-phase method, we get

    Row (v)  = -1_{ℓ×ℓ} · Row (ii)
    Row (vi) = Row (iii) + 1_{1×ℓ} · Row (v)

of Tableau 6:

Tableau 6.

    (iv)    [ B_k⁻¹A                   |  B_k⁻¹               |  0_{m×ℓ}  |  0_{m×ℓ}   |  B_k⁻¹b ]
    (v)     [ C_B B_k⁻¹A - C           |  C_B B_k⁻¹           |  I_{ℓ×ℓ}  |  -I_{ℓ×ℓ}  |  0_{ℓ×1} ]
    (vi)    [ 1_{1×ℓ}[C_B B_k⁻¹A - C]  |  1_{1×ℓ}[C_B B_k⁻¹]  |  0_{1×ℓ}  |  1_{1×ℓ}   |  0 ]

(The column of artificial variables can be omitted.)

Observe that Tableau 6 supplies a basic feasible solution to our new problem. The columns corresponding to artificial variables could be dropped from consideration.

Comparing Tableau 6 with Tableau 2, notice that the first one may be constructed directly from the latter one. The Row (vi) represents the new criterial indicators of the new problem. From this row, optimality as well as nondominance can be checked. If there is an element of Row (vi) which is negative, say the jth, and all elements of the jth column in Row (v) are negative, then for θ_j > 0 we may conclude that Max v > 0 and the corresponding basis x̄ is dominated. To support this conclusion, refer to Theorems 3.1.2. and 3.1.6. The above property will be used in the subroutine for checking the nondominance. For the summary of this discussion, see Figure 3.4.1., giving the block diagram for the nondominance subroutine.

Before we get into the details of the algorithmic procedure, let us return to the decomposition of the Λ-space.


In the multicriteria simplex method, we have appended ℓ criterial rows to the simplex tableau instead of a single one for Pλ (see Table 3.1.2.). Since we are interested in reducing the dimension of the parametric space Λ, let us consider (ℓ+1) objectives to be appended.

Then with each j ∈ J, there is associated a column vector (compare with (3-1-1)):

    z_j = (z_j^(1), z_j^(2), ..., z_j^(ℓ+1))ᵀ        (3-1-6)

We would like to show that z_j determines Λ(x̄) completely. For a definition of Λ(x̄) recall Theorem 2.3.2.

Let

    Σ_{i=1}^{ℓ+1} λ_i z_j^(i) = (1 - Σ_{i=1}^{ℓ} λ_i) z_j^(ℓ+1) + Σ_{i=1}^{ℓ} λ_i z_j^(i)
                              = z_j^(ℓ+1) + Σ_{i=1}^{ℓ} λ_i [z_j^(i) - z_j^(ℓ+1)].

From (2-3-4), we may see that y_j = z_j^(ℓ+1) and β_j^(i) = z_j^(i) - z_j^(ℓ+1).

To demonstrate the above discussed interrelation of the two methods, let us choose some basic solution and its tableau (for example, Tableau (2-5-1-6)):

     1.   2.   3.   4.   5.   6.   7.   8.   9.   10.  11.

xl 1 2 1 1 2 1 2 1 0 0 0 16

Y2 0 3 2 3 6 2 5 2 1 0 0 48

Y3 0 2 2 1 4 1 0 1 0 1 0 32

Y4 0 1 2 -1 1 -2 -1 0 0 0 1 16

0 0 2 -2 0 1 1 1 0 0 0 16

0 -1 -1 -2 -3 -1 0 0 0 0 0 0

0 2 0 2 2 2 3 1 0 0 0 16

We may see how we can read the constraints of Λ(x¹) from the last three rows of this tableau. (We consider the first row as representing the (ℓ+1)st objective function.) We get

    Nonbasic
    column j    Calculation                    Constraint

    2            0 + λ₁(-1-0) + λ₂(2-0)        -λ₁ + 2λ₂ ≥ 0
    3            2 + λ₁(-1-2) + λ₂(0-2)        3λ₁ + 2λ₂ ≤ 2    ✓
    4           -2 + λ₁(-2+2) + λ₂(2+2)        4λ₂ ≥ 2          ✓
    5            0 + λ₁(-3-0) + λ₂(2-0)        -3λ₁ + 2λ₂ ≥ 0   ✓
    6            1 + λ₁(-1-1) + λ₂(2-1)        2λ₁ - λ₂ ≤ 1
    7            1 + λ₁(0-1) + λ₂(3-1)         λ₁ - 2λ₂ ≤ 1
    8            1 + λ₁(0-1) + λ₂(1-1)         λ₁ ≤ 1

The constraints denoted by ✓ are those of Λ(x¹). A graphical representation of all the constraints in two dimensions is in Figure 3.1.1. The shaded polyhedron satisfies all the constraints. Compare Λ(x¹) here with Figure 2.5.1.1.

Figure 3.1.1.

Remark 3.1.8. Notice that the reduction of the dimension of the parametric space might be useful to facilitate graphical representation, but in general we can work with the simpler, unreduced space. Let us define z_j as in (3-1-1). Then for a given basic feasible solution x⁰, we may write

    Λ(x⁰) = {λ ∈ Λ | Σ_{i=1}^{ℓ} λ_i z_j^(i) ≥ 0, j ∈ J}.

Example. Consider the fourth column in Tableau (2-5-1-6). If we associate λ₃ with the first row, we may write the corresponding constraint as

    -2λ₃ - 2λ₁ + 2λ₂ ≥ 0,

which gives, after substituting λ₃ = 1 - λ₁ - λ₂,

    -2 + 4λ₂ ≥ 0,

and finally 4λ₂ ≥ 2 as considered previously in Figure 3.1.1.
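In the unreduced space of Remark 3.1.8., membership of λ in Λ(x⁰) is a set of linear tests on the criterial columns; a small sketch (the 3×1 column below is the fourth column of Tableau (2-5-1-6) used above, with λ ordered as the rows of the tableau):

```python
import numpy as np

def in_Lambda(lmbda, Z):
    """λ ∈ Λ(x0) iff λ >= 0, Σλ = 1, and Σ_i λ_i z_j^(i) >= 0
    for every nonbasic column j (the columns of Z)."""
    lmbda = np.asarray(lmbda, dtype=float)
    return bool((lmbda >= 0).all()
                and abs(lmbda.sum() - 1.0) < 1e-9
                and (lmbda @ Z >= -1e-9).all())

Z4 = np.array([[-2.0], [-2.0], [2.0]])    # column j = 4 of (2-5-1-6)
print(in_Lambda([0.2, 0.2, 0.6], Z4))     # -0.4 - 0.4 + 1.2 = 0.4 >= 0: True
print(in_Lambda([0.5, 0.4, 0.1], Z4))     # -1.0 - 0.8 + 0.2 < 0: False
```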

Remark 3.1.9. It is not the degeneracy which could make the subproblem indecisive, but rather the possibility of θ_j = 0 if the jth column is the one to be introduced. Notice that a degenerate solution itself does not imply θ_j = 0.

Remark 3.1.10. In order to keep track of the corresponding values of θ_j, we may add the coefficients of the row corresponding to a degenerate solution to the subproblem.

3.2. Methods for Generating Adjacent Extreme Points

The next problem which has to be discussed is how to traverse from one nondominated extreme point to another in an efficient manner. Since a finite number of nondominated extreme points is to be generated (in general, more than one), we have to design an efficient scheme for the order in which they will be generated.

We have shown in Theorem 2.4.2. that the set of nondominated extreme points is a connected set, i.e., each such point can be generated from any other by a finite series of simplex iterations. To any connected set of extreme points (in our case N_ex, the set of nondominated extreme points) a non-directed graph Γ(U,V) may be adjoined, such that:

1) The set of vertices V is formed by m-tuples of unordered integers v = {i₁, i₂, ..., i_m}. A vertex v ∈ V if and only if there is an extreme point in N_ex whose basis consists of the columns indexed by v. (Notice that two or more different v's may correspond to the same nondominated extreme point.)

2) Two vertices v¹ and v² have a distance d ≤ m if exactly d components of v² differ from components of v¹. Two vertices v¹ and v² are adjacent if and only if d = 1. Associated with adjacent vertices is either a unique extreme point or two adjacent extreme points in N_ex.

3) An arc [v¹, v²] ∈ U if and only if v¹ and v² are adjacent.

The graph Γ(U,V) is always connected. Our goal is to generate the set N_ex. Basically, two approaches are considered; see e.g. [Hadley, 1961] or [Gal, Nedoma, 1972]:

A. Complete investigation of adjacent vertices,

B. Incomplete investigation of adjacent vertices.
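The vertex distance in 2) is just the number of differing basis indices; a small sketch:

```python
def distance(v1, v2):
    """Distance between two vertices (unordered index sets of basic columns):
    the number of components of v2 that differ from v1."""
    return len(set(v2) - set(v1))

# adjacent bases differ in exactly one index (d = 1)
print(distance({1, 2, 3}, {1, 2, 5}))   # 1
print(distance({1, 2, 3}, {4, 5, 3}))   # 2
```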

A. Let Γ(v) denote the set of vertices adjacent to the vertex v, including v.

Remark 3.2.1. Let us agree that the phrase "to calculate v" means to identify the basic solution (or an extreme point) x^k = (x₁^k, ..., x_n^k) which is associated with v = {i₁, i₂, ..., i_m}.

We shall construct two sequences R₁, R₂, ... and W₁, W₂, ..., whose elements consist of vertices of Γ.



Let us assume that we have found the first nondominated extreme point x⁰ ∈ N_ex by maximizing one of the ℓ objective functions and discarding dominated alternative solutions. The indices of the basic vectors in the simplex tableau of x⁰ correspond to v₀ of Γ.

(i) Let R₁ = {v₀} and W̄₁ = Γ(v₀) - R₁. Discard those vertices from W̄₁ which are dominated by using the subroutine described in Section 3.1. Transform thus W̄₁ into W₁.

(ii) Calculate all vertices from W₁.

(iii) Let R₂ = R₁ ∪ {v₁ⁱ} and W̄₂ = ∪ᵢ Γ(v₁ⁱ) - R₂, where v₁ⁱ, i = 1,2,...,r₁, are all vertices in W₁. Transform W̄₂ into W₂.

(iv) Suppose we have constructed R_s and W_s for s = 2,3,...,k. If W_k = ∅, then all nondominated extreme points N_ex are found, i.e., R_k = V.

(v) If W_k ≠ ∅, calculate all vertices from W_k and form R_{k+1} = R_k ∪ {v_kⁱ} and W̄_{k+1} = ∪ᵢ Γ(v_kⁱ) - R_{k+1}, where v_kⁱ, i = 1,...,r_k, are all vertices of W_k. Transform W̄_{k+1} into W_{k+1} and continue until W_s = ∅, s = 2,3,...

The above procedure is similar to Hadley's approach based on complete investigation of adjacent extreme points. For small hand-computed examples this seems to be an acceptable technique, because it is easy to return to previously obtained simplex tableaus. Larger problems, requiring the use of a computer, might not be efficiently solvable by Hadley's approach: we have to store after each step k not only all vertices in R_k but also all inverse matrices from the simplex tableaus corresponding to W_{k+1}.

B. The second approach has minimal requirements on computer memory, at the cost of traversing some extreme points more than once. It is based on incomplete investigation of adjacent vertices. After constructing R₁ and its W₁, we continue as follows:

(i) Let R₁ = {v₀} and W̄₁ = Γ(v₀) - R₁. Transform W̄₁ into W₁. Choose any v₁ ∈ W₁ and calculate v₁.

(ii) Let R₂ = R₁ ∪ {v₁}, W̄₂ = W₁ ∪ Γ(v₁) - R₂. Transform W̄₂ into W₂.

(iii) Suppose we have constructed R_s and W_s for s = 1,2,...,k.

(iv) Construct R_{k+1} and W_{k+1} as follows: if v_{k-1} is the last vertex put in R_k, explore W_k for a vertex adjacent to v_{k-1}. If there is one, denote it v_k. If there is no such point in W_k (having d = 1 from v_{k-1}), look for a vertex with d = 2, then 3, 4, etc., until we find one with 1 < d ≤ m. Denote such a point as v_k and construct R_{k+1} and W̄_{k+1} as above. Transform W̄_{k+1} into W_{k+1}. Repeat the whole procedure.

After a finite number of iterations, we get W_k = ∅ and R_k = V.

Using the above technique, we have to store only one simplex tableau and one r × m matrix for the sets R_k and W_k at any kth iteration. Both approaches are compared and summarized in Figures 3.2.1. and 3.2.2.
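Method A can be sketched as a breadth-first traversal of Γ in which dominated vertices are discarded as soon as they are generated; the graph, the starting vertex, and the nondominance test below are supplied by the caller, and the tiny example data are made up for illustration:

```python
from collections import deque

def method_a(v0, neighbors, is_nondominated):
    """Complete investigation: R collects explored vertices; W is the full
    frontier of nondominated vertices whose neighbours are still unexplored."""
    R = {v0}
    W = deque([v0])
    order = [v0]
    while W:
        v = W.popleft()
        for u in neighbors(v):           # Γ(v): all adjacent vertices
            if u not in R and is_nondominated(u):
                R.add(u)
                W.append(u)
                order.append(u)
    return order

graph = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}
nondominated = {0, 1, 3}                 # vertex 2 is dominated and discarded
print(method_a(0, graph.__getitem__, nondominated.__contains__))   # [0, 1, 3]
```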

Figure 3.2.1.

Comments on Method A -- Figure 3.2.1.

(1) We calculate the first N_ex-solution x¹ and define the corresponding vertex v₁ of Γ. W_I represents all explored vertices v_I corresponding to x_I ∈ N_ex whose adjacent vertices have not yet been considered.

(2) R_{I+1} collects all explored vertices v_I corresponding to x_I ∈ N_ex whose adjacent vertices have been considered. So initially R₁ = {v₁} since R₀ = ∅.

(3) We identify all vertices v_k currently in W_I.

(4) For each v_k, we construct Γ(v_k) as defined, i.e. all vertices adjacent to each v_k. So ∪_k Γ(v_k) represents all vertices (explored or unexplored) which are directly reachable from all vertices in W_I. We construct W̄_{I+1}, which collects only those which were not yet explored.

(5) Obviously, if W̄_{I+1} = ∅ then W_{I+1} = ∅, and we stop.

(6) If W̄_{I+1} ≠ ∅, there might be vertices corresponding to dominated or to nondominated solutions. We check all v_{I+1} ∈ W̄_{I+1} for nondominance sequentially.

(7) If x_{I+1} ∈ N, then we transfer v_{I+1} into W_{I+1}.

(8) Remove the decided vertex from W̄_{I+1} and check (6) again. After all v_{I+1} ∈ W̄_{I+1} are decided, W̄_{I+1} = ∅. However, W_{I+1} may or may not be empty.

(9) If W_{I+1} = ∅, we stop: all nondominated solutions have been found. If W_{I+1} ≠ ∅, we add W_{I+1} to our current R_{I+1} formed in (2) and continue with (3).

Figure 3.2.2.

Comments on Method B -- Figure 3.2.2.

(1) The main difference from Method A is in this block. Instead of adding to R_{I+1} all adjacent vertices W_I, we add only the one which was found nondominated in (6). So initially R₁ = {v₁} because R₀ = ∅.

(2) We find Γ(v_I) only for the vertex last added to R_{I+1}. W_{I+1} now contains all unexplored vertices. So initially W₁ = Γ(v₁) - {v₁} since R₁ = {v₁} and W₀ = ∅.

(3) If W_{I+1} = ∅, there are no unexplored vertices and we stop.

(4) If W_{I+1} ≠ ∅, we choose some vertex v_{I+1} adjacent to the v_I last added to R_{I+1}. If there is none, we look for the "next closest," i.e. with the distance D+1. There is always at least one vertex v_{I+1} with the distance 1 ≤ D ≤ m (m is the number of basic columns), because W_{I+1} ≠ ∅.

(5) We select the v_{I+1} which is closest to v_I.

(6) Check the nondominance of the corresponding x_{I+1}. If x_{I+1} ∈ N, we add v_{I+1} to our current R_{I+1} formed in (1) and continue with (2). If not, we subtract v_{I+1} from the current W_{I+1} and continue with (4).

Notice that the transformation of W̄_I into W_I described in the text for Method B is replaced by block (6) of this routine.
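Method B can be sketched the same way, except that only one current vertex and the unexplored set are kept, and the next vertex chosen is always the unexplored one closest to the vertex last added (illustrative sketch with a made-up chain graph):

```python
def method_b(v0, neighbors, is_nondominated, distance):
    """Incomplete investigation: minimal memory, possibly revisiting points."""
    R, order = {v0}, [v0]
    W = {u for u in neighbors(v0) if is_nondominated(u)} - R
    v = v0
    while W:
        v = min(W, key=lambda u: distance(v, u))   # nearest unexplored vertex
        R.add(v)
        order.append(v)
        W.discard(v)
        W |= {u for u in neighbors(v) if is_nondominated(u)} - R
    return order

graph = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
order = method_b(0, graph.__getitem__, lambda u: True, lambda a, b: abs(a - b))
print(order)   # [0, 1, 2, 3]
```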



Instead of lengthy numerical examples, let us demonstrate the differences between the two techniques on a simple connected set of extreme points and its associated graph Γ. See Figure 3.2.3.: open circles are vertices corresponding to N_ex; full circles are vertices corresponding to dominated extreme points.

Figure 3.2.3.

Method A.

    W̄₄ = ∅. STOP.

Method B.

4)  R₄ = {v₀, v₁, v₂, v₃}                         W̄₄ = {v₄, v₅, v₁₀, v₁₁, v₁₉};  W₄ = {v₄, v₅, v₁₀, v₁₁}
5)  R₅ = {v₀, v₁, v₂, v₃, v₄}                     W̄₅ = {v₅, v₆, v₇, v₁₀, v₁₁} = W₅
6)  R₆ = {v₀, v₁, v₂, v₃, v₄, v₆}                 W₆ = {v₅, v₇, v₉, v₁₀, v₁₁}
7)  R₇ = {v₀, v₁, v₂, v₃, v₄, v₆, v₅}             W₇ = {v₇, v₉, v₁₀, v₁₁}
8)  R₈ = {v₀, v₁, v₂, v₃, v₄, v₆, v₅, v₉}         W₈ = {v₇, v₈, v₁₀, v₁₁, v₁₅}
9)  R₉ = {v₀, v₁, v₂, v₃, v₄, v₆, v₅, v₉, v₇}     W₉ = {v₈, v₁₀, v₁₁, v₁₅}
10) R₁₀ = {v₀, v₁, v₂, v₃, v₄, v₆, v₅, v₉, v₇, v₈}  W₁₀ = {v₁₀, v₁₁, v₁₅}
11) R₁₁ = R₁₀ ∪ v₁₅                               W₁₁ = {v₁₀, v₁₁, v₁₆}
12) R₁₂ = R₁₁ ∪ v₁₆                               W₁₂ = {v₁₀, v₁₁}
13) R₁₃ = R₁₂ ∪ v₁₀                               W₁₃ = {v₁₁, v₁₇}
14) R₁₄ = R₁₃ ∪ v₁₇                               W₁₄ = {v₁₁}
15) R₁₅ = R₁₄ ∪ v₁₁                               W₁₅ = {v₁₂, v₁₃, v₁₄}
16) R₁₆ = R₁₅ ∪ v₁₂                               W₁₆ = {v₁₃, v₁₄}
17) R₁₇ = R₁₆ ∪ v₁₃                               W₁₇ = {v₁₄}
18) R₁₈ = R₁₇ ∪ v₁₄                               W₁₈ = ∅   STOP.

3.3. Computerized Procedure -- An Example.

The multicriteria simplex method, as it has been discussed in the previous paragraphs, is still not fully suitable for computerization. Some of the computational details and additions are discussed and demonstrated on a fairly complicated example. Later on we introduce a basic block diagram from which the computer program has been developed.

We shall use an additional (composite) objective function to facilitate some of the computations. This composite function is obtained as the simple sum Σ_{i=1}^{ℓ} cⁱ·x. (See Tableau 6 of Section 3.1.)

A general simplex tableau can now be sketched as follows:

    x₁      1 ... 0    y₁(m+1)  ... y₁n               y₁⁰
    .       .          .                              .
    x_m     0 ... 1    y_m(m+1) ... y_mn              y_m⁰
    -------------------------------------------------------
            0 ... 0    z^(1)_{m+1}   ... z^(1)_n      z₀¹
            .          .                              .
            0 ... 0    z^(ℓ)_{m+1}   ... z^(ℓ)_n      z₀^ℓ
    -------------------------------------------------------
            0 ... 0    z^(ℓ+1)_{m+1} ... z^(ℓ+1)_n    z₀^(ℓ+1)    Composite function

    (3-3-1)

After obtaining a basic solution which is nondominated, we consider only nonbasic vectors leading to noncomparable adjacent extreme points as eligible pivot columns. At some basic solution, no objective function will be at its maximum and only noncomparable adjacent basic solutions will be reachable. Then a subroutine must be employed to establish nondominance.

The subroutine may be summarized as follows (see also Figure 3.3.1.):

(1) Check the (ℓ+1)st row of the simplex tableau and find the largest negative coefficient. If there is none -- PRINT, since x⁰ = (y₁⁰, ..., y_m⁰) is nondominated.

(2) In the column chosen at (1), say the jth, we may find two cases:

(i) All coefficients of the criterial part of the simplex tableau (framed in Tableau (3-3-1)) are nonpositive. If the corresponding θ_j > 0, then x⁰ ∈ V.

(ii) There is at least one positive coefficient in the column j of the criterial part of the simplex tableau. Perform a simplex iteration with the largest positive coefficient as the pivot element. Go back to (1).

Remark 3.3.1. The situation described in (2i) can occur, however, only at later stages of the subroutine, since we have assumed that only z_j noncomparable with 0 or z_j ≥ 0, j ∈ J, is true for nonbasic columns. If θ_j = 0 in (2i), we perform the iteration in view of Remark 3.1.10.

Blocks of the diagram:

1. Construct the simplex tableau for x̄ ∈ X.
2. Is there any j ∈ J such that z_j^(ℓ+1) < 0? If NO -- PRINT.
3. Are all z_j^(i) ≤ 0, i = 1,...,ℓ? If YES: is θ_j > 0? If so, x̄ ∈ V.
4. Consider the "criterial" part of the simplex tableau together with the rows having θ_j = 0 for j ∈ J.
5. Choose z_j^(i) > 0 as the pivot element and perform the simplex iteration.
6. Is there any y_rj > 0? If YES, choose y_rj > 0 as the pivot element.

Figure 3.3.1.

Comments on Nondominance Subroutine -- Figure 3.3.1.

(1) First, we check the composite, i.e. the (ℓ+1)st criterial row. If there is no negative element, then Max v = 0 and x̄ ∈ N. (See Theorem 3.1.6.)

(2) If there is at least one element in (1) which is negative, say z_j^(ℓ+1), and if all z_j^(i) ≤ 0 (notice that z_j ≠ 0 since z_j^(ℓ+1) < 0), then go to (3). If there is at least one z_j^(i) > 0, go to (5).

(3) If θ_j > 0, obviously by introducing the jth column we get Max v > 0 and x̄ ∈ V. If θ_j = 0, go to (4).

(4) Form the problem by using only the "criterial" parts of the simplex tableau, as framed in (3-3-2). If θ_j > 0, we go directly to (5), since it will stay positive for the iteration. If θ_j = 0 for j ∈ J, we add the corresponding row(s) to the problem to keep track of any change in θ_j. (See Remark 3.1.10.)

(5) Perform the simplex iteration on z_j^(i) > 0 from (2) and go back to (2).

(6) If we added the rows giving θ_j = 0 in (4), then we may explore them after each step for any y_rj > 0. If there is none, then the actual θ_j > 0, since we assume a bounded solution exists. If there is y_rj > 0, then θ_j = 0 and we perform the next iteration around y_rj.

The framed part of Tableau (3-3-2) describes the subroutine situation. Notice that only the framed part of the tableau is considered in the case of a nondegenerate solution.

          x₁ ... x_m    x_{m+1} ... x_n              Y₁ ... Y_ℓ
    x₁     1 ...  0     y₁(m+1)  ... y₁n              0 ...  0     y₁⁰
    .      .            .                             .            .
    x_m    0 ...  1     y_m(m+1) ... y_mn             0 ...  0     y_m⁰
    ------------------------------------------------------------------
    Y₁     0 ...  0     z^(1)_{m+1}   ... z^(1)_n     1 ...  0     0        (3-3-2)
    .      .            .                             .            .
    Y_ℓ    0 ...  0     z^(ℓ)_{m+1}   ... z^(ℓ)_n     0 ...  1     0
           0 ...  0     z^(ℓ+1)_{m+1} ... z^(ℓ+1)_n   0 ...  0     0
    ------------------------------------------------------------------

Now we are ready to construct a basic block diagram describing some of the main features of the algorithm. This is just a logical scheme, which had to be expanded in much more detail for the actual program construction. See Figure 3.3.2.



Blocks of the diagram:

1. Find the feasible solution xⁱ ∈ X; if there is none, stop.
2. Is z_j^(i) ≥ 0 for at least one i and all j ∈ J? Is xⁱ unique? If so, PRINT xⁱ.
3. Is there any j ∈ J such that z_j ≤ 0? Introduce the jth column to get x^(i+1).
4. Check the nondominance by using the LP subroutine. Does the introduction of the jth column lead to an unexplored basis?
5. Is there any other j ∈ J such that θ_j z_j ≤ θ_k z_k for all k ∈ J?
6. Is there any j ∈ J such that z_j is noncomparable with 0? Is xⁱ ∈ N?
8. Store all those jth columns which would lead to an unexplored basis.

Figure 3.3.2.

Comments on Multicriteria Simplex Method -- Figure 3.3.2.

(1) For some basic solution xⁱ ∈ X, we first check whether any of the objective functions is maximized at this point. This assures that xⁱ or some alternative solution will be nondominated.

(2) If xⁱ uniquely maximizes at least one objective function, then xⁱ ∈ N and we print xⁱ.

(3) We use Theorem 3.1.2.(a) here. If z_j ≤ 0 exists for at least one j ∈ J, then xⁱ ∈ V if θ_j > 0. If the corresponding jth column leads to an unexplored basis, we make the transformation and go back to (1). Otherwise, go to (8).

(4) Use the nondominance subroutine as described in Figure 3.4.1.

(5) We look for a column which would give us a solution dominating all other points reachable from xⁱ. If there is one yet unexplored, we make the transformation. Otherwise, go to (6).

(6) Here we look for columns which would lead us to solutions "noncomparable" with xⁱ. If there are none, go to (8). If there are some, but xⁱ ∈ V, go to (8).

(8) Select and store those jth columns (and their bases) which would lead to an unexplored solution. These are bases which might potentially be nondominated.

(9) Whenever the storage (8) is empty, we stop. (Compare with W_{I+1} in Figure 3.2.2.)

To demonstrate the procedure, we shall analyze a larger example step by step.

    V-Max   c¹·x = x₁ + 2x₂ - x₃ + 3x₄ + 2x₅ + x₇
            c²·x = x₂ + x₃ + 2x₄ + 3x₅ + x₆
            c³·x = x₁ + x₃ - x₄ - x₆ - x₇

    subject to
            x₁ + 2x₂ + x₃ + x₄ + 2x₅ + x₆ + 2x₇ ≤ 16
            -2x₁ - x₂ + x₄ + 2x₅ + x₇ ≤ 16
            -x₁ + x₃ + 2x₅ - 2x₇ ≤ 16
            x₂ + 2x₃ - x₄ + x₅ - 2x₆ - x₇ ≤ 16
            xᵢ ≥ 0,  i = 1, 2, ..., 7.

The composite function, denoted Σ, is calculated as:

    Σ = 2x₁ + 3x₂ + x₃ + 4x₄ + 5x₅.
Now we may construct the initial simplex tableau (the pivot element is in parentheses):

            x₁   x₂   x₃   x₄   x₅   x₆   x₇   y₁  y₂  y₃  y₄
    y₁       1    2    1    1   (2)   1    2    1   0   0   0   16
    y₂      -2   -1    0    1    2    0    1    0   1   0   0   16
    y₃      -1    0    1    0    2    0   -2    0   0   1   0   16
    y₄       0    1    2   -1    1   -2   -1    0   0   0   1   16
            -1   -2    1   -3   -2    0   -1    0   0   0   0    0
             0   -1   -1   -2   -3   -1    0    0   0   0   0    0
            -1    0   -1    1    0    1    1    0   0   0   0    0
    -------------------------------------------------------------------
    Σ       -2   -3   -1   -4   -5    0    0    0   0   0   0    0

    x¹ = (0, 0, 0, 0, 0, 0, 0)        (3-3-3)

We can see that x^1 ∈ V by looking at the fifth column and recalling Theorem 3.1.2.

         x1   x2   x3   x4   x5   x6   x7   y1   y2   y3   y4 |   b
 x5      1/2   1   1/2  1/2   1   1/2   1   1/2   0    0    0 |   8
 y2      -3   -3   -1    0    0   -1   -1   -1    1    0    0 |   0
 y3      -2   -2    0   -1    0   -1   -4   -1    0    1    0 |   0
 y4     -1/2   0   3/2 -3/2   0  -5/2  -2  -1/2   0    0    1 |   8
 c^1      0    0    2   -2    0    1    1    1    0    0    0 |  16
 c^2     3/2   2   1/2 -1/2   0   1/2   3   3/2   0    0    0 |  24
 c^3     -1    0   -1    1    0    1    1    0    0    0    0 |   0
 L       1/2   2   3/2 -3/2   0   5/2   5   5/2   0    0    0 |  40

                                                              (3-3-4)

No objective is at its maximum at x^2 = (0,0,0,0,8,0,0). Use the subroutine to establish nondominance:

    x1    x2    x3    x4    x6    x7    y1
     0     0     2    -2     1     1     1     1     0     0  |  0
    3/2    2    1/2  -1/2   1/2    3    3/2    0     1     0  |  0
    -1     0    -1    (1)    1     1     0     0     0     1  |  0
    1/2    2    3/2  -3/2   5/2    5    5/2    0     0     0  |  0

    -2     0     0     0     3     3     1     1     0     2  |  0
    (1)    2     0     0     1    7/2   3/2    0     1    1/2 |  0
    -1     0    -1     1     1     1     0     0     0     1  |  0
    -1     2     0     0     4   13/2   5/2    0     0    3/2 |  0

     0     4     0     0     5    10     4     1     2     3  |  0
     1     2     0     0     1    7/2   3/2    0     1    1/2 |  0   PRINT
     0     2    -1     1     2    9/2   3/2    0     1    3/2 |  0
     0     4     0     0     5    10     4     0     1     2  |  0   ==> Max v = 0

For the next step, replace x5 by x4:

         x1   x2   x3   x4   x5   x6   x7   y1   y2   y3   y4 |   b
 x4      (1)   2    1    1    2    1    2    1    0    0    0 |  16
 y2      -3   -3   -1    0    0   -1   -1   -1    1    0    0 |   0
 y3      -1    0    1    0    2    0   -2    0    0    1    0 |  16
 y4       1    3    3    0    3   -1    1    1    0    0    1 |  32
 c^1      2    4    4    0    4    3    5    3    0    0    0 |  48
 c^2      2    3    1    0    1    1    4    2    0    0    0 |  32
 c^3     -2   -2   -2    0   -2    0   -1   -1    0    0    0 | -16
 L        2    5    3    0    3    4    8    4    0    0    0 |  64

                                                              (3-3-5)

This point x^3 = (0,0,0,16,0,0,0) is nondominated because, for example, the first and the second objective functions are at their (unique) maximum here. PRINT x^3. (Similarly, the auxiliary objective function is at its unique maximum.)

Next introduce the first column. (The other choice might be the third column.)

         x1   x2   x3   x4   x5   x6   x7   y1   y2   y3   y4 |   b
 x1       1    2    1    1    2    1    2    1    0    0    0 |  16
 y2       0    3    2    3    6    2    5    2    1    0    0 |  48
 y3       0    2    2    1    4    1    0    1    0    1    0 |  32
 y4       0    1   (2)  -1    1   -2   -1    0    0    0    1 |  16
 c^1      0    0    2   -2    0    1    1    1    0    0    0 |  16
 c^2      0   -1   -1   -2   -3   -1    0    0    0    0    0 |   0
 c^3      0    2    0    2    2    2    3    1    0    0    0 |  16
 L        0    1    1   -2   -1    2    4    2    0    0    0 |  32

                                                              (3-3-6)

Again, PRINT x^4 = (16,0,0,0,0,0,0) since the third objective function is at its maximum here. The alternate solution resulting from introducing the third column gives us the following vector of values of the objective functions:

           alt.   x^4
 c^1.x      0      16
 c^2.x      8       0
 c^3.x     16      16
 L         24      32

Introduce the third column next:

         x1   x2   x3   x4   x5   x6   x7   y1   y2   y3   y4 |   b
 x1       1   3/2   0  (3/2) 3/2   2   5/2   1    0    0  -1/2 |  8
 y2       0    2    0    4    5    4    6    2    1    0   -1 |  32
 y3       0    1    0    2    3    3    1    1    0    1   -1 |  16
 x3       0   1/2   1  -1/2  1/2  -1  -1/2   0    0    0   1/2 |  8
 c^1      0   -1    0   -1   -1    3    2    1    0    0   -1 |   0
 c^2      0  -1/2   0  -5/2 -5/2  -2  -1/2   0    0    0   1/2 |  8
 c^3      0    2    0    2    2    2    3    1    0    0    0 |  16
 L        0   1/2   0  -3/2 -3/2   3   9/2   2    0    0  -1/2 | 24

                                                              (3-3-7)

PRINT x^5 = (8,0,8,0,0,0,0) since the third objective function is at its maximum and the two alternate solutions x^4 and x^5 are noncomparable.

Introduce the fourth column:

         x1   x2   x3   x4   x5   x6   x7   y1   y2   y3   y4 |    b
 x4      2/3   1    0    1   (1)  4/3  5/3  2/3   0    0  -1/3 | 16/3
 y2     -8/3  -2    0    0    1  -4/3 -2/3 -2/3   1    0   1/3 | 32/3
 y3     -4/3  -1    0    0    1   1/3 -7/3 -1/3   0    1  -1/3 | 16/3
 x3      1/3   1    1    0    1  -1/3  1/3  1/3   0    0   1/3 | 32/3
 c^1     2/3   0    0    0    0  13/3 11/3  5/3   0    0  -4/3 | 16/3
 c^2     5/3   2    0    0    0   4/3 11/3  5/3   0    0  -1/3 | 64/3
 c^3    -4/3   0    0    0    0  -2/3 -1/3 -1/3   0    0   2/3 | 16/3
 L        1    2    0    0    0    5    7    3    0    0   -1 |  32

                                                              (3-3-8)

Here x^6 = (0,0,32/3,16/3,0,0,0) and since no objective function is at its maximum, we have to use the subroutine for nondominance:

    x1    x2    x6    x7    y1    y4
   2/3     0   13/3  11/3   5/3  -4/3    1     0     0  |  0
   5/3     2    4/3  11/3   5/3  -1/3    0     1     0  |  0
  -4/3     0   -2/3  -1/3  -1/3  (2/3)   0     0     1  |  0
    1      2     5     7     3    -1     0     0     0  |  0

   -2      0     3     3     1     0     1     0     2  |  0
   (1)     2     1    7/2   3/2    0     0     1    1/2 |  0
   -2      0    -1   -1/2  -1/2    1     0     0    3/2 |  0
   -1      2     4   13/2   5/2    0     0     0    3/2 |  0

    0      4     5    10     4     0     1     2     3  |  0
    1      2     1    7/2   3/2    0     0     1    1/2 |  0
    0      4     1   13/2   5/2    1     0     2    5/2 |  0
    0      4     5    10     4     0     0     1     2  |  0   ==> Max v = 0

PRINT x^6. Next introduce the fifth column in the simplex tableau for x^6 -- it will give us the same values for all objective functions, i.e., we can PRINT x^7 without further analysis (x^6 and x^7, though different, have identical images in the value space). We get:

         x1   x2   x3   x4   x5   x6   x7   y1   y2   y3   y4 |    b
 x5      2/3   1    0    1    1  (4/3) 5/3  2/3   0    0  -1/3 | 16/3
 y2    -10/3  -3    0   -1    0  -8/3 -7/3 -4/3   1    0   2/3 | 16/3
 y3      -2   -2    0   -1    0   -1   -4   -1    0    1    0 |   0
 x3     -1/3   0    1   -1    0  -5/3 -4/3 -1/3   0    0   2/3 | 16/3
 c^1     2/3   0    0    0    0  13/3 11/3  5/3   0    0  -4/3 | 16/3
 c^2     5/3   2    0    0    0   4/3 11/3  5/3   0    0  -1/3 | 64/3
 c^3    -4/3   0    0    0    0  -2/3 -1/3 -1/3   0    0   2/3 | 16/3
 L        1    2    0    0    0    5    7    3    0    0   -1 |  32

                                                              (3-3-9)

PRINT x^7 = (0,0,16/3,0,16/3,0,0).

We may introduce the sixth or the seventh column. Introducing the sixth column, we get the following:

         x1    x2   x3    x4    x5   x6    x7   y1   y2   y3   y4 |   b
 x6      1/2   3/4   0    3/4   3/4   1  (5/4)  1/2   0    0  -1/4 |  4
 y2      -2    -1    0     1     2    0     1    0    1    0    0 |  16
 y3     -3/2  -5/4   0   -1/4   3/4   0 -11/4 -1/2    0    1  -1/4 |  4
 x3      1/2   5/4   1    1/4   5/4   0   3/4   1/2   0    0   1/4 | 12
 c^1    -3/2 -13/4   0  -13/4 -13/4   0  -7/4  -1/2   0    0  -1/4 | -12
 c^2      1     1    0    -1    -1    0     2    1    0    0    0 |  16
 c^3     -1    1/2   0    1/2   1/2   0   1/2    0    0    0   1/2 |  8
 L      -3/2  -7/4   0  -15/4 -15/4   0   3/4   1/2   0    0   1/4 | 12

                                                              (3-3-10)

So, x^8 = (0,0,12,0,0,4,0). To check the nondominance of x^8, we must use the subroutine:

    x1     x2     x4     x5     x7     y1     y4
  -3/2  -13/4  -13/4  -13/4   -7/4   -1/2   -1/4    1     0     0  |  0
    1      1     -1     -1      2      1      0     0     1     0  |  0
   -1     1/2   (1/2)   1/2    1/2     0     1/2    0     0     1  |  0
  -3/2   -7/4  -15/4  -15/4    3/4    1/2    1/4    0     0     0  |  0

   -8      0      0      0     3/2   -1/2     3     1     0   13/2 |  0
   -1      2      0      0      3      1      1     0     1     2  |  0
   -2      1      1      1      1      0      1     0     0     2  |  0
   -9      2      0      0     9/2    1/2     4     0     0   15/2 |  0

Because of the first column and δ1 > 0, x^8 ∈ V. Let us now introduce the seventh column:


         x1    x2   x3    x4    x5    x6   x7   y1   y2   y3   y4 |    b
 x7      2/5   3/5   0    3/5   3/5   4/5   1   2/5   0    0  -1/5 | 16/5
 y2    -12/5  -8/5   0    2/5   7/5  -4/5   0  -2/5   1    0   1/5 | 64/5
 y3     -2/5   2/5   0    7/5  12/5  11/5   0   3/5   0    1  -4/5 | 64/5
 x3      1/5   4/5   1   -1/5   4/5  -3/5   0   1/5   0    0   2/5 | 48/5
 c^1    -4/5 -11/5   0  -11/5 -11/5   7/5   0   1/5   0    0  -3/5 | -32/5
 c^2     1/5  -1/5   0  -11/5 -11/5  -8/5   0   1/5   0    0   2/5 | 48/5
 c^3    -6/5   1/5   0    1/5   1/5  -2/5   0  -1/5   0    0   3/5 | 32/5
 L      -9/5 -11/5   0  -21/5 -21/5  -3/5   0   1/5   0    0   2/5 | 48/5

                                                              (3-3-11)

So, x^9 = (0,0,48/5,0,0,0,16/5). To check the nondominance, use the subroutine:

    x1     x2     x4     x5     x6     y1     y4
  -4/5  -11/5  -11/5  -11/5    7/5    1/5   -3/5    1     0     0  |  0
   1/5   -1/5  -11/5  -11/5   -8/5    1/5    2/5    0     1     0  |  0
  -6/5    1/5   (1/5)   1/5   -2/5   -1/5    3/5    0     0     1  |  0
  -9/5  -11/5  -21/5  -21/5   -3/5    1/5    2/5    0     0     0  |  0

  -14      0      0      0     -3     -2      6     1     0    11  |  0
  -13      2      0      0     -6     -2      7     0     1    11  |  0
   -6      1      1      1     -2     -1      3     0     0     5  |  0
  -27      2      0      0     -9     -4     13     0     0    21  |  0

We see that x^9 is obviously dominated because of the first column and δ1 > 0.

To summarize the results, we have explored the following noncomparable extreme points and established their dominance (V) or nondominance (N):

          x^2   x^3   x^4   x^5   x^6   x^7   x^8   x^9
 c^1.x     16    48    16     0   5.33  5.33  -12   -6.4
 c^2.x     24    32     0     8  21.33 21.33   16    9.6
 c^3.x      0   -16    16    16   5.33  5.33    8    6.4
            N     N     N     N     N     N     V     V

                                                              (3-3-12)

Notice that though all the extreme points are noncomparable with each other, x^8 and x^9 are dominated. We can see that, for example, x^8 is dominated by (1/2)x^2 + (1/2)x^5, i.e., this point attains the objective values (8, 16, 8) ≧ (-12, 16, 8). Also, x^9 is dominated by the same point, i.e., (8, 16, 8) ≧ (-6.4, 9.6, 6.4).
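The claimed domination is easy to verify numerically (objective coefficients as in the problem statement of this section):

```python
def objectives(x):
    c = [[1, 2, -1, 3, 2, 0, 1],
         [0, 1, 1, 2, 3, 1, 0],
         [1, 0, 1, -1, 0, -1, -1]]
    return [sum(ci * xi for ci, xi in zip(row, x)) for row in c]

def dominates(u, v):
    # u dominates v: no worse in every objective, strictly better in one
    return all(a >= b for a, b in zip(u, v)) and \
           any(a > b for a, b in zip(u, v))

x2 = (0, 0, 0, 0, 8, 0, 0)
x5 = (8, 0, 8, 0, 0, 0, 0)
y = [(a + b) / 2 for a, b in zip(x2, x5)]       # (1/2)x^2 + (1/2)x^5

print(objectives(y))                             # [8.0, 16.0, 8.0]
print(dominates(objectives(y), [-12, 16, 8]))    # True  (x^8)
print(dominates(objectives(y), [-6.4, 9.6, 6.4]))  # True  (x^9)
```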

3.4. Computer Analysis.

The block diagram in Figure 3.3.2. can be expanded into a more detailed form, suitable for computer analysis. Most computations were performed on an IBM 7040, and thus the computing times should be taken as relative values only.

The program has been coded in Fortran, and the entire code is presented in Appendix A3 together with all subroutines. The program is currently limited to eight constraints, forty variables and eight objective functions. This can, however, be easily expanded; a problem with 20 variables, 12 constraints and 5 objectives has also been computed. No claim is made with regard to the professional efficiency of the code, since the author is not a computer programmer by profession.

Some difficulties, additional to those of the traditional simplex method, are introduced by the fact that many different bases must be explored and many of their adjacent bases stored. The problem of how to traverse all eligible bases efficiently, i.e. without repetition, has been resolved.
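One standard way to guarantee such repetition-free traversal (a sketch only; the Fortran code of Appendix A3 is not reproduced here) is to key each basis by the set of its basic-variable indices, so that the same basis reached along two different pivot paths is recognized at once:

```python
explored = set()

def mark_explored(basis):
    """Return True if basis is new; record it so it is never redone."""
    key = frozenset(basis)        # order of basic variables is irrelevant
    if key in explored:
        return False
    explored.add(key)
    return True

print(mark_explored([8, 9, 10, 11]))  # True  (e.g. the slack basis y1..y4)
print(mark_explored([5, 9, 10, 11]))  # True  (x5 replaces y1)
print(mark_explored([9, 5, 11, 10]))  # False (same basis, reordered)
```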
One other problem comes from the fact that the problem is usually sufficiently lengthy and complex that round-off errors may accumulate to the point where they obscure the actual result. Especially when using the inflexible definition of N-points, as we do, the round-off errors may lead us to declare a point nondominated when it actually is dominated, and vice versa. This problem is hard to resolve, though double-precision arithmetic might be used to improve the accuracy. Careful analysis of the print-outs is, however, always necessary to eliminate the round-off errors.

Three example problems have been constructed to check the speed, the efficiency of the search, the subroutines, the storage and other aspects of the program.

(1) The first example computed is the problem of Section 3.3, whose initial simplex tableau is given in (3-3-3). This is by no means a trivial example, since degeneracies as well as alternate solutions appear. We can compare the computer results with the hand calculations as they are summarized in (3-3-12). From the printouts on pages 197-199 of Appendix A2 we get:

          x^2   x^4   x^5   x^6   x^3   x^7
 c^1.x     16    16     0   5.33  (48)  5.33
 c^2.x     24     0   7.99 21.33  (32) 21.33
 c^3.x      0   (16)  (16)  5.33  -16   5.33
 L         40    32    24    32    64    32

(Values in parentheses mark the individual maxima.) These results correspond exactly to those obtained from the hand analysis. Total time has been 2.881 minutes.

(2) The second problem is constructed especially to demonstrate the speed and the efficiency of the program. The problem contains eight constraints, eight variables and three objective functions.

V-Max   2x1 + 5x2 +  x3 - x4 + 6x5 + 5x6 + 3x7 - 2x8
        5x1 - 2x2 + 5x3      + 6x5 + 7x6 + 2x7 + 6x8
         x1 +  x2 +  x3 + x4 +  x5 +  x6 +  x7 +  x8

subject to:

   x1 +  3x2 -  4x3 +  x4 -  x5 +  x6 +  x7 +  x8 ≦  40
  5x1 +  2x2 +  4x3 -  x4 + 3x5 + 7x6 + 2x7 + 7x8 ≦  84
         4x2 -   x3 -  x4 - 3x5             +  x8 ≦  18
 -3x1 -  4x2 +  8x3 + 2x4 + 3x5 - 4x6 + 8x7 -  x8 ≦ 100
 12x1 +  8x2 -   x3 + 4x4       +  x6 +  x7       ≦  40
   x1 +   x2 +   x3 +  x4 +  x5 +  x6 +  x7 +  x8 ≧  12
  8x1 - 12x2 -  3x3 + 4x4 -  x5                   ≦  30
 -8x1 -  6x2 + 12x3 +  x4             -  x7 +  x8 ≦ 100
   xi ≧ 0,  i = 1,...,8.

The problem is intentionally complicated. For example, the third objective appears also as the sixth constraint, with ≧. Also notice that, for example, the third constraint equals the second constraint minus the second objective function. From the printouts on page 201 of Appendix A2:

           x^1        x^2         x^3
 c^1.x    173.33    (176.83)    170.55319
 c^2.x    178.66     176.33    (179.06)
 c^3.x     35.11      38.611    (39.35)
 L        387.11     391.77     388.96453

Though the problem may have up to 12870 extreme points, the actual number of Nex is only three for this example. The program handled the situation quite effectively, as is reflected in the total time of 0.814 minutes. The next problem has also been done on an IBM 360/91.

(3) This problem is specifically constructed to contain a very large number of nondominated bases. It contains eight constraints, eight variables and five objective functions. Some dependencies, to introduce degeneracies and alternate solutions, are also present.

V-Max

subject to:

   x1 +  3x2 -  4x3 +  x4 -  x5 +  x6 + 2x7 + 4x8 ≦  40
  5x1 +  2x2 +  4x3 -  x4 - 3x5 + 7x6 + 2x7 + 7x8 ≦  84
         4x2 -   x3 -  x4 - 3x5             +  x8 ≦  19
 -5x1 -  4x2 +  8x3 + 2x4 + 3x5 - 4x6 + 8x7 -  x8 ≦ 100
 12x1 +  8x2 -   x3 + 4x4       +  x6 +  x7       ≦  40
   x1 +   x2 +   x3 +  x4 +  x5 +  x6 +  x7 +  x8 ≧  12
  8x1 - 12x2 -  3x3 + 4x4 -  x5                   ≦  30
 -8x1 -  6x2 + 12x3 +  x4             -  x7 +  x8 ≦ 100
   xi ≧ 0,  i = 1,...,8.

In this problem, the round-off errors accumulated enough to obscure some of the results. However, after some analysis of the printouts for Problem (3), we may form the following table of results:

        c^1.x     c^2.x     c^3.x     c^4.x     c^5.x
  1     115.93    -28.75     87.18     -3.18     26.13
  2     116.08    -29.07     87.00     -3.00     25.55
  3      64.39     16.74     81.13      2.87     17.65
  4      37.20     49.33     86.54     -2.54     22.96
  5     110.84    -22.72     88.12     -4.12     27.12
  6      82.72     28.52    111.24    -27.24     28.99
  7    (117.25)   -27.75     89.50     -5.50     27.00
  8     -17.73    106.18     88.45     -4.45     26.64
  9     -37.52    111.59     74.07      9.92     22.63
 10     -29.00    106.55     77.55      6.45     29.44
 11     -12.09    102.56     90.46     -6.47     31.66
 12     -19.09    125.21    106.13    -22.13     33.43
 13     -37.72    135.90     98.17    -14.17     31.78
 14     -36.53    159.20    122.66    -38.66     33.24
 15     -35.00    173.00    138.00    -54.00     29.66
 16      -2.78     86.78     84.00      0.00     12.70
 17     -36.37    105.85     69.48     14.52     14.59
 18       8.51    170.55   (179.06)   -95.06    (39.35)
 19      10.00    168.94    178.94    -94.94     37.49
 20      -0.50   (176.83)   176.33    -92.33     38.61
 21       5.33    173.33    178.66    -94.66     35.11
 22      24.00    150.00    174.00    -90.00     39.00
 23      85.84     38.35    124.18    -40.18     33.56
 24      95.40     -1.38     94.01    -10.01     31.08
 25      31.75     51.42     83.17      0.83     26.22
 26       7.05     63.63     70.68     13.31     28.11
 27      77.84     12.10     89.94     -5.94     31.30
 28      35.74     82.51    118.25    -34.25     34.60
 29       9.31     93.48    102.80    -18.80     29.03
 30      86.73    -11.89     74.84      9.16     13.08
 31      66.34     -0.34     66.00     18.00     15.00
 32      66.56     -0.56     66.00     18.00     19.84
 33      72.00     -6.00     66.00     18.00     14.62
 34      30.40     35.60     66.00     18.00     14.00
 35      33.12     32.88     66.00     18.00     14.35
 36      15.89     50.11     66.00     18.00     16.30
 37     -17.73     83.73     66.00     18.00     14.80
 38      29.34     36.65     66.00     18.00     21.37
 39      30.43     35.57     66.00     18.00     21.36
 40      30.62     35.38     66.00     18.00     21.42
 41      29.72     36.28     66.00     18.00     21.50
 42      56.30     -4.70     51.61     18.00     12.00
 43      48.85     -0.77     49.63     18.00     12.00
 44      40.80     12.20     53.00     18.00     12.00
 45      47.13      2.71     49.84     18.00     12.00
 46      72.00      2.67     74.67      9.33     12.00
 47      95.27    -21.27     74.00     10.00     12.55
 48      19.55     46.46     66.00     18.00     24.44
 49      -9.06     75.06     66.00     18.00     17.88
 50     117.19    -24.95     92.24     -8.24     28.09
 51     -13.34     99.19     85.85     -1.85     28.22
 52     -19.18    118.22     99.00    -15.04     29.75
 53      49.25     35.93     85.18     -1.18     21.20
 54      84.66     -8.00     76.60      7.33     13.30
 55     -18.00    102.00     84.00      0.00     24.00
 56      86.80     15.60    102.40    -18.40     24.80
 57      91.60     -7.50     84.00      0.00     15.30
 58      15.88     55.14     71.03     12.96     25.57
 59      36.34    101.60    138.00    -54.00     34.34
 60      27.40     73.40    100.80    -16.80     31.13
 61      23.13     69.95     93.00     -9.07     22.24
 62      31.75    100.21    131.90    -47.96     23.12
 63      16.00    140.00    156.00    -72.00     26.66
 64      49.28     77.60    126.80    -42.80     22.50
 65     -20.72    121.58    100.86    -16.86     31.80
 66      47.80     46.12     93.94     -9.94     17.36
 67      -0.17     71.74     71.57     12.42     21.00
 68     -12.00     96.00     84.00      0.00     12.00
 69      81.37     -6.40     75.00      9.00     12.94
 70      26.00     66.23     92.33     -8.33     15.57

It is seen that we have identified 70 different Nex-points. Individual maxima of all objectives are encircled (shown here in parentheses) to simplify the review of data.

LINEAR MULTIOBJECTIVE PROGRAMMING III.

4. A Method for Generating All Nondominated Solutions of X

We have described some techniques for calculating all nondominated extreme points of X, Nex. The problem which remains to be solved is concerned with generating the complete set N from Nex.

Though this might seem to be superfluous work, we may well imagine situations where a non-extreme nondominated solution might be preferred to any extreme nondominated solution.

4.1. Some basic theorems on properties of N

Lemma 4.1.1. Let x^1, x^2 ∈ X. Then

(a) If x^1, x^2 ∈ V, then [x^1,x^2] ⊂ V.
(b) If x^1 ∈ N, x^2 ∈ V, then (x^1,x^2] ⊂ V.
(c) If x ∈ (x^1,x^2) and x ∈ N, then [x^1,x^2] ⊂ N.
(d) If x ∈ (x^1,x^2) and x ∈ V, then (x^1,x^2) ⊂ V.

Proof. (a) Since x^1, x^2 ∈ V, there exist some x̄^1, x̄^2 ∈ X such that cx̄^1 ≧ cx^1 and cx̄^2 ≧ cx^2. Let x = λx^1 + (1-λ)x^2. Notice λx̄^1 + (1-λ)x̄^2 ∈ X. Then

λcx̄^1 + (1-λ)cx̄^2 ≧ λcx^1 + (1-λ)cx^2 = cx.

(b) For 0 ≦ λ < 1 let x = x^2 + λ(x^1 - x^2), i.e., x ∈ (x^1,x^2]. Notice that x̄ = x̄^2 + λ(x^1 - x̄^2) ∈ X, and

cx̄ = cx̄^2 + λcx^1 - λcx̄^2 = (1-λ)cx̄^2 + λcx^1 ≧ (1-λ)cx^2 + λcx^1 = cx.

(c) Notice x^1, x^2 ∈ V implies x ∈ V, i.e., a contradiction. If x^1 ∈ N, x^2 ∈ V (or x^2 ∈ N, x^1 ∈ V), this implies x ∈ V, and we have a contradiction again. So x ∈ N must imply x^1, x^2 ∈ N. In order to see that [x^1,x^2] ⊂ N, suppose there is some x̄ ∈ (x^1,x^2) such that x̄ ∈ V. Then x ∈ (x^1,x̄) or x ∈ (x̄,x^2). According to part (b) of this lemma, (x^1,x̄] ⊂ V and [x̄,x^2) ⊂ V, which imply x ∈ V.

(d) The proof is similar to that for part (c), since this is a partial converse of (c). Q.E.D.


Let us state a generalization of Lemma 4.1.1. We shall denote by C[x^1,...,x^k] the convex hull of the set of points {x^1,...,x^k}. Also recall from our notational agreement in Section 1.4 that the relative interior of X is denoted X^I, as opposed to Int X.

Theorem 4.1.2. Let H = C[x^1,...,x^k] be the convex hull of the set of points {x^1,...,x^k}. Then

(i) If there is x ∈ H^I such that x ∈ N, then H ⊂ N;
(ii) If there is x ∈ H^I such that x ∈ V, then H^I ⊂ V.

Proof. (i) If at least one x^i ∈ V, x^i ∈ {x^1,...,x^k}, then there is some x̄ ∈ H such that x ∈ (x̄,x^i). Then x^i ∈ V implies x ∈ V. Assume that there is some x̄ ∈ H such that x̄ ∈ V. Then there exists at least one point x̃ ∈ H such that x ∈ (x̄,x̃). It follows from Lemma 4.1.1.(a) and (b) that x ∈ V, a contradiction.

(ii) If x ∈ H^I and x ∈ N, then H ⊂ N. This implies H^I ⊂ N and x ∈ N, which contradicts the original assumption x ∈ V. Q.E.D.

Let the feasible set X be defined:

X = {x | x ∈ E^n; A_r x ≦ b_r},  r = 1,...,m+n,                    (4-1-1)

where A_r is the rth row of the matrix A. Notice that this definition of X differs from that in (2-2-3). Also, the nonnegativity constraints are incorporated in the general inequality constraints.

Let R = {1,...,m+n}. For x ∈ X define:

R(x) = {r | r ∈ R; A_r x = b_r}.                                   (4-1-2)

Notice if x ∈ ∂X then R(x) ≠ ∅.

Let Nex = {x^i}, i ∈ K = {1,...,k}. Since any such extreme point of X is also a boundary point of X, we have R(x^i) ≠ ∅, i ∈ K. Each x^i ∈ Nex divides the constraints A_r x ≦ b_r, r ∈ R, into two subsets:

(a) active constraints, satisfying A_r x^i = b_r,
(b) inactive constraints, satisfying A_r x^i < b_r.

The above defined set R(x^i) identifies the active constraints. Let R̄(x^i) be the complement of R(x^i); then R(x^i) ∪ R̄(x^i) = R, i ∈ K.

For a particular r ∈ R, define

I(r) = {i | i ∈ K; A_r x^i = b_r}                                  (4-1-3)

as the set of indices i ∈ K such that the corresponding x^i ∈ Nex make the rth constraint active, i.e., A_r x^i = b_r for all i ∈ I(r). Let Ī(r) = K - I(r). Given r ∈ R, define

H_r = {x | x ∈ E^n; A_r x = b_r}                                   (4-1-4)

as an (n-1)-dimensional hyperplane in E^n.

Let P_r = C[x^i | i ∈ I(r)], i.e., P_r is the convex hull of all points x^i, i ∈ K, belonging to H_r. Note, P_r ⊂ H_r.

As a corollary to Theorem 4.1.2. we have

Corollary 4.1.3. The following statements hold:

(i) if there is x ∈ P_r^I such that x ∈ N, then P_r ⊂ N;
(ii) if there is x ∈ P_r^I such that x ∈ V, then P_r^I ⊂ V.

Remark 4.1.4. Two extreme points of a closed polyhedron are called adjacent if the line segment connecting them is an edge of the polyhedron. If by performing a single simplex transformation we can reach the other point from one of the two points, then both points are the endpoints of an edge of X.

Given a family of subsets of R,

I_t = {I_t(r) | I_t(r) ⊂ R},                                       (4-1-5)

where r is the index of subsets and each I_t(r) has exactly t elements, t = 1,...,m+n, we can define

X_t(r) = {x | x ∈ E^n; A_j x = b_j, j ∈ I_t(r)}                    (4-1-6)

and

X*_t(r) = {x^i | i ∈ K; x^i ∈ X_t(r)}.                             (4-1-7)

Note X_t(r) is the rth hyperplane defined by t linear equalities, and X*_t(r) is the set of Nex-points contained in X_t(r).

Recalling definition (2-1-5) we may write

G[X_t(r)] = {μ_r^(t) A_r^(t) | μ_r^(t) > 0},                       (4-1-8)

where A_r^(t) denotes the submatrix of rows A_j, j ∈ I_t(r).

Remark 4.1.5. Notice that G[X_t(r)] is independent of x in X_t(r). That is, it depends only on t and r. Thus we will denote it by G_r^(t). Also, notice μ_r^(t) is a t-dimensional vector, independent of r, so we may use μ^(t). We can see X*_t(r) ⊂ X_t(r).

According to Theorem 2.1.5. and Corollary 4.1.3. we can prove the following:

Theorem 4.1.6.

(a) If H^> ∩ G_r^(t) ≠ ∅, then C[X*_t(r)] ⊂ N.
(b) If H^≧ ∩ G_r^(t) = ∅, then C[X*_t(r)]^I ∩ N = ∅.

Proof. Recall the definition of G(x) in (2-1-5).

(a) Observe that for each x ∈ X_t(r) ∩ X, G_r^(t) ⊂ G(x) by definition. By assumption, H^> ∩ G(x) ≠ ∅ for all x ∈ X_t(r). In view of Theorems 2.1.1. and 2.1.5. the assertion of (a) clearly follows.

(b) Observe that G(x) = G_r^(t) for each x ∈ C[X*_t(r)]^I. The conclusion of (b) follows immediately from Theorems 2.1.1. and 2.1.5. Q.E.D.

Corollary 4.1.7.

4.2. An algorithm for generating N from known Nex

Our goal is to generate all nondominated faces of X, given Nex. In the following discussion we shall use t to indicate the number of active constraints, i ∈ K the index of Nex-points, and r the index of subsets of R having exactly t elements (recall the definition of I_t in (4-1-5)). We shall recursively define a sequence of matrices N^(t):

N^(t) = [n_ir^(t)],   t = 1,2,...,m+n;  r = 1,2,...,K_t,           (4-2-1)

which associates Nex-points with the K_t subsets of the family I_t having exactly t active constraints.

Let, for example, t = 1, i.e., all constraints are considered individually. Then K_1 = m+n and N^(1) = [n_ir^(1)] may be written as follows:

             1          2       ...      m         m+1      ...     m+n
 x^1     n_11^(1)   n_12^(1)    ...   n_1m^(1)  n_1(m+1)^(1) ... n_1(m+n)^(1)
 x^2     n_21^(1)   n_22^(1)    ...   n_2m^(1)  n_2(m+1)^(1) ... n_2(m+n)^(1)
 ...
 x^k     n_k1^(1)   n_k2^(1)    ...   n_km^(1)  n_k(m+1)^(1) ... n_k(m+n)^(1)

          n_1^(1)    n_2^(1)    ...    n_m^(1)   n_(m+1)^(1) ...  n_(m+n)^(1)

where

 n_ir^(1) = 1 if x^i makes the rth constraint of I_1 active,
 n_ir^(1) = 0 otherwise,                                           (4-2-2)

and

 n_r^(1) = Σ (i=1 to k) n_ir^(1),   r = 1,2,...,K_1.

The number n_r^(1) indicates the number of points x^i, i ∈ K, which make the rth constraint active. Notice that 0 ≦ n_r^(1) ≦ k, r = 1,...,K_1.

Observe: if the rth column of N^(1) contains only one or less non-zero elements, then n_r^(1) = 0 or n_r^(1) = 1. If n_r^(1) = 0, then the hyperplane X_1(r) (see (4-1-6)) contains no Nex-point, i.e., the rth constraint is not active for any x^i ∈ Nex. Since any N-point is a convex combination of Nex-points, all x ∈ X_1(r) must be dominated. Therefore, the rth column may be deleted from N^(1) and from further consideration.

Similarly, if n_r^(1) = 1, then X_1(r) contains exactly one Nex-point. Then all other points x ∈ X_1(r) will be dominated, and the rth column may be deleted from N^(1). (See Theorem 4.1.2.)

In general, a matrix N^(t) may be written as:

             1          2       ...     K_t
 x^1     n_11^(t)   n_12^(t)    ...  n_1K_t^(t)
 x^2     n_21^(t)   n_22^(t)    ...  n_2K_t^(t)
 ...
 x^k     n_k1^(t)   n_k2^(t)    ...  n_kK_t^(t)

          n_1^(t)    n_2^(t)    ...   n_K_t^(t)

where

 n_ir^(t) = 1 if x^i makes the rth subset of t constraints in I_t active,
 n_ir^(t) = 0 otherwise,                                           (4-2-3)

and

 n_r^(t) = Σ (i=1 to k) n_ir^(t),   r = 1,2,...,K_t.

Notice that n_r^(t) indicates the number of points x^i, i ∈ K, which make the rth t-tuple of constraints active.

Remark 4.2.1.

(a) Suppose the rth column of N^(t) contains no non-zero element (n_r^(t) = 0); then X_t(r) contains no Nex-point (see (4-1-7)). If n_r^(t) = 1, then X_t(r) (and X*_t(r)) contains exactly one Nex-point. Then C[X*_t(r)] ⊂ N by assumption. In both cases we may delete the rth column from N^(t).

(b) Notice X_t(r) is an (n-t)-dimensional subset of X (assuming nonredundant constraints only). X_t(r) contains a nondominated subset C[X*_t(r)] only if there are at least n-t Nex-points in X_t(r); then the rth column cannot be deleted. We may delete the rth column from N^(t) only if X_t(r) contains less than n-t Nex-points. If C[X*_t(r)] ⊂ N, then all of its boundary, which is the intersection of X_t(r) with other hyperplanes, is also nondominated. We may then delete the rth column from N^(t). To check whether C[X*_t(r)] ⊂ N, use Theorem 4.1.6. For simplicity, let us denote Nex = X*.


Remark 4.2.2. Each column of N^(t) corresponds to a face of dimension (n-t) of the polyhedron X ⊂ E^n.

To restate our problem: we want to find all subsets of X* such that their convex combinations belong to N. Let us define a subset of X* as follows (see (4-1-7)):

X*_t(r) = {x^i | x^i ∈ X*; x^i ∈ X_t(r)}.                          (4-2-4)

The set X* of k elements is to be decomposed into subsets X*_t(r) such that C[X*_t(r)]^I ⊂ N. (Recall that C[X*_t(r)] is the set of all convex combinations of the points x^i ∈ X*_t(r).)

Let us start with the matrix N^(1). All columns must contain at least two 1's, since we have deleted those with n_r^(1) = 0 and n_r^(1) = 1. The number of such columns is K_1, i.e., r = 1,2,...,K_1. We shall investigate all columns successively in the order of decreasing n_r^(1), i.e., n_r^(1) ≧ n_(r+1)^(1). Choose the column with max n_r^(1) first. In the case of a tie, let the choice be arbitrary.

The investigation of columns now proceeds naturally from 1 to K_1.

Step 1. Let n_1^(1) = max over r of n_r^(1), r = 1,...,K_1. Two cases may be distinguished:

(A) C[X*_1(1)]^I ⊂ N;
(B) C[X*_1(1)]^I ⊄ N.

Case (A). In this case X*_1(1) is a part of the resulting decomposition (solution to our problem). All possible convex combinations of the x^i ∈ X*_1(1) belong to N, i.e., C[X*_1(1)] ⊂ N. Now we may delete the first column from N^(1). Similarly, we delete all columns r such that X*_1(r) ⊂ X*_1(1), r = 2,3,...,K_1. This follows from Corollary 4.1.7. That is, if the column N_r^(1) of N^(1) is such that X*_1(r) ⊂ X*_1(1), then N_r^(1) will be deleted.

Case (B). In this case X*_1(1) is not part of the solution, since C[X*_1(1)]^I ⊄ N and not all possible combinations of the x^i ∈ X*_1(1) belong to N. The corresponding N_1^(1) may not be deleted from N^(1).

We repeat this step for all r = 1,2,...,K_1, i.e., until all X*_1(r) are investigated.


Let us define a reduced matrix N̄^(t) of N^(t):

Definition 4.2.3. The reduced matrix N̄^(t) is the matrix N^(t) in which all columns r with C[X*_t(r)]^I ⊂ N have been eliminated. The number of columns K_t of N^(t) is thereby reduced to K̄_t in N̄^(t). Notice that K̄_t ≦ K_t.

We see now that Step 1 leads to the reduction of N^(1) to N̄^(1). This reduced N̄^(1), with K̄_1 columns corresponding to the X*_1(r), r = 1,...,K̄_1, such that C[X*_1(r)]^I ⊄ N, will be the starting point for the next step.

Step 2. From N̄^(1) with K̄_1 columns let us construct N^(2) with K_2 columns in the following way:

(a) Let N̄_z^(1) and N̄_w^(1) be two different columns of N̄^(1). A column N_r^(2) of N^(2) is derived as follows: if I_2(r) = I_1(z) ∪ I_1(w), we define

N_r^(2) = N̄_z^(1) ⊙ N̄_w^(1),                                      (4-2-5)

where ⊙ indicates the elementwise multiplication of the vectors N̄_z^(1) and N̄_w^(1). More precisely,

n_ir^(2) = n_iz^(1) · n_iw^(1),   i = 1,...,k.

Remark 4.2.4. Now n_ir^(2) = 1 implies that x^i makes both the zth and the wth constraints active. Notice that r is the index for the set {z,w}. Also, I_2(r) contains exactly one more element than I_1(z). For a generalization of (4-2-5) see Step 3.
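In code, (4-2-5) is one line: the elementwise product of two 0/1 incidence columns marks exactly those points which are active in both constraint sets. The column values below are hypothetical:

```python
def combine(col_z, col_w):
    """Elementwise product of two 0/1 incidence columns, as in (4-2-5)."""
    return [a * b for a, b in zip(col_z, col_w)]

# hypothetical columns over points x^1,...,x^4
n_z = [1, 1, 0, 1]   # points active at constraint z
n_w = [1, 0, 0, 1]   # points active at constraint w
print(combine(n_z, n_w))  # [1, 0, 0, 1] -> x^1 and x^4 are active at both
```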


(b) So we generate N^(2) with columns N_r^(2), where r = 1,2,...,K_2. The K_2 columns of N^(2) may again be reordered so that n_1^(2) ≧ n_2^(2) ≧ ... ≧ n_K2^(2). The reduction procedure (Step 1) is now applied sequentially for r = 1,2,...,K_2, and all X*_2(r) are investigated. We delete the columns with C[X*_2(r)]^I ⊂ N and those with X*_2(k) ⊂ X*_2(r), k ≠ r, from N^(2) (see Corollary 4.1.7.). The result will be a reduced matrix N̄^(2) with K̄_2 columns.
Step 3. In general, N̄^(t) is transformed into N^(t+1) using the following rule:

N_r^(t+1) = N̄_z^(t) ⊙ N̄_w^(1),   r = 1,...,K_(t+1),               (4-2-6)

where N̄_z^(t) and N̄_w^(1) are columns of N̄^(t) and of N̄^(1) respectively, and I_(t+1)(r) = I_t(z) ∪ I_1(w).

The K_(t+1) columns of N^(t+1) are reordered in such a way that n_1^(t+1) ≧ n_2^(t+1) ≧ ... ≧ n_K(t+1)^(t+1). The reduction procedure is then applied as in the previous steps, sequentially for r = 1,2,...,K_(t+1). All X*_(t+1)(r) are then investigated. Columns with C[X*_(t+1)(r)]^I ⊂ N and those with X*_(t+1)(k) ⊂ X*_(t+1)(r), k ≠ r, r = 1,...,K_(t+1), are deleted. The result is N̄^(t+1) with K̄_(t+1) columns.

The whole routine goes through the sequence of transformations

N^(1) → N̄^(1) → N^(2) → N̄^(2) → ... ,

i.e., for t = 1,2,...,m+n. The routine ends whenever some N^(t) becomes a vacuous matrix. For a summary see Figure 4.2.1.

Remark 4.2.5. As a matter of fact, the routine always ends before reaching N^(n), because all n_r^(n) < 2, r = 1,2,...,K_n.

Remark 4.2.6. The advantage of investigating the columns of N^(t) in the order of decreasing n_r^(t) is the simple identification of all subsets of X*_t(r) in the case that C[X*_t(r)]^I ⊂ N.

In the next section we will demonstrate the described procedure on simple examples.

Figure 4.2.1. (flowchart of the matrix reduction procedure)

Comments on matrix reduction - Figure 4.2.1.

1. Construct the matrix N^(t) as defined in (4-2-3); initially N^(1), as in (4-2-2).

2. Calculate n_r^(t) for all columns of N^(t), r = 1,...,K_t. Delete all those with n_r^(t) ≦ n-t, in view of Remark 4.2.1.

3. If N^(t) is vacuous, stop. (Also consider Remark 4.2.5.) Otherwise go to 4.

4. Construct all X*_t(r) and go to 7.

5. All columns for which C[X*_t(r)] ⊄ N are retained. All such columns form the reduced matrix N̄^(t).

6. Calculate N^(t+1) from N̄^(t) using the prescription (4-2-6). Go back to 1.

7. Check whether C[X*_t(r)] ⊂ N by using Theorem 4.1.6. If YES, we can print all x^i ∈ X*_t(r), since all their convex combinations are nondominated.

8. Delete all columns corresponding to X*_t(r) and its subsets, in view of Corollary 4.1.7. Go to 3.

4.3. Numerical examples

For simplicity we shall demonstrate the matrix reduction procedure and the nondominance subroutine separately.

4.3.1. An example of matrix reduction

We shall consider a simple 3-dimensional case, mainly because it is instructive to have a graphical representation for reference. Consider the 3-dimensional convex polyhedron in Figure 4.3.1. with the set of nondominated extreme points {x^1, x^2, x^3, x^4, x^5, x^6}. Let us assume that only the shaded combinations (faces) are nondominated. Our goal is to identify them by using the matrix reduction method. We do not assume any exact form of the objective functions; we just want to show how the matrix reduction method identifies the shaded areas.

The polyhedron in Figure 4.3.1. is defined by five linear constraints plus the nonnegativity conditions, which are, however, redundant in this particular example. They define five faces of the highest dimension (i.e., two), which are denoted as:

1. C[x^1,x^2,x^3,x^4]
2. C[x^1,x^4,x^5,x^6]
3. C[x^1,x^2,x^6]
4. C[x^2,x^3,x^5,x^6]
5. C[x^3,x^4,x^5]

Let X* = {x^1,x^2,x^3,x^4,x^5,x^6} ⊂ N.
First we shall construct the matrix N^(1), r = 1,2,3,4,5, i.e., K_1 = 5. (The columns corresponding to the nonnegativity conditions on x1, x2, x3 may be omitted because of their redundancy.)

  r          1    2    3    4    5
 I_1(r)     {1}  {2}  {3}  {4}  {5}

 x^1         1    1    1
 x^2         1         1    1
 x^3         1              1    1
 x^4         1    1              1
 x^5              1         1    1
 x^6              1    1    1

 n_r^(1)     4    4    3    4    3

Notice that all n_r^(1) ≧ 2 and max n_r^(1) = 4. We can recognize that the subsets X*_1(r) are as follows:

X*_1(1) = {x^1,x^2,x^3,x^4}
X*_1(2) = {x^1,x^4,x^5,x^6}
X*_1(3) = {x^1,x^2,x^6}
X*_1(4) = {x^2,x^3,x^5,x^6}
X*_1(5) = {x^3,x^4,x^5}

Using the nondominance subroutine, we would now check the C[X*_1(r)]^I for nondominance. From Figure 4.3.1. we see that C[X*_1(5)]^I ⊂ N, hence C[X*_1(5)] ⊂ N. Then C[x^3,x^4,x^5] ⊂ N is a part of the resulting decomposition, and the fifth column may be deleted. No further columns may be deleted. Thus the reduced matrix N̄^(1) has been obtained, with the number of columns K̄_1 = 4.

The next step is to construct the matrix N^(2) = [n_ir^(2)], r = 1,...,K_2. The rth column of N^(2) is obtained by (4-2-5):

N_r^(2) = N̄_z^(1) ⊙ N̄_w^(1).

Repeating this for all possible pairs z and w, we obtain all columns of N^(2). We shall consider only those columns r with n_r^(2) ≧ 2 and get N^(2):

  r           1      2      3      4      5      6
 I_2(r)     {1,2}  {1,3}  {1,4}  {2,3}  {2,4}  {3,4}

 x^1          1      1             1
 x^2                 1      1                    1
 x^3                        1
 x^4          1
 x^5                                      1
 x^6                               1      1      1

 n_r^(2)      2      2      2      2      2      2

Notice r = 1,2,...,6, i.e., K_2 = 6. Now we can identify the subsets:

X*_2(1) = {x^1,x^4},  X*_2(2) = {x^1,x^2},  X*_2(3) = {x^2,x^3},
X*_2(4) = {x^1,x^6},  X*_2(5) = {x^5,x^6},  X*_2(6) = {x^2,x^6}.

Checking the C[X*_2(r)]^I for nondominance, we can see from Figure 4.3.1. that only C[X*_2(1)], C[X*_2(3)] and C[X*_2(5)] are part of N, so the first, third and fifth columns may be deleted from N^(2). Then C[x^1,x^4], C[x^2,x^3], and C[x^5,x^6] are part of the resulting decomposition.

We have obtained the reduced matrix N̄^(2) with the number of columns K̄_2 = 3. To demonstrate that the algorithm ends before reaching N^(3), because all n_r^(3) = 1, r = 1,...,K_3, we shall perform the next step:

From N(2) we construct N(3).

Remark 4.3.1. Notice also that at each step we may check the rows of N(t) and delete those which have all zero elements; e.g., we may delete the rows for x3, x4 and x5 in N(2) of our example.

To get the new columns of N(3) we multiply columns N_z(2) ∘ N_w(1). Notice that only three such products are sufficient, since all the remaining combinations are identical. We obtain:

renumbered
columns r      1        2        3
I_r(3)      {1,2,3}  {1,3,4}  {2,3,4}

x1             1                        X3*(1) = {x1}
x2                      1               X3*(2) = {x2}
x6                               1      X3*(3) = {x6}

Σ              1        1        1

But x1, x2, and x6 are nondominated by definition, so the entire step is unnecessary.

The final decomposition may be sketched as follows:

                           {x3,x4,x5}
X* = {x1,x2,x3,x4,x5,x6}   {x1,x4}
                           {x2,x3}
                           {x5,x6}

meaning that C[x3,x4,x5], C[x1,x4], C[x2,x3] and C[x5,x6] ⊂ N.
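The column-generation step just illustrated (multiply, i.e. intersect, pairs of columns and keep only those with n_r ≥ 2) can be sketched in Python. The dictionary-of-sets representation and the function name are our own illustrative choices; the index sets are those of the reduced N(1) of the example above.

```python
from itertools import combinations

# Reduced N(1) columns of the example: condition label -> set of point indices.
N1 = {
    (1,): {1, 2, 3, 4},   # X1*(1) = {x1,x2,x3,x4}
    (2,): {1, 4, 5, 6},   # X1*(2)
    (3,): {1, 2, 6},      # X1*(3)
    (4,): {2, 3, 5, 6},   # X1*(4)
}

def next_level(cols):
    """Multiply (intersect) all pairs of columns; keep columns with n_r >= 2."""
    out = {}
    for (Iz, Xz), (Iw, Xw) in combinations(cols.items(), 2):
        label = tuple(sorted(set(Iz) | set(Iw)))  # merged condition label
        X = Xz & Xw                               # componentwise 0/1 product
        if len(X) >= 2:
            out[label] = X
    return out

N2 = next_level(N1)
for label, pts in sorted(N2.items()):
    print(label, sorted(pts))
```

Running this reproduces the six columns of N(2) above, with K2 = 6.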

4.3.2. An example of the nondominance subroutine.


Consider the following problem:

V-Max   c1·x =  4x1 +  x2 + 2x3
        c2·x =   x1 + 3x2 -  x3
        c3·x =  -x1 +  x2 + 4x3

subject to

x1 +  x2 + x3 ≤ 3      1.
2x1 + 2x2 + x3 ≤ 4     2.
x1 -  x2      ≤ 0      3.
x1            ≥ 0      4.
x2            ≥ 0      5.
x3            ≥ 0      6.

Solving this problem we get the following set of nondominated extreme points:

                          c1·x   c2·x   c3·x
x1 = (0,0,3,0,1,0)         6     -3     12
x2 = (0,1,2,0,0,1)         5      1      9
x3 = (1/2,1/2,2,0,0,0)     6.5    0      8
x4 = (0,2,0,1,0,2)         2      6      2
x5 = (1,1,0,1,0,0)         5      4      0

We would like to decompose X* = {x1,x2,x3,x4,x5} into all subsets whose convex combinations are nondominated. First construct N(1) = [n_r(1)].

renumbered
columns r     1    2    3    4    5    6
I_r(1)       {1}  {2}  {3}  {4}  {5}  {6}

x1            1         1    1    1
x2            1    1         1
x3            1    1    1
x4                 1         1         1
x5                 1    1              1

Σ             3    4    3    3    1    2

Excluding the fifth column, all n_r(1) ≥ 2, r = 1, ..., 6. First establish H_> and H_≥:

H_> = {(λ1+4λ2-λ3, 3λ1+λ2+λ3, -λ1+2λ2+4λ3) | (λ1,λ2,λ3) > 0}
H_≥ = {(λ1+4λ2-λ3, 3λ1+λ2+λ3, -λ1+2λ2+4λ3) | (λ1,λ2,λ3) ≥ 0}.

Since Max_r n_r(1) = n_2(1) = 4, start with G_2(1) = {μ(2,2,1) | μ > 0}. We may set up the following system:

2μ =  λ1 + 4λ2 -  λ3
2μ = 3λ1 +  λ2 +  λ3
 μ = -λ1 + 2λ2 + 4λ3,     λ1, λ2, λ3 > 0,  μ ≥ 0.

This may be rewritten as a system of homogeneous equations and investigated for the existence of a nontrivial solution:

-2λ1 + 3λ2 - 2λ3 = 0        3λ1       - 9λ3 = 0        λ1 = 3λ3
 3λ1       - 9λ3 = 0   →          3λ2 - 8λ3 = 0   →    3λ2 - 8λ3 = 0
 5λ1 - 3λ2 - 7λ3 = 0             -3λ2 + 8λ3 = 0       -3λ2 + 8λ3 = 0

The last system has a nontrivial solution, e.g., λ1 = 9, λ2 = 8, λ3 = 3. Therefore the second column may be deleted, i.e., C[x2,x3,x4,x5] ⊂ N, as well as the sixth column, since X1*(6) ⊂ X1*(2), i.e., C[x4,x5] ⊂ N.
Next construct G_1(1) = {μ(1,1,1) | μ > 0}. We obtain the following system:

μ =  λ1 + 4λ2 -  λ3         -2λ1 + 3λ2 - 2λ3 = 0
μ = 3λ1 +  λ2 +  λ3    →     4λ1 -  λ2 - 3λ3 = 0
μ = -λ1 + 2λ2 + 4λ3           λ1 +  λ2 - (5/2)λ3 = 0

which reduces to

 5λ2 - 7λ3 = 0
-5λ2 + 7λ3 = 0.

So the last system has a nontrivial solution, e.g., λ1 = 5.5, λ2 = 7, λ3 = 5. Therefore C[x1,x2,x3] ⊂ N, and we may delete the first column from N(1).
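The two hand-solved systems above are easy to verify numerically. The sketch below reads the objective coefficients off the definition of H_> and checks that the claimed (λ, μ) pairs satisfy μ·w = Σ_k λ_k·c_k componentwise; the helper names are assumptions for illustration, not part of the algorithm.

```python
# Objective coefficient rows, read off the components of H_> in the example:
# component j of the combination is  lambda1*C[0][j] + lambda2*C[1][j] + lambda3*C[2][j].
C = [
    (1, 3, -1),
    (4, 1, 2),
    (-1, 1, 4),
]

def combo(lmbda):
    """Componentwise sum_k lambda_k * c_k."""
    return tuple(sum(l * c[j] for l, c in zip(lmbda, C)) for j in range(3))

def solves(w, lmbda, mu):
    """Does (lambda, mu) satisfy mu*w = sum_k lambda_k c_k with lambda > 0, mu >= 0?"""
    return all(l > 0 for l in lmbda) and mu >= 0 and \
        all(abs(mu * wj - gj) < 1e-9 for wj, gj in zip(w, combo(lmbda)))

# G_2(1): gradient of condition 2 is (2,2,1); claimed lambda = (9,8,3), mu = 19.
print(solves((2, 2, 1), (9, 8, 3), 19.0))
# G_1(1): gradient of condition 1 is (1,1,1); claimed lambda = (5.5,7,5), mu = 28.5.
print(solves((1, 1, 1), (5.5, 7, 5), 28.5))
```

Both checks succeed, confirming the deletions of the second and first columns.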

Next construct G_3(1) = {μ(1,-1,0) | μ > 0}. We obtain the following system:

 μ =  λ1 + 4λ2 -  λ3
-μ = 3λ1 +  λ2 +  λ3    →    4λ1 + 5λ2 = 0
 0 = -λ1 + 2λ2 + 4λ3        -λ1 + 2λ2 + 4λ3 = 0

From the first equation of the last system we see that the solution does not exist for λ1 > 0, λ2 > 0. Therefore, the third column of N(1) may not be deleted.

Construct G_4(1) = {μ(-1,0,0) | μ > 0}. Then we have:

-μ =  λ1 + 4λ2 -  λ3
 0 = 3λ1 +  λ2 +  λ3
 0 = -λ1 + 2λ2 + 4λ3

where from the second equation it is seen that the system is not satisfied for λ1 > 0, λ2 > 0, λ3 > 0. The fourth column cannot be deleted. We end up with the reduced matrix N(1) with columns 3 and 4 left. Construct N(2):
renumbered
columns r     1
I_r(2)      {3,4}

x1            1
x2            0
x3            0
x4            0
x5            0

Σ             1

Since n_1(2) = 1, we would delete the first column and end up with a vacuous matrix. The point x1 is nondominated by itself. The algorithm ends.

The final partition:

                        {x2,x3,x4,x5}
X* = {x1,x2,x3,x4,x5}
                        {x1,x2,x3},

meaning that C[x2,x3,x4,x5] and C[x1,x2,x3] ⊂ N.

5. Additional Topics and Extensions.

5.1. Alternative Approach to Finding Nex.

In search of the most efficient computation of nondominated extreme points some other possible approaches may be suggested.

In the multicriteria simplex method described earlier, we were checking nondominance using a linear programming subroutine for each basic solution which was found to be noncomparable to some other nondominated extreme point. We can find the first nondominated solution by maximizing any one of our ℓ objectives and by discarding all dominated alternative solutions. Then we introduce only those columns which are noncomparable with the zero vector, i.e., z_j ≹ 0, j = m+1, ..., n, and among these only those which also satisfy z_k ≹ z_j, j ≠ k, for all j, k = m+1, ..., n. Recall z_j(1), z_j(2), ..., z_j(ℓ) from (3-1-1) and Θ_j = Min_r {· | y_rj > 0} from (2-3-23), respectively.

It is obvious that computing all points which are noncomparable to the first nondominated extreme point and also noncomparable with each other is much faster and more efficient than solving an LP subroutine at each iteration.
This set of all noncomparable extreme points is in general larger than Nex, but all points of Nex are included in it. The problem remains of how to screen out the points which are dominated.


Let us demonstrate the concept using a simple graphical example. Let us assume that Θ[X] = Y ⊂ E2, according to the general notation introduced at the beginning of Section 2, and that the situation is described in Figure 5.1.1.
Figure 5.1.1.
Observe in Figure 5.1.1. that y1, y2, y3 and y4 are all nondominated extreme points of Θ[X]. By the multicriteria simplex method we would start, say, at y1, then move to y6 and find it to be dominated. We would discard y6 and move from y1 to y2. Doing this we generate only y1, y2, y3 and y4.

The alternative approach concentrates first on finding all noncomparable points. So we would start at y1 and establish its nondominance.

Then we would find the adjacent noncomparable points, y6 and y2. From these we would again find the adjacent noncomparable points, i.e., y5 and y3. Because y5 is clearly dominated by y2 and y3 we may delete y5 at this stage. Adjacent noncomparable to y3 is y4. So we would end with the set {y1, y2, y3, y4, y6}, which contains Nex = {y1, y2, y3, y4}.

Notice that y6 cannot be discarded by direct comparison of extreme points since it is dominated only by some convex combination of y1 and y2. So the problem is how to discard the points of the type of y6, i.e., those which are noncomparable with the others but dominated by some convex combination of the others.

There are some properties of Θ[X] in the linear case which allow us to work with Θ[X], the image, instead of with X directly. For example, an extreme point of Θ[X] is the image of one or more extreme points of X. Also, if X has k extreme points, then Θ[X] can have at most k extreme points. For proofs see e.g. [Charnes, Cooper, 1961].

Let us assume that we have generated k extreme points {x1, x2, ..., xk} of X which have the property that they are all noncomparable with each other and also with some nondominated extreme point (and therefore with all nondominated extreme points). We can construct the following matrix:

        θ1(x)     θ2(x)    ...   θℓ(x)
x1      θ1(x1)    θ2(x1)   ...   θℓ(x1)
x2      θ1(x2)    θ2(x2)   ...   θℓ(x2)
...
xk      θ1(xk)    θ2(xk)   ...   θℓ(xk)
For example, let us assume the following numerical values:

       θ1(x)  θ2(x)  θ3(x)  θ4(x)
x1       6      3      4      8
x2       6      8      4      6
x3       8      2      6      4
x4       7      2      5      6
x5       6      7      5      6

It is obvious that x1, x2 and x3 are nondominated since each function reaches its unique maximum at one of these points. Points x4 and x5 are not dominated by any of these points. But they can still be dominated or nondominated. For example, x4 is dominated by a convex combination of x1 and x3 with λ = 1/2, i.e.,

x* = (1/2)x1 + (1/2)x3  ⟹  θ(x*) = (1/2)θ(x1) + (1/2)θ(x3)
                                  = (1/2)(6, 3, 4, 8) + (1/2)(8, 2, 6, 4) = (7, 2.5, 5, 6)

and (7, 2.5, 5, 6) ≥ (7, 2, 5, 6).

On the other hand, x5 is not dominated by any convex combination of these points. Therefore, x5 is nondominated.
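The convex-combination test above can be automated. The sketch below checks dominance by combinations of pairs of points only (each criterion restricts the mixing weight to an interval, and the intervals must intersect); the general case with more than two points is a linear program, as discussed later in this section. The function and point names are illustrative assumptions.

```python
from itertools import combinations

# Criteria values of the five noncomparable points from the table above.
theta = {
    "x1": (6, 3, 4, 8),
    "x2": (6, 8, 4, 6),
    "x3": (8, 2, 6, 4),
    "x4": (7, 2, 5, 6),
    "x5": (6, 7, 5, 6),
}

def dominated_by_pair(z, points):
    """Is theta[z] (weakly) dominated by l*theta[a] + (1-l)*theta[b] for some
    pair a, b in points and some l in [0, 1]?"""
    tz = theta[z]
    for a, b in combinations(points, 2):
        lo, hi = 0.0, 1.0
        feasible = True
        for ta_i, tb_i, tz_i in zip(theta[a], theta[b], tz):
            coef, rhs = ta_i - tb_i, tz_i - tb_i   # constraint: l*coef >= rhs
            if coef > 0:
                lo = max(lo, rhs / coef)
            elif coef < 0:
                hi = min(hi, rhs / coef)
            elif rhs > 0:                          # 0 >= rhs fails
                feasible = False
                break
        if feasible and lo <= hi + 1e-9:
            return True
    return False

print(dominated_by_pair("x4", ["x1", "x2", "x3"]))        # dominated, l = 1/2 of x1, x3
print(dominated_by_pair("x5", ["x1", "x2", "x3", "x4"]))  # not dominated by any pair
```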


The following are some concepts designed to resolve the above problem.

5.1.1. The concept of cutting hyperplane.

If we are dealing with just two objective functions, the resolution of the discussed problem is simple. We may actually resolve it graphically, since the image Θ[X] ⊂ E2. So we can plot the vectors (θ1(x), θ2(x)) for all noncomparable extreme points {x1, x2, ..., xk} and delete those which are dominated, by simple inspection.

If we want to avoid graphical plotting we may develop a so-called cutting hyperplane. Let θ1(x) = c1·x, θ2(x) = c2·x, where ci·x = c1i·x1 + ... + cni·xn, i = 1, 2. We can use the following lemma:

Lemma 5.1.1. Let (x1, y1) and (x2, y2) be two distinct points in E2. Then

y - c·x = y1 - c·x1

is the straight line connecting (x1, y1) and (x2, y2). Note,

c = (y2 - y1)/(x2 - x1).
See Figure 5.1.2.

Figure 5.1.2.
Since we consider only two objectives we may further simplify our notation. Let, for any x ∈ X, u = θ1(x), v = θ2(x). Let x1, x2 ∈ X be the unique (i.e. nondominated) maxima of θ1(x) and θ2(x), respectively. Denote

(u1, v1) = (θ1(x1), θ2(x1))  and  (u2, v2) = (θ1(x2), θ2(x2)),

as represented in Figure 5.1.3.

Figure 5.1.3.

Definition 5.1.2. The cutting hyperplane is defined as

L = {(u,v) | v - ((v2-v1)/(u2-u1))·u = v1 - ((v2-v1)/(u2-u1))·u1}.   (5-1-2)

Define

L≥ = {(u,v) | v - ((v2-v1)/(u2-u1))·u ≥ v1 - ((v2-v1)/(u2-u1))·u1},   (5-1-3)

and L< its complement. Define

Θ[X] = {(u,v) | u = θ1(x), v = θ2(x), x ∈ X}

and

S[X] = {(u,v) | u ≤ θ1(x), v ≤ θ2(x), x ∈ X}.   (5-1-4)
Theorem 5.1.3. Let x0 be a basic feasible solution of X. Then x0 ∈ Nex if and only if, for u0 = θ1(x0) and v0 = θ2(x0), (u0, v0) ∈ L≥.

Proof. The proof can be made simply by graphical analysis, looking at Figure 5.1.3.

a) Θ[X] ⊆ S[X].

b) Any (u,v) ∈ L< is dominated by some (u,v) ∈ L. Any (u,v) ∈ L< ∩ S[X] is dominated by some (u,v) ∈ [(u1,v1), (u2,v2)], i.e. the closed line segment connecting (u1,v1) and (u2,v2). Because of a) the necessary condition is proven.

c) Let x0 be an extreme point of X and let (u0, v0) ∈ L≥. To prove the sufficient condition we have to show that x0 ∈ Nex. Suppose x0 ∉ Nex, i.e. some x ∈ X exists such that (u,v) ≥ (u0, v0). Then of course (u0, v0) can be expressed as a convex combination of some (u,v) ≥ (u0, v0) and of some (u,v) ∈ L, which contradicts the assumption that x0 is an extreme point. Q.E.D.

Example 5.1.4. Suppose θ1(x) = 2x1 + 3x2, θ2(x) = 4x1 + x2, and let x1 = (4, 6) and x2 = (8, 3). Then u1 = θ1(x1) = 26, v1 = θ2(x1) = 22, u2 = θ1(x2) = 25, and v2 = θ2(x2) = 35.

Then L can be written as

v - ((35-22)/(25-26))·u = 22 - ((35-22)/(25-26))·26,

i.e.  v + 13u = 360.

Substituting v = θ2(x) and u = θ1(x) we can actually express the cutting hyperplane directly for X ⊂ En:

4x1 + x2 + 13(2x1 + 3x2) = 360,  i.e.  30x1 + 40x2 = 360.

Substituting x1 and x2 we see that the conditions of the definition of the cutting hyperplane are satisfied. See the graph in Figure 5.1.3.

Figure 5.1.3.
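The computation in Example 5.1.4. can be sketched as follows; the function names are our own, and the construction assumes, as in the text, that x1 and x2 are the unique maximizers of the two objectives.

```python
# Cutting hyperplane of Example 5.1.4, built from the two maximizing points
# in objective space (u = theta1, v = theta2).

def theta(x):
    x1, x2 = x
    return (2 * x1 + 3 * x2, 4 * x1 + x2)   # (theta1, theta2)

p1, p2 = (4, 6), (8, 3)                     # unique maximizers of theta1, theta2
(u1, v1), (u2, v2) = theta(p1), theta(p2)
c = (v2 - v1) / (u2 - u1)                   # slope of L in the (u, v) plane

def on_L(u, v, tol=1e-9):
    """Is (u, v) on the cutting hyperplane v - c*u = v1 - c*u1?"""
    return abs((v - c * u) - (v1 - c * u1)) < tol

def in_L_ge(u, v):
    """Is (u, v) in the closed half-plane L>= (the candidate nondominated side)?"""
    return (v - c * u) >= (v1 - c * u1) - 1e-9

print(u1, v1, u2, v2)                  # 26 22 25 35
print(on_L(u1, v1), on_L(u2, v2))      # both maximizers lie on L
# Expressed in x-space, L becomes 30*x1 + 40*x2 = 360:
for x in (p1, p2):
    print(30 * x[0] + 40 * x[1])       # 360 for both
```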

This looks like a very plausible concept to be generalized. Some effort has been expended toward this problem. The conclusions are that though the concept of the cutting hyperplane may be generalized to higher dimensions, there is no guarantee that such a structure always exists.

The above theory can be used safely only in two dimensions, i.e., for two objective functions. In higher dimensions the problems of uniqueness and degeneracy of the cutting hyperplane make its use questionable at this point. A couple of graphical examples should clarify the difficulties.

Example 5.1.5. In this example we have a case of a degenerate, lower dimensional image Θ[X]. Here Θ[X] is just a 2-dimensional polyhedron in three dimensional space.

       θ1(x)   θ2(x)   θ3(x)
x1     16.6    3.3      6.6      Respective maxima are: x2, x4, x3.
x2     40      0       10
x3     37      2.85    11.4
x4     10     10       10

Notice that x1 is dominated by (1/2)x3 + (1/2)x4 = (23.5, 6.425, 10.7). To construct the cutting hyperplane we use the graphical analysis in Figure 5.1.4. Actually only x2 and x4 are to be used to construct the cutting hyperplane. The cutting hyperplane is:

θ1(x) + 3θ2(x) = 40.

Substituting x3:

37 + 3·(2.85) = 45.55  ⟹  x3 ∈ Nex.
Figure 5.1.4.

Substituting x1:

16.6 + 3·(3.3) = 26.5  ⟹  x1 ∉ Nex.

Example 5.1.6. In this example we are facing a problem of multiple cutting hyperplanes. Let

       θ1(x)   θ2(x)   θ3(x)
x1     9       8       8        Maxima: x1, x2, x3.
x2     8       9       8        Minima: x4, x1, x2, x3.
x3     8       8       9        See Figure 5.1.5.
x4     0       8.8     8.8

For example, using x1, x2, x3 to construct the hyperplane, we get

θ1(x) + θ2(x) + θ3(x) = 25.

Substituting x4 we get

0 + 8.8 + 8.8 = 17.6,

which would indicate that x4 is dominated when it is actually nondominated. Using x3, x1, x2 and x4 we find that the cutting hyperplane does not exist. This is the case of multiple cutting hyperplanes, constructed with x1, x4, x2 and with x1, x4, x3:

4θ1(x) + 4θ2(x) + 41θ3(x) = 396
Figure 5.1.5.

and

4θ1(x) + 41θ2(x) + 4θ3(x) = 396.

A nondominated point must produce values larger than or equal to 396 on both hyperplanes. For example, the point (8.4, 8.5, 7) gives us the value 410.1 for the second cutting hyperplane. But since it gives us only 354.6 for the first one, it must be dominated; for instance, by (1/2)x1 + (1/2)x2 = (8.5, 8.5, 8).

5.1.2. Nondominance in lower dimensions.

Following is a short discussion of another interesting property of nondominated solutions. In connection with the two dimensional theory of the cutting hyperplane, it will help us to determine nondominance of many noncomparable extreme points.

Let θ(x) = (θ1(x), ..., θℓ(x)), where θi(x), θj(x) are any two objectives with i ≠ j, i, j = 1, ..., ℓ. From the definition of nondominance we know that x* ∈ X is nondominated if and only if θ(x) ≥ θ(x*) ⟹ θ(x) = θ(x*) for all x ∈ X.

Definition 5.1.7. x* ∈ X is nondominated in (i, j) if and only if (θi(x), θj(x)) ≥ (θi(x*), θj(x*)) ⟹ (θi(x), θj(x)) = (θi(x*), θj(x*)). In other words, there is no x ∈ X such that (θi(x), θj(x)) ≥ (θi(x*), θj(x*)) and (θi(x), θj(x)) ≠ (θi(x*), θj(x*)).

Definition 5.1.8. x* ∈ X is strictly nondominated in (i, j) if there is no point x ∈ X, x ≠ x*, such that (θi(x), θj(x)) ≥ (θi(x*), θj(x*)).

Theorem 5.1.9. x* ∈ X is nondominated if it is strictly nondominated in at least one pair (i, j), i, j = 1, ..., ℓ.

Proof. Let x* ∈ X be strictly nondominated in at least one (i, j), i, j = 1, ..., ℓ. Let there exist some x ∈ X such that θ(x) ≥ θ(x*). This implies that also θi(x) ≥ θi(x*) and θj(x) ≥ θj(x*). We have a contradiction. Q.E.D.

Remark 5.1.10. Theorem 5.1.9. is a generalization of one dimensional domination (i.e. if x* uniquely maximizes θi(x) then x* ∈ N). It could be generalized into k-pair strict nondominance, 1 ≤ k ≤ ℓ.

Though strict nondominance in (i, j) is a sufficient condition for nondominance, it is not a necessary condition. For example, consider:

       θ1(x)  θ2(x)  θ3(x)  θ4(x)
x1       2      2      2      2
x2       3      3      0      0
x3       0      0      3      3
x4       0      3      3      0
x5       0      3      0      3
x6       3      0      3      0
x7       3      0      0      3

Here x1 is strictly nondominated in no pair (i, j), but it is still a nondominated point.
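The table above can be checked mechanically. The sketch below tests strict nondominance of x1 in every pair (i, j) and confirms that no single other point dominates it (it does not test dominance by convex combinations); the helper names are illustrative assumptions.

```python
from itertools import combinations

# The 4-criteria example: x1 is strictly nondominated in no pair (i, j),
# yet no other point dominates it componentwise.
theta = {
    "x1": (2, 2, 2, 2),
    "x2": (3, 3, 0, 0),
    "x3": (0, 0, 3, 3),
    "x4": (0, 3, 3, 0),
    "x5": (0, 3, 0, 3),
    "x6": (3, 0, 3, 0),
    "x7": (3, 0, 0, 3),
}

def strictly_nondominated_in(star, i, j):
    """No other point x with (theta_i(x), theta_j(x)) >= that pair of x*."""
    ti, tj = theta[star][i], theta[star][j]
    return not any(v[i] >= ti and v[j] >= tj
                   for k, v in theta.items() if k != star)

def dominated(star):
    """Is x* dominated by some single other point (componentwise >=, not equal)?"""
    t = theta[star]
    return any(all(a >= b for a, b in zip(v, t)) and v != t
               for k, v in theta.items() if k != star)

pairs = list(combinations(range(4), 2))
print([strictly_nondominated_in("x1", i, j) for i, j in pairs])  # all False
print(dominated("x1"))                                           # False
```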

Remark 5.1.11. The previously discussed concepts may be used for the identification of Nex from some generally larger set of extreme points which are noncomparable with each other and also with some nondominated extreme points. If we have k such points, notice that if a λ exists such that

λ1θ1(xr) + λ2θ2(xr) + ... + λℓθℓ(xr) ≥ λ1θ1(xj) + λ2θ2(xj) + ... + λℓθℓ(xj),  j = 1, ..., k; j ≠ r,

then xr is a nondominated solution. In other words, we would check whether a feasible solution exists to the system:

Σ_{i=1..ℓ} λi θi(xr) ≥ Σ_{i=1..ℓ} λi θi(xj),  j = 1, ..., k; j ≠ r,
Σ_{i=1..ℓ} λi = 1,  λi > 0,  i = 1, ..., ℓ.

If the solution exists then xr would be a nondominated point.

An alternative procedure might be as follows: check whether a feasible solution exists to

Σ_{j=1..k} λj θi(xj) ≥ θi(xr),  i = 1, ..., ℓ,
Σ_{j=1..k} λj = 1,  λj ≥ 0,  j = 1, ..., k; j ≠ r.

If the solution to the above system exists, then xr is a dominated extreme point. Any of the above approaches may be used after other, heuristic ways of screening have been used and some of the points still stay undetermined with respect to nondominance.
To demonstrate, let us return to the example in Section 3.3. on the Multicriteria Simplex Method. Suppose that we did not use the nondominance subroutine and ended with all the noncomparable extreme points (circled entries in the original table, shown here as (max), are the column maxima):

        θ1(x)    θ2(x)    θ3(x)
x2      16       24        0
x3      (max)    (max)   -16
x4      16        0       (max)
x5       0        8       (max)
x6       5.33    21.33     5.33
x7       5.33    21.33     5.33
x8     -12       16        8
x9      -6.4      9.6      6.4

We do not know whether these points are dominated or nondominated. None of these points is dominated by some other; they are all noncomparable. First of all, the points containing the maximal element in any particular column, i.e., x3, x4, x5 ∈ Nex. Taking the sum of all pairs of columns, the point x6 produces the maximum for the last two columns, 21.33 + 5.33 = 26.66, and therefore x6, x7 ∈ Nex. Looking at the last two columns, notice that x2 is strictly nondominated in (2,3) and therefore x2 ∈ Nex. So we have established the nondominance of all points except x8 and x9. The dominance of these two points may be established by solving one of the above mentioned linear programming problems.

5.2. Some Notes on Nonlinearity.

The methods introduced in this study are applicable only when the assumption that the image Θ[X] of X is a convex set can be satisfied. Otherwise we are facing the so-called "gap problem", which will be clarified in the following simple graphical analysis.

Consider a simple two dimensional case with θ(x) = (θ1(x), θ2(x)), X ⊂ En, and assume that Θ[X] ⊂ E2 is not a convex set but exhibits a "gap" as indicated in the following Figure 5.2.1.

Figure 5.2.1.

The convex linear combination of θ1(x) and θ2(x) is denoted λ·θ(x). We can see that maximization of λ·θ(x) for all λ detects the part of N lying on the convex portion of the boundary of Θ[X], while the part of N inside the "gap" remains undetected.

We will show that we can locate the set N even without strong assumptions on θ and X. The "gap resolution" will be based on maximizing one objective function while treating all the others as constraints varying within a set of parameters.

First, we shall illustrate the concept graphically. Consider θ(x) = (θ1(x), θ2(x)) and let Θ[X] be nonconvex with a "gap" as in Figure 5.2.2.
Figure 5.2.2.

Fix θ1(x) = c1 and solve Max θ2(x) subject to θ1(x) ≥ c1, x ∈ X. The solution is denoted as y1. Next solve Max θ2(x) subject to θ1(x) ≥ c2, getting y1 again. Solving the same problem consecutively for c3, c4, c5 and c6 we get y2, y3, y4 and y5. Notice that by solving for all c ∈ [c1, c6] we may compute the whole set N while simultaneously resolving the gap.
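The sweep just described can be sketched on a finite set of points in objective space. The set Y below is a made-up nonconvex image, an assumption for illustration only: the point (3, 2.9) lies in the "gap" below the segment joining (1, 5) and (5, 1), so no weighted-sum maximization selects it, yet the constrained sweep recovers it.

```python
# "Gap resolution": maximize theta2 subject to theta1 >= c, sweeping c.
Y = [(1, 5), (2, 2), (3, 2.9), (5, 1)]   # hypothetical (theta1, theta2) images

def constrained_max(c):
    """Max theta2 over points with theta1 >= c (None if infeasible)."""
    feasible = [y for y in Y if y[0] >= c]
    return max(feasible, key=lambda y: y[1]) if feasible else None

sweep = {constrained_max(c) for c in (1, 1.5, 2, 2.5, 3, 3.5, 4, 4.5, 5)}
sweep.discard(None)

def nondominated(points):
    """Points not componentwise dominated by any other point."""
    return {y for y in points
            if not any(z != y and z[0] >= y[0] and z[1] >= y[1] for z in points)}

print(sorted(sweep))   # equals the nondominated set, gap point included
```

In general the constraint method can also return weakly dominated points, so a screening step as in Section 5.1 may still be needed.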


We will show that by this technique the whole set N may be obtained even under general nonlinearity conditions.

Let us define the general vector maximization problem (VMP) as follows: VMP is to find an x*, if it exists, such that

θ(x) ≥ θ(x*) ⟹ θ(x) = θ(x*),    x* ∈ X = {x | x ∈ X0, g(x) ≤ 0, h(x) = 0},

where X is the feasible set, x* is a nondominated solution, X0 ⊂ En, and θ(x*) is a vector maximum.

We will need the following notational agreement: If θ(x) = (θ1(x), ..., θi(x), ..., θℓ(x)) is an ℓ-dimensional vector function, then we can define θ(i)(x) = (θ1(x), ..., θi-1(x), θi+1(x), ..., θℓ(x)), which is derived from θ(x) by deleting its ith component θi(x); θ(i)(x): En → Eℓ-1.

Let us define the following scalar maximization problem (MP):

θi(x*) = Max_{x ∈ X(i)} θi(x),   x* ∈ X(i) = {x | x ∈ X0, θ(i)(x) ≥ c(i), x ∈ X},

where c(i) = (c1, ..., ci-1, ci+1, ..., cℓ) and c(i) ∈ C = {c(i) | c(i) ∈ Eℓ-1, -∞ < ci < ∞}. Using the Fritz John stationary point necessary optimality theorem [see Mangasarian, 1969] we can state the following necessary conditions for x* ∈ X to be nondominated.

Theorem 5.2.2. Let X0 be an open set in En. Let θ: En → Eℓ, g: En → Em, h: En → Ek, all defined on X0. Let x* be a solution to VMP:

θ(x) ≥ θ(x*) ⟹ θ(x) = θ(x*),    x, x* ∈ X = {x | x ∈ X0, g(x) ≤ 0, h(x) = 0}.

Let θ and g be differentiable at x*, and let h have continuous first partial derivatives at x*. Then there exist λi* ∈ E1, λ* ∈ Eℓ-1, r* ∈ Em, s* ∈ Ek and c ∈ Eℓ-1 such that the following conditions are satisfied:

λi*·∇θi(x*) + λ*·∇(θ(i)(x*) - c(i)) + r*·∇g(x*) + s*·∇h(x*) = 0
λ*·θ(i)(x*) - λ*·c(i) + r*·g(x*) = 0
(λi*, λ*, r*) ≥ 0
(λi*, λ*, r*, s*) ≠ 0.

Proof.
(i) Observe that if x* ∈ N, then x* is the solution of Max θi(x) s.t. x ∈ X(i) = {x ∈ X | θ(i)(x) ≥ θ(i)(x*)} for each i = 1, ..., ℓ. Otherwise x* ∉ N.
(ii) By replacing θ(i)(x*) by c(i) we get MP.
(iii) Applying Fritz John's theorem we get the desired results. Q.E.D.

Given c ∈ C, denote as MP(c) the problem of finding a point x* ∈ X such that θi(x*) ≥ θi(x) for all x ∈ X* and θ(i)(x) ≥ c; i.e., MP(c) is defined as

θi(x*) = Max_{x ∈ X*} θi(x),   x* ∈ X* = {x | x ∈ X0, θ(i)(x) ≥ c, x ∈ X}.

Let us define the following:

N = {x | x ∈ X, x solves VMP}
Ñ = {x | x ∈ X, x solves MP(c), c ∈ C}.

Then we can state the following:

Theorem 5.2.3. N ⊆ Ñ.

Proof. Suppose x* ∈ N and x* ∉ Ñ. Then there exists some x1 ∈ X* such that for any c, θi(x1) > θi(x*) and θ(i)(x1) ≥ c. We can, however, take

c1 = c(x*) = θ(i)(x*).

Then, of course, MP(c1) not having x* as a solution means θi(x1) > θi(x*) and θ(i)(x1) ≥ θ(i)(x*), which contradicts x* ∈ N. Q.E.D.

5.3. A Selection of the Final Solution.

We have suggested some algorithms which would allow us to compute all nondominated solutions of the set X of all feasible solutions. If the decision maker's utility (preference, trade-off) function is unknown or too complex to be reliably constructed, then the set of all nondominated solutions assures that such a function would reach its maximum somewhere in this set.

This knowledge of N might be quite helpful and sufficient to reach an acceptable decision if the actual number of nondominated solutions is small. For example, in Problem (2) of section 3.4. the nondominated extreme points were only three out of a possible 12870 extreme point solutions. On the other hand, the number of nondominated solutions might be too large, as illustrated in Problem (3) of section 3.4. In this case the final decision might still be hard to achieve.

Ultimately the decision maker must choose a single nondominated solution as the solution of a given problem. Many existing approaches might be helpful in achieving this. At the end of section 3.1. we discuss the relationship between the multicriteria simplex method and the decomposition of Λ-space. It is concluded that for each xi ∈ Nex the corresponding Λ(xi) is obtained as a by-product of this method. This Λ(xi) represents the set of optimal weights for xi. So, we have a very useful piece of additional information which associates each xi with the set of weights λ ∈ Λ(xi) such that λ·cx is maximized at xi.

Obviously, the decision maker might arrive at the same xi ∈ N
by estimating the proper set of weights λ ∈ Λ(xi). These weights measure the relative importance (or attention levels) of the individual objectives.

The set of all N-solutions corresponds to the complete decomposition of the parametric space Λ. Each objective is allowed to be weighted from 0 to 1 in all possible combinations. Observe that if there is complete uncertainty as to what the actual weights should be, i.e. all λ ∈ Λ are considered equally plausible, then the entire set N is the result. If the decision maker could limit the choice of weights, the set N could be correspondingly reduced.

Let us consider some subset of Λ, say Λ1, and assume that Λ1 ⊆ Λ, i.e. Λ1 is contained in Λ. By maximizing λ·θ(x) for all λ ∈ Λ1 we obtain a reduced set of nondominated solutions, say N1. Obviously N1 ⊆ N. Let Λ = Λ0 and N = N0. Then we can recursively define Λn and calculate the corresponding Nn for n = 0, 1, 2, ..., such that Λn+1 ⊆ Λn and Nn+1 ⊆ Nn.

We shall call any subset of N a set of compromise solutions. Any reduction of N which is not completely random or arbitrary reflects a particular utilization of the additional information the decision maker has provided. If the decision maker can express reliably his preference between, say, xj, xk ∈ N, then if xj is preferred to xk we can remove xk from further consideration and thus reduce N. The resulting set of compromise solutions is then further analyzed. Such ability to express strict preference decreases rapidly as the number of applicable criteria gets larger than one. More often the decision maker cannot compare, does not want to compare or does not know how to compare any two alternatives with multiple criteria consequences.

5.3.1. Direct Assessment of Weights.

In section 2 we have shown that if an appropriate set of weights λ could be assigned to θ(x) reliably, the corresponding nondominated solution(s) could be safely located. It is conceivable that even though the decision maker probably could not pinpoint λ exactly, he might be able to determine at least intervals for its components. By recursive and interactive reduction of Λ the necessary reduction of N can be achieved. In this section we discuss some points against such an approach.

I. Human ability to arrive at an overall evaluation by weighting and combining diverse attributes is not very impressive. Recent psychological studies [19] clearly indicate that such a weighting process is unstable, suboptimal, and often arbitrary. It has been our conceit that the subtle weighting and combining of attributes can be accomplished only by the mysterious intuitive deliberations of human intelligence.

II. The task of multiattribute weighting is complicated by the fuzzy logic employed by the decision maker when facing a not fully comprehensible problem. It is ambitious, for example, to expect the decision maker to state that "λi = .42", or that ".45 ≤ λi ≤ .5". More likely he would express himself in such terms as: "λi should be substantially larger than .5", or "λi should be in the vicinity of .4 but rather larger", or some similar fuzzy statement. The newly developing theory of fuzzy sets is intended to formalize such language [27].

III. The total number of all possible (and identifiable) criteria or attributes is usually very large. Obviously we cannot expect any human being to assign priority weights to thousands of attributes with any reliability. Yet the selection of the "most relevant" attributes is usually achieved by applying some weighting structure to the complete identifiable set of attributes and then by disregarding those which have received a weight below some predetermined threshold level.
-170-

N. To any set of priority weights, even if "correctly" estimated,


there might still correspond a large number (possibly infinite)
of equally plausible solutions. The solution maximizing implicit
utility function, x * , does not have to be an Nex solution in linear
cases, i.e. it might belong to the relative interior of some
face of polyhedron X. Then there is no set of weights which
could do anything more than to identify the face containing x * •
V. Alterations of weights reflect the fact that they are de-
pendent on a particular set of feasible alternatives considered.
So the changes in X would impute different A to Sex) even if U
is considered fixed. As discussiQn and examples in [30J suggest,
any particular weighting structure must be learned, it is not in the
independent possession of the decision maker and cannot be simply
"extracted". Priority weights should be a result of the analysis
rather than its input.

EXAMPLE. We refer to the decomposition of Λ in Figure 2.5.1.1. We could reduce the original Nex by the recursive reduction of Λ as indicated in Figure 5.3.1.

Figure 5.3.1.

Observe that initially we reduced Λ to 0 ≤ λ1 ≤ .7, 0 ≤ λ2 ≤ .6 and .3 ≤ λ3 ≤ 1. There is no corresponding reduction in Nex, so the additional information had no impact. Further we specified our weights as .1 ≤ λ1 ≤ .3, 0 ≤ λ2 ≤ .5 and .7 ≤ λ3 ≤ .9. This leads to the reduced set {x1, x2}. If we were to further limit our weights, for example to .1 ≤ λ1 ≤ .2, .2 ≤ λ2 ≤ .5 and .75 ≤ λ3 ≤ .9, then we would obtain x2 as the solution. (More precisely, the face of X containing x2 would also contain the solution maximizing U, if the weights chosen reflect U correctly.)

5.3.2. The Ideal Solution.

We shall denote the maximum of each individual component of θ(x) as:

Max_{x ∈ X} θi(x) = θi(x̄i) = θ̄i,  i = 1, ..., ℓ.   (5-3-1)

Then θ̄ = (θ̄1, ..., θ̄ℓ) can be defined as the "ideal solution" at which all objective functions would attain their maximum feasible values. So, if there existed x̄ ∈ X such that θ(x̄) = θ̄, then such a solution x̄ would also be the maximum of any increasing utility function U. There would be no decision problem. The ideal solution is, however, generally infeasible, x̄ ∉ X.

Because of this prominent role of the ideal solution, we can argue that the decision maker, instead of maximizing an unknown (and possibly nonexistent) function U, is trying to find a solution which would be "as close as possible" to the ideal solution. Such a fuzzy statement of human purpose is probably more realistic than the maximization of U.

Let us briefly discuss some possibilities of measuring the "closeness" to the ideal solution. The fuzziness of "as close as possible" can be simply interpreted and measured if only a single dimension, the ith criterion, is considered at a time. The degree of closeness of an xj ∈ N to x̄ with respect to i is defined as

di(xj),  0 ≤ di(xj) ≤ 1,  i = 1, ..., ℓ, j = 1, ..., k.   (5-3-2)

Observe that for θi(xj) = θ̄i we assign di(xj) = 1, the highest degree of closeness. As the difference θ̄i - θi(xj) increases for different xj ∈ N we see the corresponding di(xj) decreasing toward zero. The assignment of di(xj) is not difficult because all the θi(xj) can be completely ordered and thus the preferences are explicit. We want to capture the relative strength of these preferences in relation to the ideal point. Aside from a subjective assessment of di(xj) we could utilize some formal functions:

di(xj) = θi(xj)/θ̄i,   (5-3-3)

or, more complicated,

di(xj) = (θi(xj) - θ̲i)/(θ̄i - θ̲i),   (5-3-4)

where θ̲i = Min_j θi(xj), i = 1, ..., ℓ. Naturally, many more functions like those in (5-3-4) could be considered. A proper procedure would be based on an interactive refinement of some function like those in (5-3-3) and (5-3-4).

Among the operations described in [27] we find the operation of concentration very useful:

CON(di(xj)) = [di(xj)]^α,  α > 1,   (5-3-5)

where α is the power of di(xj). This operation reduces the degree of closeness relatively less for higher degrees and relatively more for lower degrees. Similarly, the operation of dilation, defined as in (5-3-5) for 0 < α < 1, has the opposite effect to that of concentration. Observe that concentration leads to contrast intensification (the differences between degrees are larger) and thus effectively reduces the fuzziness.

We summarize that by combining some functional form of di(xj) with subjective and interactive assessment via concentration and dilation we construct fuzzy sets {θi(xj), di(xj)} describing the closeness of any xj ∈ N to x̄ with respect to the ith criterion.
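As a small numerical sketch, the degrees of closeness (5-3-3) and the concentration operation (5-3-5) can be computed for the five-point table used earlier in Section 5.1; treating that set as N here, and the exponent α = 2, are assumptions for illustration.

```python
# Degrees of closeness to the ideal point via (5-3-3), plus concentration (5-3-5).
theta = {
    "x1": (6, 3, 4, 8),
    "x2": (6, 8, 4, 6),
    "x3": (8, 2, 6, 4),
    "x4": (7, 2, 5, 6),
    "x5": (6, 7, 5, 6),
}
# Ideal point: componentwise maxima over the candidate set.
ideal = tuple(max(v[i] for v in theta.values()) for i in range(4))  # (8, 8, 6, 8)

def closeness(x):
    """d_i(x) = theta_i(x) / ideal_i, formula (5-3-3)."""
    return tuple(t / m for t, m in zip(theta[x], ideal))

def concentrate(d, alpha=2):
    """(5-3-5): raise degrees to a power alpha > 1, intensifying contrast."""
    return tuple(v ** alpha for v in d)

for x in theta:
    print(x, [round(v, 3) for v in closeness(x)])
print(ideal)
```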
Next problem is to design similar measure of closeness with
respect to all criteria. Similar to [27J we consider one rational
interpretation of "as close as possible" the following rule:
(5-3-6)

Because of the fuzziness and the complexity of a typical


problem we should concentrate on eliminating "obviously bad"
solutions rather than on identifying the best ones. So, the
procedure (5-3-6) should reject all solutions with their degree
of closeness smaller than some predetermined level of aspiration,
say .5. An example is given ~t the end of this section.
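Rule (5-3-6) and the aspiration-level screen can be sketched as follows. The data are the degrees of closeness from table (5-3-12) in the example at the end of this section; the dictionary layout and function name are ours:

```python
d = {
    "x1": [0.333, 0.75, 0.5], "x2": [1, 1, 0], "x3": [0.333, 0, 1],
    "x4": [0, 0.25, 1], "x5": [0.111, 0.666, 0.666], "x6": [0.111, 0.666, 0.666],
}

# (5-3-6): overall degree of closeness = min over criteria; pick its maximizer
best = max(d, key=lambda x: min(d[x]))
print(best)  # x1

def screen(closeness, aspiration=0.5):
    # reject every solution whose overall degree falls below the aspiration level
    return {x: min(ds) for x, ds in closeness.items() if min(ds) >= aspiration}
```

With an aspiration level of .5 all six alternatives of this particular table are rejected, which itself signals that the level was set too high for the given set.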
Another natural approach would be to minimize the distance
between x^j and x̄ in a geometrical sense. Let us now redefine
d_i(x^j) as a distance, one minus the former degree of closeness;
for degrees assigned by (5-3-3),

d_i(x^j) = [ḡ_i − g_i(x^j)] / ḡ_i,   (5-3-7)

and use a family of L_p-metrics which provides a wide range of
geometric measures of closeness possessing some desirable properties.
They are defined as

L_p(x^j) = [Σ_{i=1}^{ℓ} d_i^p(x^j)]^{1/p}, 1 ≤ p ≤ ∞,   (5-3-8)

where d_i^p(x^j) indicates the pth power of d_i(x^j). If

L_p(x^p) = Min_{x^j ∈ X} L_p(x^j),   (5-3-9)

then x^p ∈ X is called a compromise solution with respect to p and
its criteria value image is g(x^p). It has been shown in [28]
that the x^p are nondominated for 1 ≤ p < ∞, and at least one x^p is
nondominated for p = ∞. So it is safe to replace X by N again.
Since we cannot assume all criteria to be of equal importance,
we must use a more general form of (5-3-8):

L_p(x^j) = [Σ_{i=1}^{ℓ} λ_i^p d_i^p(x^j)]^{1/p}, 1 ≤ p ≤ ∞.   (5-3-10)

We could disregard the power 1/p in (5-3-10) for 1 ≤ p < ∞ since
the solutions x^p would not be affected.
To understand the role of the distance parameter p better we can
substitute w_i = d_i(x^j), omit 1/p, and rewrite (5-3-10) as:

L_p(x^j) = Σ_{i=1}^{ℓ} λ_i^p w_i^{p−1} d_i(x^j).   (5-3-11)

As p increases, more and more weight is given to the largest distance.
Ultimately the largest distance completely dominates, and for p = ∞ we
get from (5-3-8) L_∞(x^j) = Max_i {d_i(x^j)}.
By minimizing (5-3-11) for given λ and all p, 1 ≤ p ≤ ∞, we
obtain a compromise set C. Obviously C ⊆ N and thus another way
to reduce N is suggested. As an approximation of C it is quite
sufficient to work with p = 1, 2, ∞ only. A helpful graphical

insight into the relation between C and N can be gained from Figure 5.3.2.

Figure 5.3.2. [The compromise set C as the portion of the nondominated
set N closest to the ideal solution x̄.]

Example. First we demonstrate (5-3-6) on the set of nondominated so-
lutions obtained from (3-3-12). For simplicity let the degrees of
closeness be assigned according to (5-3-3). We get the following
table (5-3-12):

                x^1     x^2     x^3     x^4     x^5     x^6

  d_1(x^j)     .333     1      .333     0      .111    .111
  d_2(x^j)     .75      1       0      .25     .666    .666      (5-3-12)
  d_3(x^j)     .5       0       1       1      .666    .666

  Min_i        .333     0       0       0      .111    .111

Assuming that all criteria are equally important we would
choose x^1.
Similarly we could use the distances (5-3-7) and employ
L_p-metrics. Let us assume again that all criteria are equally
important and that (5-3-3) describes the fuzziness correctly. Then
we get the following table of distances (5-3-13):

                x^1     x^2     x^3     x^4     x^5     x^6

  d_1(x^j)     .667     0      .667     1      .889    .889
  d_2(x^j)     .25      0       1      .75     .334    .334
  d_3(x^j)     .5       1       0       0      .334    .334      (5-3-13)

  L_1(x^j)    1.417     1      1.667   1.75    1.557   1.557
  L_2(x^j)     .757     1      1.445   1.563   1.013   1.013
  L_∞(x^j)     .667     1       1       1      .889    .889

The values of the L_p-metrics are obtained from (5-3-11) with
λ_i = 1 for all i and p = 1, 2, ∞. Observe that x^2 minimizes L_1,
while x^1 minimizes both L_2 and L_∞. These are only approximations
since we considered extreme points only. So the compromise set is
approximated by C = {x^1, x^2}. Obviously we
have to incorporate some measures of relative criteria importance
in our analysis.
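The table above can be checked mechanically. The sketch below recomputes the three L_p rows of (5-3-13) from the distances, with the power 1/p dropped for finite p as the text allows; the minimum of each row identifies the compromise alternatives:

```python
# distances from table (5-3-13): one minus the degrees of closeness
dist = {
    "x1": [0.667, 0.25, 0.5], "x2": [0, 0, 1], "x3": [0.667, 1, 0],
    "x4": [1, 0.75, 0], "x5": [0.889, 0.334, 0.334], "x6": [0.889, 0.334, 0.334],
}

def Lp(ds, p):
    # (5-3-8) with lambda_i = 1; the 1/p power is omitted for finite p
    return max(ds) if p == float("inf") else sum(d ** p for d in ds)

L1 = {x: Lp(ds, 1) for x, ds in dist.items()}
L2 = {x: Lp(ds, 2) for x, ds in dist.items()}
Linf = {x: Lp(ds, float("inf")) for x, ds in dist.items()}

compromise = {min(L, key=L.get) for L in (L1, L2, Linf)}
print(compromise)  # the approximated compromise set: x1 and x2
```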

5.3.3. Entropy as a Measure of Importance.

Throughout this study we have tried to indicate how multicriteria


decision making might be performed without direct assessment of
U. In section 5.3.1. we argued rather strongly against direct
assessment of weights either. Though the utility maximization is
not easily observed, people do assign priority weights no matter
how imperfectly, fuzzily or temporarily.
We suggest a methodology for determining priority weights which
would have the following properties:
(i) They would be dependent on a set of feasible
alternatives X (or N) and therefore sensitive to
any changes in X.
(ii) They would be determined objectively by
analysis of a given problem ( and thus
reflect its particular structure) as well
as in interaction with the decision maker's
subjective assessment of importance.
For simplicity let us introduce a new notation for d_i(x^j):

d_ij = d_i(x^j).   (5-3-14)
Then we construct the following table:

  ith criterion     x^1   ...   x^k      D_i

       1            d_11  ...   d_1k     D_1
       .             .           .        .
       .             .           .        .            (5-3-15)
       ℓ            d_ℓ1  ...   d_ℓk     D_ℓ

We can interpret (5-3-15) as a collection of ℓ fuzzy sets

d_i = {x^j, d_ij}, j = 1, ..., k,   (5-3-16)

which for each i provides a ranking of all the x^j's in terms of close-
ness to the ideal point.
We shall attempt to offer a definition of weight as a measure
of importance:
"A weight, assigned to the ith attribute as a measure of
its relative importance for a given decision problem, is directly
related to the average intrinsic information generated by a given
set of alternatives through the ith attribute, as well as to
its subjective assessment."
EXAMPLE. Let us assume that it was assessed that profit has the
highest weight of importance in a given hierarchy of criteria,
say 1 for simplicity. The analysis of available alternatives in-
dicated that they are all equally profitable. So the criterion
receiving the highest weight would not allow a decision to be made;
it transmits no information to the decision maker. There are some
other valuable criteria which were, however, excluded entirely be-
cause of their zero weight. Our definition would assign 0 to profit auto-
matically.
The introduced definition becomes operational only if the
"average intrinsic information" can be measured. We observe that
the set N is mapped into the unit interval <0,1> through (5-3-16).
The class of all such mappings constitutes a vector

d = (d_1, ..., d_ℓ).   (5-3-17)

To each d_i ∈ d we assign a measure of its contrast intensity
or entropy, denoted by e(d_i).
In (5-3-15) observe that

D_i = Σ_{j=1}^{k} d_ij, i = 1, ..., ℓ,   (5-3-18)
which is the power of d_i. If N is a finite set, then the traditional
entropy measure can be written for our purpose as:

e(d_i) = −K Σ_{j=1}^{k} (d_ij / D_i) ln(d_ij / D_i),   (5-3-19)

where K > 0 and e(d_i) ≥ 0. When all the d_ij are equal to each
other for a given i, d_ij / D_i = 1/k, and e(d_i) takes on its maximum
value, say e_max. Obviously e_max = ln k. So if we set K = 1/e_max,
then 0 ≤ e(d_i) ≤ 1 for all d_i ∈ d. Such a normalized entropy measure
is useful for comparative analysis.
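The normalized measure can be written directly from (5-3-19). This sketch is our own illustration; it confirms that equal memberships give the maximum entropy of 1:

```python
import math

def entropy(d_row):
    # normalized entropy (5-3-19) with K = 1/e_max = 1/ln(k)
    D = sum(d_row)                     # the power D_i of the fuzzy set, (5-3-18)
    k = len(d_row)
    return -sum((d / D) * math.log(d / D) for d in d_row) / math.log(k)

print(entropy([0.5, 0.5, 0.5, 0.5]))  # equal memberships: maximum entropy 1.0
print(entropy([0.9, 0.1]))            # unequal memberships: entropy below 1
```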
We also introduce the total entropy of N, defined as

E = Σ_{i=1}^{ℓ} e(d_i).   (5-3-20)

Then a measure of contrast intensity of the ith attribute
is defined as:

λ̂_i = [1 − e(d_i)] / (ℓ − E), i = 1, ..., ℓ.   (5-3-21)

Observe that by reducing N we could shift the ideal solution
and thus change the d_ij's, and then of course also the e(d_i)'s, E and the λ̂_i's.
So we can evaluate whether such a reduction increases contrast in-
tensity and thus adds decision-relevant information. Similarly
we can study the influence of adding or deleting any particular
criterion. We can test any number of additional criteria because all
components of N will stay nondominated no matter how much ℓ is
increased. We might try to find such a combination of criteria
which would give us the highest overall value of contrast intensity.
If we denote the subjective assessment of importance of the ith
attribute as w_i, dependent on social, cultural, traditional and environ-
mental influences, then we can express our definition of a weight
of importance as

λ_i = λ̂_i w_i / Σ_{i=1}^{ℓ} λ̂_i w_i, i = 1, ..., ℓ.   (5-3-22)

EXAMPLE. Let us assume that we have three criteria g_i which
are assigned subjective weights w_i. There are four different
alternatives available (all nondominated). The relevant numerical
values are summarized in (5-3-23):

  w_i   g_i(x^j)      x^1     x^2     x^3     x^4

  .8    g_1(x^j)       7       8      8.5     (9)
  .1    g_2(x^j)     (100)    60      20      80         (5-3-23)
  .1    g_3(x^j)       4       4      (6)      2

The values in parentheses are those of the ideal solution. Let us
express the degrees of closeness simply by using d_i(x^j) = (1/ḡ_i) g_i(x^j) = d_ij
from (5-3-3). Then we construct the numerical equivalent of (5-3-15)
in (5-3-24):

         x^1     x^2     x^3     x^4      D_i

  1     .778    .889    .944     1       3.611
  2      1      .6      .2       .8      2.6             (5-3-24)
  3     .667    .667     1      .334     2.668

Next we calculate e(d_i) according to (5-3-19), with K = 1/e_max,
e_max = ln 4 = 1.3863.
The calculations are given in table (5-3-25):

              d_ij / D_i            (d_ij / D_i) ln(d_ij / D_i)
           1      2      3             1        2        3

  x^1    .216   .385   .25          -.331    -.367    -.347
  x^2    .246   .231   .25          -.345    -.338    -.347
  x^3    .261   .077   .375         -.350    -.197    -.368      (5-3-25)
  x^4    .277   .307   .125         -.356    -.363    -.260

  Σ       1      1      1          -1.382   -1.265   -1.322

We get e(d_1) = .997, e(d_2) = .913, e(d_3) = .954,
and E = 2.864. Then λ̂_1 ≈ .022, λ̂_2 ≈ .64 and λ̂_3 ≈ .338 indicate the
relative contrast intensities measuring the intrinsic average
information transmitted by each attribute. The weights of im-
portance to be assigned are:

λ_1 = .153, λ_2 = .555, λ_3 = .292.

Let us use the simple additive weighting criterion to evaluate the
alternatives in (5-3-24) with both the w_i's and the λ_i's (and also
the λ̂_i's):

                     x^1      x^2      x^3      x^4

  Σ_i w_i d_ij      .789     .838     .875    (.913)

  Σ_i λ_i d_ij     (.869)    .664     .547     .695

  Σ_i λ̂_i d_ij     (.883)    .628     .486     .647

The maximum of each row is shown in parentheses. So the traditional
approach would recommend x^4 while our method indicates x^1 to be
the best solution.
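The whole computation (5-3-19) through (5-3-22) for this example can be reproduced in a few lines. The inputs are the table values of (5-3-24) and the subjective weights w_i; the small deviations from the rounded figures printed in the text come only from rounding:

```python
import math

d = [  # degrees of closeness d_ij from (5-3-24); rows = criteria, columns = x1..x4
    [0.778, 0.889, 0.944, 1.0],
    [1.0, 0.6, 0.2, 0.8],
    [0.667, 0.667, 1.0, 0.334],
]
w = [0.8, 0.1, 0.1]  # subjective weights w_i

def entropy(row):  # normalized entropy (5-3-19), K = 1/ln 4
    D = sum(row)
    return -sum((x / D) * math.log(x / D) for x in row) / math.log(len(row))

e = [entropy(row) for row in d]
E = sum(e)                                            # total entropy (5-3-20)
lam_hat = [(1 - ei) / (len(d) - E) for ei in e]       # contrast intensities (5-3-21)
s = sum(lh * wi for lh, wi in zip(lam_hat, w))
lam = [lh * wi / s for lh, wi in zip(lam_hat, w)]     # weights of importance (5-3-22)

scores_w = [sum(w[i] * d[i][j] for i in range(3)) for j in range(4)]
scores_lam = [sum(lam[i] * d[i][j] for i in range(3)) for j in range(4)]
print(scores_w.index(max(scores_w)))      # 3: the traditional weights pick x4
print(scores_lam.index(max(scores_lam)))  # 0: the entropy-based weights pick x1
```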

5.3.4. A Method of Displaced Ideal.

We have described several ways of reducing the set N. Let us


summarize their main features:
(i) If the decision maker can reliably express a strict prefe-
rence between any two elements of N, then the "unpreferred"
solution can be removed.
-181-

(ii) In the framework of linear programming the


decomposition of the parametric space of
weights is available. Fuzzy assessment of
possible weight intervals then leads to a
reduction of N.

(iii) We can transform outcomes into degrees of


closeness to the ideal solution with respect
to a single criterion and then retain only
solutions with the degree of closeness with
respect to all criteria exceeding predetermined
aspiration level.

(iv) Find a compromise set C which is the subset


of N of all solutions closest to the ideal
solution with respect to one or more L_p-metrics.
(v) Using the entropy measure of importance we
can discard those criteria which manifest low
contrast intensity and therefore receive low
weight of importance. A corresponding decrease
in the number of criteria considered could lead
to the reduction of N.

The net result of discarding some elements of N (using any
of the above approaches) is the corresponding displacement of the
ideal point. Originally we substituted N for X and considered all
solutions from X−N infeasible. The ideal solution for both X and
N is identical. The situation changes, however, when N is further
reduced to N₁ (or C). By this we construct a new feasible set with
some of the original components of the ideal solution removed. Thus
the ideal solution is displaced closer to the new feasible set.
Since all our analytical information has been determined
in dependency on the ideal solution, it must all be recalculated
and reevaluated because of the displacement. We see that a dynamic,
self-adjusting, interactive and iterative procedure can be designed.
Its iterative property of convergence to a single solution can be
best demonstrated graphically. Let us assume for simplicity that
only the L_p-metric criterion is used for the reduction of N. We
also assume that the outcomes have already been scaled into the
interval <0,1> for all criteria. We may start with the simplified
two-criteria situation as in Figure 5.3.3.
Figure 5.3.3. [Successive displacements of the ideal point ḡ(x)
toward the reduced feasible set in the two-criteria case.]

Because each displacement of the ideal solution will result


in reevaluation of weights and because of combined effect of all
reduction methods we are most likely to observe more general path
of movement of the ideal solution.
Though the iterative process converges to a single point it
would not be wise to force the technique to such an extreme. The
displacements of the ideal solution will become progressively
smaller reflecting smaller returns for additional bits of infor-
mation. Rather than seek a single solution, and thus impose
normative inflexibility on the decision maker, the procedure would
stop whenever resulting compromise set contains "few enough" alter-
natives to allow for the final decision to be made. Since by
adding more criteria the nondominance of any alternative is not
violated, these "chosen few" can be safely evaluated with respect
to any social, legal, moral, aesthetic and other aspect of this
complex world.
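A minimal sketch of the displaced-ideal loop described above — our own illustration, using only an L_2 criterion with equal weights and assuming positive outcomes — recomputes the ideal point after every reduction and stops once "few enough" alternatives remain:

```python
def displaced_ideal(outcomes, keep, p=2):
    # outcomes: alternative -> list of criteria values (larger is better)
    N = dict(outcomes)
    m = len(next(iter(N.values())))
    while len(N) > keep:
        # the ideal point of the current reduced set (it is displaced on each pass)
        ideal = [max(g[i] for g in N.values()) for i in range(m)]
        # L_p distance to the ideal, built from degrees of closeness (5-3-3)
        def lp(g):
            return sum((1 - gi / bi) ** p for gi, bi in zip(g, ideal))
        worst = max(N, key=lambda x: lp(N[x]))
        del N[worst]  # discard the solution farthest from the current ideal
    return set(N)

# the four alternatives of example (5-3-23)
data = {"x1": [7, 100, 4], "x2": [8, 60, 4], "x3": [8.5, 20, 6], "x4": [9, 80, 2]}
print(displaced_ideal(data, keep=2))  # the procedure retains x1 and x2
```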
-183-

BIBLIOGRAPHY

1. Arrow, K.J., Barankin, E.W. and Blackwell, D., "Admissible Points


of Convex Sets", In: H.W. Kuhn, A.W. Tucker (eds.), Contributions
to the Theory of Games, Princeton University Press, Princeton,
New Jersey, pp. 87-91, 1953.

2. Charnes, A. and Cooper, W.W., Management Models and Industrial


Applications of Linear Programming, Vols. I and II, John Wiley
& Sons, New York, 1961.
3. DaCunha, N.O. and Polak, E., "Constrained Minimization Under Vector-
Valued Criteria in Finite Dimensional Space", Journal of Math Analysis
and Applications, Vol. 19, pp. 103-124, 1967.

4. Gal, T. and Nedoma, J., ''Multiparametric Linear Programming",


Management Science, Vol. 18, No.7, March 1972, pp. 406-421.

5. Geoffrion, A.M., "Solving Bicriterion Mathematical Programs",


Operations Research, Vol. 15, pp. 39-54, 1967.

6. Geoffrion, A.M., "Proper Efficiency and the Theory of Vector Maxi-


mization", Journal of Mathematical Analysis and Applications, Vol. 22,
pp. 618-630, 1968.

7. Hadley, G., Linear Programming, Addison-Wesley, Reading, Mass., 1961.

8. Klahr, C.N., "Multiple Objectives in Mathematical Programming",


Operations Research, Vol. 6, No.6, pp. 849-855, 1958.

9. Klinger, A., "Vector-Valued Performance Criteria", IEEE Transactions


on Automatic Control, pp. 117-118, 1964.

10. Koopmans, T.C., "Activity Analysis of Production and Allocation",


Cowles Commission for Research in Economics, Monograph No. 13,
John Wiley & Sons, New York, 1951.

11. Kuhn, H. W. and Tucker, A. W., "Nonlinear Programming", Proceedings of


the Second Berkeley Symposium on Mathematical Statistics and Proba-
bility, pp. 481-492, University of California Press, Berkeley,
California, 1951.

12. MacCrimmon, K.R., "Decision Making Among Multiple-Attribute Alterna-


tives: A Survey and Consolidated Approach", Memorandum RM-4823-ARPA,
December 1968, The Rand Corporation, Santa Monica, California.
-184-

13. Manas, J. and Nedoma, J., "Finding All Vertices of a Convex
Polyhedron," Numerische Mathematik, Vol. 12 (1968), pp. 226-229.

14. Mangasarian, O.L., Nonlinear Programming, McGraw-Hill, New York,


1969.

15. Markowitz, H., "The Optimization of Quadratic Function Subject


to Linear Constraints," Naval Research Logistics Quarterly,
Nos. 1 and 2, March and June, 1956, pp. 111-133.

16. Pareto, V., Cours d'Économie Politique, Lausanne: Rouge, 1896.

17. Raiffa, H., "Preferences for Multi-Attributed Alternatives,"


Memorandum RM-5868-DOT/RC, April, 1969, The Rand Corporation.

18. Raiffa, H., Decision Analysis, Addison-Wesley, 1970.

19. Shepard, R.N., "On Subjectively Optimum Selection Among Multi-


attribute Alternatives." In: Maynard W. Shelly, III, and Glenn
W. Bryan (Eds.), Human Judgement and Optimality, John Wiley &
Sons, 1964.

20. Von Neumann, J. and Morgenstern, 0., Theory of Games and Economic
Behavior, Princeton University Press, 1953.

21. Stoer, J. and Witzgall, C., Convexity and Optimization in Finite


Dimensions I, Springer-Verlag, 1970.

22. Yu, P.L., "The Set of All Nondominated Solutions in Decision


Problems with Multiobjectives," Systems Analysis Program, Working
Paper Series, No. F71-32, University of Rochester, September,
1971.

23. Yu, P.L., "Cone Convexity, Cone Extreme Points and Nondominated
Solutions in Decision Problems with Multiobjectives," Center for
System Science, 72-02, University of Rochester, Rochester, New York,
1972.

24. Yu, P.L. and Zeleny, M., "The Set of All Nondominated Solu-
tions in Linear Cases and a Multicriteria Simplex Method,"
Center for System Science, CSS 73-03, University of Rochester,
1973.
25. Yu, P.L. and Zeleny, M., "On Some Multi-Parametric Programs,"
Center for Systems Science, CSS 73-05, University of Rochester,
1973.
26. Zadeh, L.A., "Optimality and Non-Scalar-Valued Performance
Criteria," IEEE Transactions on Automatic Control, AC-8 (1963) 1,
59-60.
-185-

27. Zadeh, L.A., "OUtline of a New Approach to the Analysis of


Complex Systems and Decision Processes," In: Multiple Criteria
Decision Making, Columbia: USC Press, 1973.
28. Zeleny, M. and Cochrane, J.L. (eds.), Multiple Criteria Deci-
sion Making, Columbia, S.C.: The University of South Carolina
Press, 1973, p.816.

29. Zeleny, M. and Cochrane, J.L., "A Priori and a Posteriori Goals
in Macroeconomic Policy Making," In: Multiple Criteria Decision
Making, Columbia: USC Press, 1973.
30. Zeleny, M., "Compromise Programming," In: Multiple Criteria De-
cision Making, Columbia: USC Press, 1973.
31. Zeleny, M., "A Selected Bibliography of Works Related to the
Multiple Criteria Decision Making," In: Multiple Criteria Deci-
sion Making, Columbia: USC Press, 1973.
-186-

APPENDIX
-187-

A1. A Note on the Elimination of Redundant Constraints.

We have noted that finding the nondominated extreme points of X
via direct decomposition of the parametric space can be efficient
only if the nonredundant constraints of the individual A(x^i) can be
identified. Each A(x^i) is defined by as many linear inequalities as
there are nonbasic variables in the simplex tableau. For instance,
in the example of Section 2.5.1 we have seven inequalities for a
polyhedron in two-dimensional space. Many of these constraints are
redundant and their identification might not be trivial. A linear
programming subroutine, such as the one in (2-4-1) of Section 2.4,
has to be employed. Introducing a redundant constraint might lead
to a dominated basis. Even if our goal is just a decomposition of
the multiparametric space A through the identification of adjacent
decomposition polyhedra, nonredundant constraints must be identified,
and thus procedures of this type are generally inefficient; see e.g.
[Gal, Nedoma, 1972].

Since the simplex method does not require a careful elimination


of redundant constraints, but disregards them automatically, relatively
little attention has been devoted to this problem.
-188-

One exception in the literature on linear programming is an article
by J. C. G. Boot (1962),*) dealing with some techniques for the
identification of redundant constraints. However, the method is not
suitable for our purpose because we will have to use more refined
definitions of redundancy in linear constraints.

Let the feasible set X be generated by a set of m constraints
A_i x ≤ b_i:

X = {x | x ∈ E^n, A_i x ≤ b_i, i = 1, ..., m}.

D1. A constraint, say constraint 1, is inessential if, and only if, for
all x such that

A_i x ≤ b_i, i = 2, ..., m,

we also have A_1 x ≤ b_1.

D2. A constraint, say constraint 1, is strongly redundant if, and only if,
for all x such that

A_i x ≤ b_i, i = 2, ..., m,

we have A_1 x < b_1.

*) J. C. G. Boot: On Trivial and Binding Constraints in Programming
Problems, Management Science, 8 (1962) 4, 419-441.
-189-

D3. Constraint 1 is strongly redundant if, and only if, there exists no
vector x such that

A_i x ≤ b_i, i = 2, ..., m,

and A_1 x ≥ b_1.

Remark. The above definition may be restated as being true if there is
no x such that

A_i x ≤ b_i, i = 2, ..., m,

and A_1 x = b_1.

We shall call an inessential constraint which is not s-redundant
a weakly redundant constraint.

D4. A constraint, say constraint 1, is weakly redundant if, and only if,
there exists x such that

A_i x ≤ b_i, i = 2, ..., m, and A_1 x = b_1,

and there is no x such that

A_i x ≤ b_i, i = 2, ..., m, and A_1 x > b_1.

Note. The first part of this definition assures no s-redundancy; the second
part the inessentiality.
In the traditional LP analysis all redundant constraints are dispensable.
For the purpose of our problem only the s-redundant constraints are superfluous.
All essential and w-redundant constraints are operative in our computations.
-190-

We introduce some graphical examples to clarify the concepts. The
shaded area is A_i x ≤ b_i, i = 2, ..., m, and constraint 1 is being
explored.

[Three sketches of configurations in which constraint 1 is essential.]

D5. Constraint 1 is essential if, and only if, there is some x for which

A_i x ≤ b_i, i = 2, ..., m,

and A_1 x > b_1.

Note. If the set {x | x ∈ E^n; A_i x ≤ b_i, i = 2, ..., m} = ∅, then constraint
1 is inessential and dispensable.

To determine whether constraint 1 is s-redundant, we check whether
the system

A_i x ≤ b_i, i = 2, ..., m, A_1 x = b_1

possesses a feasible solution. If it does, then 1 is essential or w-redun-
dant; if it does not, 1 is s-redundant.
So we form

A_1 x + y_1 = b_1,

where the y_i's are slack variables and y_1 is an artificial variable. Solve

Min y_1

s.t.

A_i x + y_i = b_i, i = 2, ..., m.

If there are some constraints of the form ≥ or = among the i = 2, ..., m
constraints, then corresponding artificial variables must be appended.
-192-

Before solving the above problem we try to reduce the number of
constraints entering the computations.
Strongly redundant constraints can be dominant and nondominant.

D6. A strongly redundant constraint, say constraint 1, is dominant if,
and only if, the axes intercepts of A_1 x = b_1 are greater than those of
some A_i x ≤ b_i, i = 2, ..., m (or if the axes intercepts of A_1 x = b_1 are
smaller than the corresponding intercepts of some constraint A_i x ≥ b_i,
i = 2, ..., m). Otherwise, the constraint is nondominant.

Remark. All w-redundant constraints are nondominant.

We can discard all dominant s-redundant constraints by using the
following simple method:
1. Consider all constraints with b_i > 0, i = 1, ..., m. These can be divided
   into two groups: ≤ and ≥.
2. Divide each constraint in both groups by the corresponding b_i. We thus
   get new coefficients α_ij = a_ij / b_i, i = 1, ..., m.
3. a. Group ≤: if for any two constraints, say 1 and 2, α_1j < α_2j,
      j = 1, ..., n, then constraint 1 is s-redundant and dominant.
   b. Group ≥: if for any two constraints, say 1 and 2, α_1j > α_2j,
      j = 1, ..., n, then constraint 1 is s-redundant and dominant.
4. Delete all s-redundant dominant constraints.

We are left with the constraints having b_i = 0, the equality constraints,
and the w-redundant, essential and s-redundant nondominant constraints.
All these will enter further computations.
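Steps 1-4 can be sketched as follows for a group of "≤" constraints with positive right-hand sides; the function is our own illustration, and the data are the five constraints of the example below:

```python
def screen_dominant(constraints):
    # constraints: list of (coefficients, b) of "<=" type with b > 0
    norm = [[a / b for a in A] for A, b in constraints]  # step 2: alpha_ij = a_ij / b_i
    dominant = set()
    for i, ci in enumerate(norm):
        for j, cj in enumerate(norm):
            # step 3a: all coefficients strictly smaller -> i is implied by j (x >= 0)
            if i != j and all(x < y for x, y in zip(ci, cj)):
                dominant.add(i)
    return [c for k, c in enumerate(constraints) if k not in dominant]  # step 4

cons = [([2, 2], 4), ([3, 1], 3), ([2.5, 1.5], 3.5), ([0.5, 1], 3), ([2.5, 1.5], 4)]
print(screen_dominant(cons))  # constraints 4 and 5 are dominant s-redundant and deleted
```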
-193-

We can graphically clarify the concepts:

[Sketches of dominant s-redundant, nondominant s-redundant, w-redundant,
and essential constraints.]
Example. Consider the following set of constraints and discard all
s-redundant constraints.

1. 2x_1 + 2x_2 ≤ 4
2. 3x_1 + x_2 ≤ 3
3. (5/2)x_1 + (3/2)x_2 ≤ 7/2
4. (1/2)x_1 + x_2 ≤ 3
5. (5/2)x_1 + (3/2)x_2 ≤ 4

First, let us delete all dominant constraints by dividing all constraints
by their right-hand sides:

(1/2)x_1 + (1/2)x_2 ≤ 1
x_1 + (1/3)x_2 ≤ 1
(5/7)x_1 + (3/7)x_2 ≤ 1
(1/6)x_1 + (1/3)x_2 ≤ 1
(5/8)x_1 + (3/8)x_2 ≤ 1

We can see that (1/6, 1/3) and (5/8, 3/8) each have both components smaller
than (5/7, 3/7), so the last two constraints may be deleted.
Consider

2x_1 + 2x_2 ≤ 4
3x_1 + x_2 ≤ 3
(5/2)x_1 + (3/2)x_2 ≤ 7/2.

Check for constraint 1:

[Simplex tableau]

The solution exists; constraint 1 is essential.

Check for constraint 2:

[Simplex tableau]

The solution exists; constraint 2 is essential.

Check for constraint 3:

[Simplex tableau]

The solution exists; actually the 3rd constraint is w-redundant since the
other two constraints are satisfied as equalities.
A2. Examples of Output Printouts.

Problem (1).

        x_3 + 3x_4 + 2x_5
v-Max   x_3 + 2x_4 + 3x_5 + x_6

subject to:

x_1 + 2x_2 + x_3 + x_4 + 2x_5 + x_6 + 2x_7 ≤ 16

−2x_1 − x_2 + x_4 + 2x_5 + x_7 ≤ 16

−x_1 + x_3 + 2x_5 − 2x_7 ≤ 16

x_2 + 2x_3 − x_4 + x_5 − 2x_6 − x_7 ≤ 16

x_i ≥ 0, i = 1, ..., 7

_________________ EXECUTION
43
******

00 2.00000 1.00000 1.00000 2.00000 1.00000 2.00000 16.00000
00 -1.00000 0.00000 1.00000 2.00000 0.00000 1.00000 16.00000
00 0.00000 1.00000 0.00000 2.00000 0.00000 -2.00000 16.00000
00 1.00000 2.00000 -1.00000 1.00000 -2.00000 -1.00000 16.00000
00 -2.00000 1.00000 -3.00000 -2.00000 0.00000 -1.00000
00 -1.00000 -1.00000 -2.00000 -3.00000 -1.00000 0.00000
00 0.00000 -1.00000 1.00000 0.00000 1.00000 1.00000
-198-

BASIS VALUE
5 0.79999998E 01
9 0.00000000E-38
10 0.00000000E-38
11 0.79999998E 01

VALUES OF THE OBJECTIVE FUNCTIONS

-0.16000000E 02 -0.24000000E 02 0.00000000E-38
-0.40000000E 02

BASIS VALUE
1 0.16000000E 02
9 0.48000000E 02
10 0.32000000E 02
11 0.16000000E 02

VALUES OF THE OBJECTIVE FUNCTIONS

-0.16000000E 02 0.00000000E-38 -0.16000000E 02
-0.32000000E 02

BASIS VALUE
1 0.79999998E 01
9 0.32000000E 02
10 0.16000000E 02
3 0.79999998E 01

VALUES OF THE OBJECTIVE FUNCTIONS

0.00000000E-38 -0.79999998E 01 -0.16000000E 02
-0.24000000E 02

BASIS VALUE
4 0.53333333E 01
9 0.10666667E 02
10 0.53333333E 01
3 0.10666667E 02

VALUES OF THE OBJECTIVE FUNCTIONS

-0.53333332E 01 -0.21333333E 02 -0.53333333E 01
-0.32000000E 02

BASIS VALUE
1 0.00000000E-38
9 0.53333332E 01
5 0.53333333E 01
3 0.53333330E 01
-199-

VALUES OF THE OBJECTIVE FUNCTIONS

-0.53333336E 01 -0.21333333E 02 -0.53333330E 01
-0.32000000E 02

BASIS VALUE
1 0.00000000E-38
9 0.00000000E-38
5 0.79999998E 01
11 0.79999995E 01

VALUES OF THE OBJECTIVE FUNCTIONS

-0.16000000E 02 -0.24000000E 02 0.00000000E-38
-0.40000000E 02

BASIS VALUE
10 0.16000000E 02
3 0.00000000E-38
4 0.16000000E 02
11 0.31999999E 02

VALUES OF THE OBJECTIVE FUNCTIONS

-0.48000000E 02 -0.32000000E 02 0.16000000E 02
-0.64000001E 02

BASIS VALUE
10 0.16000000E 02
1 0.00000000E-38
4 0.16000000E 02
11 0.31999999E 02

VALUES OF THE OBJECTIVE FUNCTIONS

-0.48000000E 02 -0.32000000E 02 0.16000000E 02
-0.64000001E 02
$IBSYS
COMPILE TIME 25083 TOTAL TIME
OBJECT PROG 5189 DATA STORAGE -
CORE 336 SYMBOL TABLE 3128
-200-

Problem (2).

        5x_2 + x_3 − x_4 + 6x_5 + 8x_6 + 3x_7 − 2x_8
v-Max   5x_1 − 2x_2 + 5x_3 + 6x_5 + 7x_6 + 2x_7 + 6x_8
        x_1 + x_2 + x_3 + x_4 + x_5 + x_6 + x_7 + x_8

subject to:

x_1 + 3x_2 − 4x_3 + x_4 − x_5 + x_6 + x_7 + x_8 ≤ 40

5x_1 + 2x_2 + 4x_3 − x_4 + 3x_5 + 7x_6 + 2x_7 + 7x_8 ≤ 84

4x_2 − x_3 − x_4 − 3x_5 − x_8 ≤ 18

−3x_1 − 4x_2 + 8x_3 + 2x_4 + 3x_5 − 4x_6 + 5x_7 − x_8 ≤ 100

12x_1 + 8x_2 − x_3 + 4x_4 − x_6 + x_7 ≤ 40

x_1 + x_2 + x_3 + x_4 + x_5 + x_6 + x_7 + x_8 ≥ 12

8x_1 − 12x_2 − 3x_3 + 4x_4 − x_5 ≤ 30

−5x_1 − 6x_2 + 12x_3 + x_4 − x_7 + x_8 ≤ 100

x_i ≥ 0, i = 1, ..., 8

--------------- EXECUTION
0200

00 3.00000 -4.00000 1.00000 -1.00000 1.00000 2.00000 4.00000
00 2.00000 4.00000 -1.00000 3.00000 7.00000 2.00000 7.00000
00 4.00000 -1.00000 -1.00000 -3.00000 0.00000 0.00000 1.00000
00 -4.00000 8.00000 2.00000 3.00000 -4.00000 5.00000 -1.00000
00 8.00000 -1.00000 4.00000 0.00000 1.00000 1.00000 0.00000
00 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000
00 -12.00000 -3.00000 4.00000 -1.00000 0.00000 0.00000 0.00000
00 -6.00000 12.00000 1.00000 0.00000 0.00000 -1.00000 1.00000
00 -5.00000 -1.00000 1.00000 -6.00000 -8.00000 -3.00000 2.00000
00 2.00000 -5.00000 0.00000 -6.00000 -7.00000 -2.00000 -6.00000
00 -1.00000 -1.00000 -1.00000 -1.00000 -1.00000 -1.00000 -1.00000
-201-

BASIS VALUE
9 0.64444444E 02
14 0.23111111E 02
11 0.11266667E 03
4 0.53333333E 01
13 0.15866666E 02
5 0.29177718E 02
15 0.38444444E 02
16 0.94666665E 02

VALUES OF THE OBJECTIVE FUNCTIONS

-0.17333333E 03 -0.11866666E 03 -0.35111111E 02
-0.38711111E 03

BASIS VALUE
9 0.58611110E 02
14 0.26611111E 02
11 0.11033333E 03
4 0.76666665E 01
2 0.16666667E 01
5 0.29177117E 02
15 0.43111110E 02
16 0.99333330E 02

VALUES OF THE OBJECTIVE FUNCTIONS

-0.17683333E 03 -0.11633333E 03 -0.38611111E 02
-0.39177777E 03

BASIS VALUE
9 0.57560282E 02
14 0.27347517E 02
11 0.11306383E 03
4 0.97021274E 01
6 0.11914894E 01
5 0.28453900E 02
15 0.19645390E 02
16 0.90297869E 02

VALUES OF THE OBJECTIVE FUNCTIONS

-0.17055319E 03 -0.17906383E 03 -0.39347517E 02
-0.38896453E 03
$IBSYS
COMPILE TIME 25900 TOTAL TIME 48866 MILLIS
OBJECT PROG 5890 DATA STORAGE 3656 AVAILABLE
CORE 46 SYMBOL TABLE 3128
-202-

A3. The Program Description and FORTRAN Printout.

Definitions.

NOV  - number of variables
NOB  - number of objective functions
NOC  - number of constraints
KUSE - number of bases used
NOBZ - (=NOB+1) the composite objective function is included
KAUX - number of auxiliary bases stored
TAB  - tableau array
C    - objective functions array (initial)
Z    - objective functions array
IB   - status array of each variable (0 = out of basis,
       1 = in the basis)
ID   - list of basic vectors
V    - value of basic vectors
X    - value of objective functions
INN  - list of bases previously searched
IAUX - list of auxiliary bases
-203-

List of Subroutines.

(1) PRINIT
    This prints out the simplex tableau and the Z array if
    requested.
(2) OBJVAL
    This computes the value of each objective function.
(3) CHCOMP
    This computes each element of the objective functions
    array, Z.
(4) REMOVE
    This performs the pivot transformation and changes
    the basis.
(5) TEST
    This tests whether any objective function is at its
    maximum and whether alternative maxima exist which
    would dominate the current basis.
(6) TEST2
    This tests whether a proposed basis has been used pre-
    viously, and stores it in the list of bases previously
    searched, if so instructed.
(7) TEST3
    This tests whether a proposed basis has been placed in
    the auxiliary list previously and, if not, stores it.
(8) FORBAS
    This selects the auxiliary basis "closest" to the curr-
    ent basis.
(9) DROP
    This is used by FORBAS to delete a chosen auxiliary ba-
    sis from the auxiliary list.
(10) TEST4
    This is used by FORBAS to test whether the chosen auxi-
    liary basis has been used previously.
(11) CHOS
    This is used by FORBAS to select the auxiliary basis
    "closest" to the current basis.
(12) FEASBL
    This tests if the current basis is nondominated and,
    if it is, stores it in the file of nondominated bases.
(13) REMOV
    This is used by FEASBL to pivot in the subproblem
    tableau.
-205-
C     THIS IS THE MAIN SUBROUTINE FOR A MULTIOBJECTIVE LINEAR
C     PROGRAMMING PROBLEM.
C     THE NUMBER OF CONSTRAINTS IS LIMITED TO 12, THE NUMBER OF
C     OBJECTIVE FUNCTIONS TO 7, AND THE NUMBER OF VARIABLES
C     INCLUDING ALL SLACK AND ARTIFICIAL VARIABLES TO 40
C     NOV: NUMBER OF VARIABLES
C     NOC: NUMBER OF CONSTRAINTS
C     NOB: NUMBER OF OBJECTIVE FUNCTIONS
C     NOBZ: COMPOSITE OBJECTIVE FUNCTION NUMBER, NOB + 1
C
C     WHEN THE CONSTRAINT IS LESS THAN OR EQUAL, IEQ(J) = 0
C     WHEN THE CONSTRAINT IS EQUAL, IEQ(J) = 1
C     WHEN THE CONSTRAINT IS GREATER THAN OR EQUAL, IEQ(J) = 2
C
      COMMON NOV,NOC,NOB,KAUX,KUSE,NOBZ,NOV1,TAB(12,40),V(12),
     1C(8,40),Z(8,40),X(8),ID(12),IB(40),IEQ(12),T(12,40),INN(300,3),
     2IAUX(200,3),FMT(12)
      COMMON/NDOMP/ NONDP
      DIMENSION VC(8)
      DIMENSION IW(40)
      REWIND 9
      NONDP=0
      READ (5,900) NOV,NOC,NOB
  900 FORMAT (2I2,I1)
      WRITE (6,990) NOV,NOC,NOB
  990 FORMAT (5X,2I2,I1)
      NOV1=NOV
      DO 5 I=1,8
      X(I)=0.0
      DO 5 J=1,40
    5 C(I,J)=0.0
      DO 6 I=1,12
      ID(I)=0
      DO 6 J=1,40
      IB(J)=0
      Z(I,J)=0.0
      T(I,J)=0.0
    6 TAB(I,J)=0.0
      KAUX=0
      KUSE=0
      NOBZ=NOB+1
      READ (5,901) IEQ
  901 FORMAT (20I1)
      WRITE (6,991) IEQ
  991 FORMAT (5X,20I1)
      READ (5,902) FMT
  902 FORMAT (18A4)
      WRITE (6,993) FMT
  993 FORMAT (5X,18A4)
      DO 9 I=1,NOC
      READ (5,FMT) (TAB(I,J),J=1,NOV),V(I)
    9 WRITE (6,992) (TAB(I,J),J=1,NOV),V(I)
  992 FORMAT (5X,8F10.5)
      DO 10 I=1,NOB
      READ (5,FMT) (C(I,J),J=1,NOV)
   10 WRITE (6,992) (C(I,J),J=1,NOV)
      DO 11 I=1,NOV
      DO 11 J=1,NOB
   11 C(NOBZ,I)=C(NOBZ,I)+C(J,I)
      NOV1=NOV
      NOM=0
      K=1
      CALL PRINIT
      DO 15 J=1,NOC
      IF(IEQ(J).EQ.1) GO TO 15
      IF(IEQ(J).EQ.0) GO TO 13
      NOV1=NOV1+1
      TAB(J,NOV1)=-1.0
      GO TO 15
   13 NOV1=NOV1+1
      TAB(J,NOV1)=1.0
      IB(NOV1)=1
      ID(J)=NOV1
      K=K+1
   15 CONTINUE
      NOV2=NOV1
      NOV=NOV1
      DO 20 I=1,NOC
      IF(IEQ(I).EQ.0) GO TO 20
      NOV1=NOV1+1
      IB(NOV1)=3
      TAB(I,NOV1)=1.0
      ID(I)=NOV1
      K=K+1
   20 CONTINUE
      IF(NOV1.EQ.NOV2) GO TO 66
      NOV3=NOV2+1
      NOM=NOV1-NOV2
      NOC1=NOC-NOM
      NOC2=NOC+1
      DO 60 I=1,NOM
      DO 35 J=1,NOV2
      DO 35 K=1,NOB
      DO 30 I1=1,NOC
      IF(ID(I1).GT.NOV2) GO TO 25
      IDI1=ID(I1)
      T(K,J)=T(K,J)+V(I1)*C(K,IDI1)
      GO TO 30
   25 Z(K,J)=Z(K,J)+V(I1)
   30 CONTINUE
      T(K,J)=T(K,J)-C(K,J)
   35 CONTINUE
      IJ1=0
      DO 45 J=1,NOV2
      IF (IJ1.EQ.1) GO TO 45
      IF (IB(J).NE.0) GO TO 45
      IK1=0
      DO 40 K=1,NOB
      IF (IK1.NE.0) GO TO 40
      IF (Z(K,J).GT.0.0) GO TO 40
      IF (Z(K,J).NE.0.0) GO TO 38
      IF (T(K,J).GT.0.0) GO TO 40
   38 IK1=1
   40 CONTINUE
      IF (IK1.NE.0) GO TO 45
      JJ=0
      AMIN=100000
      DO 43 K=1,NOC
      IF (TAB(K,J).LE.0.0) GO TO 43
      VV=V(K)/TAB(K,J)
      IF (VV.GT.AMIN) GO TO 43
      AMIN=VV
      JJ=K
   43 CONTINUE
      IF (JJ.EQ.0) GO TO 45
      IDJ=ID(JJ)
      IF (IDJ.LE.NOV2) GO TO 45
      IJ=J
      IJ1=1
   45 CONTINUE
      IF (IJ1.NE.0) GO TO 46
      WRITE (6,904)
  904 FORMAT (5X,30H THERE IS NO FEASIBLE SOLUTION)
      STOP
   46 ICOL=IJ
      IROW=JJ
      DO 50 J=1,NOC
      IF (ID(J).LE.NOV2) GO TO 50
      IF (TAB(J,IJ).LE.0.0) GO TO 50
      IROW=J
   50 CONTINUE
      CALL REMOVE (IROW,ICOL)
      ID(IROW)=ICOL
      IB(ICOL)=1
      DO 51 K=1,8
      DO 51 J=1,40
   51 T(K,J)=0.0
   60 CONTINUE
C AT THIS POINT ALL ARTIFICIAL VARIABLES HAVE BEEN REMOVED
      NOM=0
      DO 65 I=1,NOC
      DO 65 J=NOV3,NOV1
   65 TAB(I,J)=0.0
   66 CONTINUE
      NOV1=NOV2
      CALL OBJVAL
      CALL CHCOMP
      CALL PRINIT
      CALL TEST1
   67 II=0
      IJ=0
      AMIN=0.0
      DO 75 I=1,NOV
      IF (IB(I).GE.1) GO TO 75
      IJ=0
      IF (Z(NOBZ,I).LE.0.0) GO TO 75
      IF (Z(NOBZ,I).LT.AMIN) GO TO 75
      DO 70 J=1,NOB
      IF (IJ.NE.0) GO TO 70
      IF (Z(J,I).GE.0.0) GO TO 70
      IJ=1
   70 CONTINUE
      IF (IJ.NE.0) GO TO 75
      AMIN=Z(NOBZ,I)
      II=I
   75 CONTINUE
      IF (II.EQ.0) GO TO 85
      AMIN=.1E10
      IJ=-1
      DO 80 I=1,NOC
      IF (TAB(I,II)) 80,80,78
   78 AM=V(I)/TAB(I,II)
      IF (AM.GE.AMIN) GO TO 80
      IJ=I
      AMIN=AM
   80 CONTINUE
      IF (IJ) 81,81,82
   81 WRITE (6,905)
  905 FORMAT (//5X,19H UNBOUNDED SOLUTION)
      STOP
   82 IROW=IJ
      ICOL=II
      CALL TEST2 (IROW,ICOL,0)
      IF (IB(ICOL).EQ.5) GO TO 67
      CALL REMOVE (IROW,ICOL)
      GO TO 66
   85 IF (NOB.EQ.1) STOP
      GO TO 186
   86 CONTINUE
C AT THIS POINT THERE IS NO VECTOR THAT IMPROVES ALL OBJ. FUNC.
      DO 110 I=1,NOV
      IF (IB(I).EQ.1) GO TO 110
      AMIN=.1E10
      II=0
      DO 100 J=1,NOC
      IF (TAB(J,I).LE.0.0) GO TO 100
      VV=V(J)/TAB(J,I)
      IF (VV.LT.0.0) GO TO 100
      IF (VV.GT.AMIN) GO TO 100
      AMIN=VV
      II=J
  100 CONTINUE
      IF (II.EQ.0) IB(I)=2
  110 IM(I)=II
      DO 115 I=1,NOV
      IF (IB(I).GT.0) GO TO 115
      JJ=0
      DO 114 J=1,NOB
      IF (Z(J,I)) 114,114,113
  113 JJ=1
  114 CONTINUE
      IF (JJ.EQ.0) IB(I)=3
  115 CONTINUE
      NOVV=NOV-1
      DO 150 I=1,NOVV
      IF (IB(I).NE.0) GO TO 150
      I2=I+1
      DO 140 J=I2,NOV
      IF (IB(I).NE.0.OR.IB(J).NE.0) GO TO 140
      J1=IM(I)
      J2=IM(J)
      TH1=V(J1)/TAB(J1,I)
      IF (TH1.NE.0.0) GO TO 118
      IB(I)=3
      GO TO 140
  118 TH2=V(J2)/TAB(J2,J)
      IF (TH2.NE.0.0) GO TO 119
      IB(J)=3
      GO TO 140
  119 DO 120 K=1,NOB
  120 VC(K)=Z(K,I)*TH1-Z(K,J)*TH2
      IJ=0
      IK=0
      DO 125 K=1,NOB
      IF (VC(K).GT.0.0) IJ=1
  125 IF (VC(K).LT.0.0) IK=1
      IF (IJ.EQ.0.AND.IK.EQ.1) IB(I)=3
      IF (IJ.EQ.1.AND.IK.EQ.0) IB(J)=3
  140 CONTINUE
  150 CONTINUE
      DO 151 I=1,NOV
      IF (IB(I).NE.3) GO TO 151
      CALL TEST2 (IM(I),I,2)
  151 CONTINUE
  155 DO 165 I=1,NOV
      IF (IB(I).GT.0) GO TO 165
      CALL TEST2 (IM(I),I,1)
  165 CONTINUE
      J1=0
      DO 175 I=1,NOV
      IF (J1.EQ.0) GO TO 170
      IF (NONDP.EQ.0) GO TO 175
      IF (IB(I).GT.0) GO TO 175
      CALL TEST3 (IM(I),I)
      GO TO 175
  170 IF (IB(I).GT.0) GO TO 175
      IROW=IM(I)
      IC=I
      J1=1
  175 CONTINUE
      DO 177 I=1,NOV
  177 IF (IB(I).GT.1) IB(I)=0
      IF (IROW.EQ.0.OR.IC.EQ.0) GO TO 210
  176 CALL TEST2 (IROW,IC,0)
      IF (IB(IC).EQ.5) GO TO 190
  180 CALL REMOVE (IROW,IC)
  185 CALL OBJVAL
      CALL CHCOMP
      CALL TEST1
  186 CALL FEASBL (K)
      IF (K.EQ.0.AND.NONDP.GT.0) GO TO 190
      GO TO 86
  190 IF (KAUX.EQ.0) GO TO 220
      CALL FORBAS (KKU)
      IF (KKU.EQ.1) GO TO 220
      GO TO 185
  195 BIG=.1E10
      J1=0
      DO 200 K=1,NOC
      IF (TAB(K,IC).LE.0.0) GO TO 200
      IF (V(K)/TAB(K,IC).GE.BIG) GO TO 200
      BIG=V(K)/TAB(K,IC)
      J1=K
  200 CONTINUE
      IF (J1.NE.0) GO TO 205
      WRITE (6,906)
  906 FORMAT (5X,25H SOLUTION IS NOT FEASIBLE)
      STOP
  205 IROW=J1
      CALL TEST2 (IROW,IC,0)
      IF (IB(IC).EQ.5) GO TO 190
      GO TO 180
  210 WRITE (6,1003) IROW,IC
 1003 FORMAT (//5X,5HIROW:,I5,10X,3HIC:,I5//)
  220 IF (NONDP.EQ.0) GO TO 999
      REWIND 9
      WRITE (6,1011)
 1011 FORMAT (1H1)
      DO 240 I=1,NONDP
      READ (9,1005) (ID(J),J=1,NOC)
 1005 FORMAT (8I2)
      READ (9,1006) (V(J),J=1,NOC)
 1006 FORMAT (8E15.8)
      READ (9,1006) (X(J),J=1,NOBZ)
      WRITE (6,1007)
 1007 FORMAT (///5X,5HBASIS,10X,5HVALUE)
      DO 230 J=1,NOC
  230 WRITE (6,1008) ID(J),V(J)
 1008 FORMAT (5X,I5,5X,E20.8)
      WRITE (6,1009)
 1009 FORMAT (///5X,33HVALUES OF THE OBJECTIVE FUNCTIONS)
  240 WRITE (6,1010) (X(J),J=1,NOBZ)
 1010 FORMAT (5X,6E20.8)
  999 STOP
      END

      SUBROUTINE OBJVAL
C THIS SUBROUTINE COMPUTES THE VALUE OF EACH OBJECTIVE FUNCTION
      COMMON NOV,NOC,NOB,KAUX,KUSE,NOBZ,NOV1,TAB(12,40),V(12),
     1C(8,40),Z(8,40),X(8),ID(12),IB(40),IEQ(12),T(12,40),INN(300,3),
     2IAUX(200,3),FMT(12)
      DO 20 I=1,NOBZ
      SUM=0.0
      DO 15 J=1,NOC
      ID1=ID(J)
   15 SUM=SUM+C(I,ID1)*V(J)
   20 X(I)=SUM
      RETURN
      END
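OBJVAL evaluates each objective at the current basic solution: the value of objective I is the sum over basic positions J of C(I,ID(J))*V(J). The same computation in Python, 0-based and with illustrative names (not part of the listing):

```python
def objective_values(c, basic, v):
    # c[i][j]: cost of variable j in objective i
    # basic[k]: index of the variable basic in row k; v[k]: its value
    return [sum(ci[bk] * vk for bk, vk in zip(basic, v)) for ci in c]
```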
-211-
      SUBROUTINE PRINIT
      COMMON NOV,NOC,NOB,KAUX,KUSE,NOBZ,NOV1,TAB(12,40),V(12),
     1C(8,40),Z(8,40),X(8),ID(12),IB(40),IEQ(12),T(12,40),INN(300,3),
     2IAUX(200,3),FMT(12)
      WRITE (6,900)
  900 FORMAT (//5X,16H SIMPLEX TABLEAU//)
      WRITE (6,901)
  901 FORMAT (1X,6H BASIS,2X,6H VALUE,7X,10H VARIABLES//)
      DO 100 I=1,NOV1,7
      IL=I+6
      IF (IL.GT.NOV1) IL=NOV1
      DO 90 J=1,NOC
   90 WRITE (6,902) ID(J),V(J),(TAB(J,K),K=I,IL)
  902 FORMAT (1X,I6,8E15.8)
      WRITE (6,903)
  903 FORMAT (1X)
  100 CONTINUE
      WRITE (6,904)
  904 FORMAT (///5X,20H OBJECTIVE FUNCTIONS//)
      WRITE (6,905)
  905 FORMAT (//7HOBJ. F.,2X,6H VALUE,7X,10H VARIABLES//)
      DO 200 I=1,NOV1,7
      IL=I+6
      IF (IL.GT.NOV1) IL=NOV1
      DO 190 J=1,NOB
  190 WRITE (6,902) J,X(J),(Z(J,K),K=I,IL)
      WRITE (6,903)
  200 CONTINUE
      WRITE (6,906)
  906 FORMAT (/5X,19H COMPOSITE FUNCTION)
      DO 210 I=1,NOV1,7
      IL=I+6
      IF (IL.GT.NOV1) IL=NOV1
  210 WRITE (6,902) NOBZ,X(NOBZ),(Z(NOBZ,K),K=I,IL)
      WRITE (6,907) KAUX
  907 FORMAT (5X,5HKAUX:,I8)
      RETURN
      END

      SUBROUTINE CHCOMP
C THIS SUBROUTINE COMPUTES THE VALUE OF EACH ELEMENT OF THE
C OBJECTIVE FUNCTION ARRAY
      COMMON NOV,NOC,NOB,KAUX,KUSE,NOBZ,NOV1,TAB(12,40),V(12),
     1C(8,40),Z(8,40),X(8),ID(12),IB(40),IEQ(12),T(12,40),INN(300,3),
     2IAUX(200,3),FMT(12)
      DO 20 I=1,NOBZ
      DO 20 J=1,NOV1
      SUM=0.0
      DO 15 K=1,NOC
      ID1=ID(K)
   15 SUM=SUM+TAB(K,J)*C(I,ID1)
   20 Z(I,J)=SUM-C(I,J)
      RETURN
      END
-212-
      SUBROUTINE REMOVE (IROW,ICOL)
C THIS SUBROUTINE TRANSFORMS THE TABLEAUS AND ENTERS THE ICOL
C ELEMENT IN PLACE OF THE IROW ELEMENT IN THE BASIS
      COMMON NOV,NOC,NOB,KAUX,KUSE,NOBZ,NOV1,TAB(12,40),V(12),
     1C(8,40),Z(8,40),X(8),ID(12),IB(40),IEQ(12),T(12,40),INN(300,3),
     2IAUX(200,3),FMT(12)
      DI=TAB(IROW,ICOL)
      DO 10 I=1,NOV1
   10 TAB(IROW,I)=TAB(IROW,I)/DI
      V(IROW)=V(IROW)/DI
      DO 20 I=1,NOC
      IF (I.EQ.IROW) GO TO 20
      V(I)=V(I)-V(IROW)*TAB(I,ICOL)
      IF (V(I).LT.0.000001) V(I)=0.0
   20 CONTINUE
      DO 30 I=1,NOC
      IF (I.EQ.IROW) GO TO 30
      DO 25 J=1,NOV1
      IF (J.EQ.ICOL) GO TO 25
      TAB(I,J)=TAB(I,J)-TAB(I,ICOL)*TAB(IROW,J)
   25 CONTINUE
   30 CONTINUE
      DO 40 I=1,NOC
      IF (I.EQ.IROW) GO TO 40
      TAB(I,ICOL)=0.0
   40 CONTINUE
      DO 45 I=1,8
      DO 45 J=1,40
   45 Z(I,J)=0.0
      RETURN
      END
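REMOVE is a standard Gauss-Jordan pivot on TAB and V: the pivot row is scaled by the pivot element, then a multiple of it is subtracted from every other row. The same step in Python (a sketch; the listing additionally clamps tiny values of V to zero and clears the Z array afterwards):

```python
def pivot(tab, v, row, col):
    # scale the pivot row, then eliminate the pivot column elsewhere
    piv = tab[row][col]
    tab[row] = [a / piv for a in tab[row]]
    v[row] /= piv
    for i in range(len(tab)):
        if i != row:
            f = tab[i][col]
            tab[i] = [a - f * p for a, p in zip(tab[i], tab[row])]
            v[i] -= f * v[row]
```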

      SUBROUTINE DROP1 (KU)
C THIS SUBROUTINE DROPS THE KU BASIS FROM THE AUXILIARY LIST
      COMMON NOV,NOC,NOB,KAUX,KUSE,NOBZ,NOV1,TAB(12,40),V(12),
     1C(8,40),Z(8,40),X(8),ID(12),IB(40),IEQ(12),T(12,40),INN(300,3),
     2IAUX(200,3),FMT(12)
      IK1=KU
      IK2=KAUX-1
      IF (IK1.GT.IK2) GO TO 25
      DO 20 I=IK1,IK2
      I1=I+1
      IAUX(I,1)=IAUX(I1,1)
   20 IAUX(I,2)=IAUX(I1,2)
   25 IAUX(KAUX,1)=0
      IAUX(KAUX,2)=0
      KAUX=KAUX-1
      RETURN
      END

      SUBROUTINE TEST1
C THIS SUBROUTINE TESTS TO SEE IF ANY OBJ. FUNCTION IS AT ITS
C MAXIMUM AND IF IT IS, TESTS TO SEE IF ANY ALTERNATIVE MAXIMA
C IMPROVES ANOTHER FUNCTION
      COMMON NOV,NOC,NOB,KAUX,KUSE,NOBZ,NOV1,TAB(12,40),V(12),
     1C(8,40),Z(8,40),X(8),ID(12),IB(40),IEQ(12),T(12,40),INN(300,3),
     2IAUX(200,3),FMT(12)
      DO 100 I=1,NOBZ
      MAX=0
      DO 10 J=1,NOV1
      IF (IB(J).EQ.1) GO TO 10
      IF (MAX.LT.0) GO TO 10
      IF (Z(I,J)) 10,7,8
    8 MAX=-1
      GO TO 10
    7 MAX=1
   10 CONTINUE
      IF (MAX) 100,11,11
   11 WRITE (6,906) I
  906 FORMAT (//5X,4H THE,I4,3X,35H OBJECTIVE FUNCTION IS AT A MINIMUM)
      IF (MAX.EQ.1) GO TO 12
      WRITE (6,907)
  907 FORMAT (5X,32H THERE IS NO ALTERNATIVE MINIMUM//)
      GO TO 50
   12 WRITE (6,908)
  908 FORMAT (5X,29H THERE ARE ALTERNATIVE MINIMA//)
      DO 30 J=1,NOV1
      IF (IB(J).EQ.1) GO TO 30
      IF (Z(I,J)) 30,13,30
   13 DO 25 K=1,NOBZ
      IF (Z(K,J)) 25,25,14
   14 MAX=2
      WRITE (6,909) K
  909 FORMAT (//5X,14H OBJ. FUNCTION,I4,2X,12H IS IMPROVED//)
   25 CONTINUE
   30 CONTINUE
      IF (MAX.EQ.2) GO TO 100
   50 CONTINUE
      WRITE (6,910)
  910 FORMAT (//5X,33H BUT NO OBJ. FUNCTION IS IMPROVED//)
  100 CONTINUE
      RETURN
      END

      SUBROUTINE TEST2 (IROW,ICOL,IK)
C THIS SUBROUTINE TESTS TO SEE IF THE PROPOSED BASIS HAS BEEN
C USED PREVIOUSLY
      COMMON NOV,NOC,NOB,KAUX,KUSE,NOBZ,NOV1,TAB(12,40),V(12),
     1C(8,40),Z(8,40),X(8),ID(12),IB(40),IEQ(12),T(12,40),INN(300,3),
     2IAUX(200,3),FMT(12)
      DIMENSION IT(12),IS(12)
      IF (IK.NE.0) GO TO 101
      DO 10 I=1,NOV
   10 IF (IB(I).EQ.5) IB(I)=0
  101 CONTINUE
      DO 11 I=1,NOC
      IT(I)=ID(I)
   11 IS(I)=0
      IT(IROW)=ICOL
      DO 30 I=1,NOC
      MIN=100
      DO 25 J=1,NOC
      IF (IT(J).LE.0.OR.IT(J).GE.MIN) GO TO 25
      IJ=J
      MIN=IT(J)
   25 CONTINUE
      IS(I)=MIN
      IT(IJ)=0
   30 CONTINUE
      I1=0
      I2=0
      DO 40 I=1,NOC
      IF (I-4) 35,35,36
   35 I1=I1+IS(I)*10**(2*(4-I))
      GO TO 40
   36 I2=I2+IS(I)*10**(2*(8-I))
   40 CONTINUE
      IF (KUSE.EQ.0) GO TO 60
      IJ=0
      DO 50 I=1,KUSE
      IF (I1.NE.INN(I,1).OR.I2.NE.INN(I,2)) GO TO 50
      IJ=1
   50 CONTINUE
      IF (IJ.EQ.0) GO TO 60
      IB(ICOL)=5
      GO TO 99
   60 IF (IK.NE.0) GO TO 98
      KUSE=KUSE+1
      INN(KUSE,1)=I1
      INN(KUSE,2)=I2
      IB(ICOL)=1
      I1=ID(IROW)
      IB(I1)=0
      ID(IROW)=ICOL
      GO TO 99
   98 IF (IK.EQ.1) GO TO 99
      KUSE=KUSE+1
      INN(KUSE,1)=I1
      INN(KUSE,2)=I2
   99 RETURN
      END
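TEST2 (like TEST3) fingerprints a basis by sorting its variable indices and packing them, two decimal digits per index, into the pair (I1,I2): the first four indices go into I1, the remainder into I2. Because the packing is order-independent, two bases coincide exactly when their pairs coincide. A Python sketch of the encoding (0-based positions; indices below 100, at most eight constraints, matching the limits the listing relies on):

```python
def basis_key(basic):
    # sort the basis, then pack each index as a two-digit group:
    # positions 0-3 accumulate into i1, positions 4-7 into i2
    i1 = i2 = 0
    for pos, idx in enumerate(sorted(basic)):
        if pos < 4:
            i1 += idx * 10 ** (2 * (3 - pos))
        else:
            i2 += idx * 10 ** (2 * (7 - pos))
    return i1, i2
```

Any permutation of the same basis yields the same key, which is what makes the linear scan over INN a valid "seen before" test.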
      SUBROUTINE TEST3 (IROW,ICOL)
C THIS SUBROUTINE TESTS TO SEE IF A BASIS HAS BEEN STORED FOR
C FUTURE USE AND STORES IT IF IT HAS NOT BEEN STORED PREVIOUSLY
      COMMON NOV,NOC,NOB,KAUX,KUSE,NOBZ,NOV1,TAB(12,40),V(12),
     1C(8,40),Z(8,40),X(8),ID(12),IB(40),IEQ(12),T(12,40),INN(300,3),
     2IAUX(200,3),FMT(12)
      DIMENSION IT(12),IS(12)
      DO 10 I=1,NOC
      IT(I)=ID(I)
   10 IS(I)=0
      IT(IROW)=ICOL
      DO 30 I=1,NOC
      MIN=100
      DO 25 J=1,NOC
      IF (IT(J).LE.0.OR.IT(J).GE.MIN) GO TO 25
      IJ=J
      MIN=IT(J)
   25 CONTINUE
      IS(I)=MIN
      IT(IJ)=0
   30 CONTINUE
      I1=0
      I2=0
      DO 40 I=1,NOC
      IF (I-4) 35,35,36
   35 I1=I1+IS(I)*10**(2*(4-I))
      GO TO 40
   36 I2=I2+IS(I)*10**(2*(8-I))
   40 CONTINUE
      IJ=0
      IF (KAUX.EQ.0) GO TO 51
      DO 50 I=1,KAUX
      IF (I1.NE.IAUX(I,1).OR.I2.NE.IAUX(I,2)) GO TO 50
      IJ=1
   50 CONTINUE
   51 IF (IJ.NE.0) GO TO 99
      KAUX=KAUX+1
      IAUX(KAUX,1)=I1
      IAUX(KAUX,2)=I2
   99 RETURN
      END

      SUBROUTINE TEST4 (KU,II)
C THIS SUBROUTINE TESTS TO SEE IF A CHOSEN AUXILIARY BASIS
C HAS BEEN USED PREVIOUSLY
      COMMON NOV,NOC,NOB,KAUX,KUSE,NOBZ,NOV1,TAB(12,40),V(12),
     1C(8,40),Z(8,40),X(8),ID(12),IB(40),IEQ(12),T(12,40),INN(300,3),
     2IAUX(200,3),FMT(12)
      IA1=IAUX(KU,1)
      IA2=IAUX(KU,2)
      IA3=IAUX(KU,3)
      II=0
      DO 20 I=1,KUSE
      IF (II.EQ.1) GO TO 20
      IF (IA1.EQ.INN(I,1).AND.IA2.EQ.INN(I,2)) II=1
   20 CONTINUE
      RETURN
      END

      SUBROUTINE REMOV1 (IR,ICO)
C THIS SUBROUTINE SOLVES THE SUBPROBLEM TABLEAU
      COMMON NOV,NOC,NOB,KAUX,KUSE,NOBZ,NOV1,TAB(12,40),V(12),
     1C(8,40),Z(8,40),X(8),ID(12),IB(40),IEQ(12),T(12,40),INN(300,3),
     2IAUX(200,3),FMT(12)
      COMMON/SUBPRO/ ATAB(8,30),CI(30),IC(8),IG(30),ZI(30),IE(30),NV
      DO 10 I=1,NV
      IF (I.EQ.ICO) GO TO 10
      ATAB(IR,I)=ATAB(IR,I)/ATAB(IR,ICO)
   10 CONTINUE
      ATAB(IR,ICO)=1.0
      DO 30 I=1,NOB
      IF (I.EQ.IR) GO TO 30
      DO 25 J=1,NV
      IF (J.EQ.ICO) GO TO 25
      ATAB(I,J)=ATAB(I,J)-ATAB(IR,J)*ATAB(I,ICO)
   25 CONTINUE
   30 CONTINUE
      DO 40 I=1,NOB
   40 IF (I.NE.IR) ATAB(I,ICO)=0.0
      I=IC(IR)
      IC(IR)=ICO
      IG(I)=0
      IG(ICO)=1
      DO 50 I=1,NV
      SUM=0.0
      DO 45 J=1,NOB
      K=IC(J)
   45 SUM=SUM+CI(K)*ATAB(J,I)
   50 ZI(I)=SUM-CI(I)
   99 RETURN
      END
      SUBROUTINE FORBAS (KKU)
C THIS SUBROUTINE SELECTS A BASIS THAT HAS BEEN STORED EARLIER
C AND IS CLOSER TO THE CURRENT BASIS THAN ANY OTHER BASIS STORED
C AND FORCES THAT BASIS INTO THE SOLUTION
      COMMON NOV,NOC,NOB,KAUX,KUSE,NOBZ,NOV1,TAB(12,40),V(12),
     1C(8,40),Z(8,40),X(8),ID(12),IB(40),IEQ(12),T(12,40),INN(300,3),
     2IAUX(200,3),FMT(12)
      COMMON/LOCAL/ IKA(12),IOUT(12),NOER,INA(12),IADD(200)
      DO 30 I=1,KAUX
      IADD(I)=1
      CALL TEST4 (I,II)
   30 IF (II.EQ.1) CALL DROP1 (I)
      IF (KAUX.EQ.0) GO TO 98
    5 KKU=0
      CALL CHOS1 (KU)
      IF (KU.EQ.0) GO TO 98
      IADD(KU)=0
   10 IKK=0
      DO 25 I1=1,NOER
      IK=0
      DO 20 I=1,NOER
      IF (IOUT(I).EQ.0) GO TO 20
      IF (IK.NE.0) GO TO 20
      IC=IOUT(I)
      IR=0
      DO 19 J=1,NOER
      IF (IR.NE.0) GO TO 19
      IF (INA(J).EQ.0) GO TO 19
      JJ=0
      J1=INA(J)
      AMIN=100000
      DO 18 K=1,NOC
      IF (TAB(K,J1).LT.0.0) GO TO 19
      IF (TAB(K,J1).GT.0.0) GO TO 16
      VV=0
      GO TO 17
   16 VV=V(K)/TAB(K,J1)
      IF (VV.GT.AMIN) GO TO 18
      IF (VV.EQ.AMIN.AND.ID(K).NE.IC) GO TO 18
   17 JJ=K
      AMIN=VV
   18 CONTINUE
      IF (ID(JJ).NE.IC) GO TO 19
      IR=J1
      INA(J)=0
      IK=1
      IROW=JJ
      ICOL=IR
   19 CONTINUE
      IF (IK.EQ.0) GO TO 20
      IOUT(I)=0
      CALL REMOVE (IROW,ICOL)
      ID1=ID(IROW)
      IB(ID1)=0
      IB(ICOL)=1
      ID(IROW)=ICOL
      IKK=IKK+1
      CALL OBJVAL
      CALL CHCOMP
   20 CONTINUE
   25 CONTINUE
      IF (IKK.EQ.NOER) GO TO 97
      WRITE (6,92)
   92 FORMAT (//5X,23HBASIS CANNOT BE ENTERED)
      GO TO 5
   97 CALL DROP1 (KU)
      GO TO 99
   98 KKU=1
   99 RETURN
      END

      SUBROUTINE CHOS1 (KU)
C THIS SUBROUTINE CHOOSES THE AUXILIARY BASIS THAT IS CLOSEST
C TO THE CURRENT BASIS
      COMMON NOV,NOC,NOB,KAUX,KUSE,NOBZ,NOV1,TAB(12,40),V(12),
     1C(8,40),Z(8,40),X(8),ID(12),IB(40),IEQ(12),T(12,40),INN(300,3),
     2IAUX(200,3),FMT(12)
      COMMON/LOCAL/ IKA(12),IOUT(12),NOER,INA(12),IADD(200)
      DIMENSION IOU(40),IN(40)
      DO 5 I=1,12
      IKA(I)=0
      IOUT(I)=0
    5 INA(I)=0
      MIN=9
      KU=0
      DO 100 II=1,KAUX
      II1=II-1
      I1=KAUX-II1
      I2=IAUX(I1,1)
      I3=IAUX(I1,2)
      IF (IADD(I1).EQ.0) GO TO 100
      DO 20 I=1,NOC
      IF (I-4) 10,10,15
   10 I4=I2/(10**(2*(4-I)))
      IKA(I)=I4
      I2=I2-I4*10**(2*(4-I))
      GO TO 20
   15 I4=I3/10**(2*(8-I))
      IKA(I)=I4
      I3=I3-I4*10**(2*(8-I))
   20 CONTINUE
      IO=0
      IN1=0
      DO 30 I=1,NOC
      IJ=0
      DO 25 J=1,NOC
      IF (ID(I).NE.IKA(J)) GO TO 25
      IJ=1
   25 CONTINUE
      IF (IJ.EQ.1) GO TO 30
      IO=IO+1
      IOU(IO)=ID(I)
   30 CONTINUE
      DO 40 I=1,NOC
      IJ=0
      DO 35 J=1,NOC
   35 IF (IKA(I).EQ.ID(J)) IJ=1
      IF (IJ.EQ.1) GO TO 40
      IN1=IN1+1
      IN(IN1)=IKA(I)
   40 CONTINUE
      IF (IO.EQ.0) GO TO 100
      IF (IO.GE.MIN) GO TO 100
      MIN=IO
      KU=I1
      DO 50 I=1,IO
      IOUT(I)=IOU(I)
   50 INA(I)=IN(I)
  100 CONTINUE
      IF (MIN.EQ.9) GO TO 99
      NOER=MIN
      IF (KU.EQ.0) GO TO 99
      KUSE=KUSE+1
      INN(KUSE,1)=IAUX(KU,1)
      INN(KUSE,2)=IAUX(KU,2)
      WRITE (6,90) IAUX(KU,1),IAUX(KU,2)
   90 FORMAT (5X,9HAUXILIARY,5X,2I8)
   99 RETURN
      END
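Before it can compare bases, CHOS1 decodes each stored pair back into its index list by repeated integer division, the inverse of the two-digit packing performed in TEST2 and TEST3. A Python sketch of that decoding step (0-based positions; the 4+4 split mirrors the I1/I2 layout in the listing):

```python
def decode_key(i1, i2, n):
    # peel n indices off the packed pair: positions 0-3 come from i1,
    # positions 4-7 from i2, two decimal digits per index
    out = []
    for pos in range(n):
        if pos < 4:
            p = 10 ** (2 * (3 - pos))
            out.append(i1 // p)
            i1 -= (i1 // p) * p
        else:
            p = 10 ** (2 * (7 - pos))
            out.append(i2 // p)
            i2 -= (i2 // p) * p
    return out
```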

      SUBROUTINE FEASBL (K)
      COMMON NOV,NOC,NOB,KAUX,KUSE,NOBZ,NOV1,TAB(12,40),V(12),
     1C(8,40),Z(8,40),X(8),ID(12),IB(40),IEQ(12),T(12,40),INN(300,3),
     2IAUX(200,3),FMT(12)
      COMMON/SUBPRO/ ATAB(8,30),CI(30),IC(8),IG(30),ZI(30),IE(30),NV
      COMMON/NDOMP/ NONDP
      K=0
      DO 5 I=1,8
      IC(I)=0
      DO 5 J=1,30
      CI(J)=0.0
      IG(J)=0
      ZI(J)=0.0
      IE(J)=0
    5 ATAB(I,J)=0.0
      NC=NOB
      NV=0
      DO 20 I=1,NOV
      IF (IB(I).EQ.1) GO TO 20
      NV=NV+1
      ZI(NV)=-Z(NOBZ,I)
      IE(NV)=I
      DO 10 J=1,NOB
   10 ATAB(J,NV)=-Z(J,I)
   20 CONTINUE
      NV1=NV
      DO 30 I=1,NOB
      NV=NV+1
      ATAB(I,NV)=1.0
      IC(I)=NV
      IG(NV)=1
   30 CI(NV)=1.0
   31 II=0
      AMAX=0.0
      DO 35 I=1,NV
      IF (ZI(I).GE.AMAX) GO TO 35
      II=I
      AMAX=ZI(I)
   35 CONTINUE
      IF (II.NE.0) GO TO 36
C THE BASIS IS NON-DOMINATED IF II=0
      GO TO 70
   36 CONTINUE
      III=0
      AMIN=.1E10
      DO 50 J=1,NOB
      IF (ATAB(J,II).LE.0.0.OR.ATAB(J,II).GE.AMIN) GO TO 50
      III=J
      AMIN=ATAB(J,II)
   50 CONTINUE
C THE BASIS IS DOMINATED IF III=0
      IF (III.EQ.0) GO TO 51
      CALL REMOV1 (III,II)
      GO TO 31
   51 CONTINUE
      DO 60 I=1,NOC
      IF (TAB(I,II).LE.0.0) GO TO 60
      III=-1
   60 CONTINUE
      IF (III) 61,62,62
   61 WRITE (6,901)
  901 FORMAT (5X,24H SUBPROBLEM IS UNBOUNDED//)
      GO TO 99
   62 WRITE (6,902)
  902 FORMAT (//5X,30H ORIGINAL PROBLEM IS UNBOUNDED//)
      GO TO 99
   70 NONDP=NONDP+1
      CALL PRINIT
      K=1
      WRITE (9,911) (ID(J),J=1,NOC)
  911 FORMAT (8I2)
      WRITE (9,912) (V(J),J=1,NOC)
  912 FORMAT (8E15.8)
      WRITE (9,912) (X(J),J=1,NOBZ)
   99 RETURN
      END
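FEASBL decides nondominance by running a simplex pass on a small subproblem built from the criterion rows, terminating when no column with a negative ZI remains. The underlying question — does some nonbasic column improve at least one objective without worsening any? — can be stated directly in Python. This is a brute-force restatement of the criterion only, not of the LP subproblem the listing actually solves; signs follow the listing's minimization convention, in which a positive criterion-row entry means improvement:

```python
def improving_column_exists(z, nonbasic):
    # z[i][j]: criterion-row entry of objective i at column j; a column
    # whose entries are all >= 0 with at least one > 0 would decrease
    # some objective without increasing any, so the basis is dominated
    for j in nonbasic:
        col = [zi[j] for zi in z]
        if all(e >= 0 for e in col) and any(e > 0 for e in col):
            return True
    return False
```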