
Lecture Notes in Economics and Mathematical Systems

(Vol. 1-15: Lecture Notes in Operations Research and Mathematical Economics, Vol. 16-59: Lecture Notes in Operations Research and Mathematical Systems)

Vol. 1: H. Bühlmann, H. Loeffel, E. Nievergelt, Einführung in die Theorie und Praxis der Entscheidung bei Unsicherheit. 2. Auflage, IV, 125 Seiten. 1969.

Vol. 2: U. N. Bhat, A Study of the Queueing Systems M/G/1 and GI/M/1. VIII, 78 pages. 1968.

Vol. 3: A. Strauss, An Introduction to Optimal Control Theory. Out of print.

Vol. 4: Branch and Bound: Eine Einführung. 2., geänderte Auflage. Herausgegeben von F. Weinberg. VII, 174 Seiten. 1973.

Vol. 5: L. P. Hyvärinen, Information Theory for Systems Engineers. VII, 205 pages. 1968.

Vol. 6: H. P. Künzi, O. Müller, E. Nievergelt, Einführungskursus in die dynamische Programmierung. IV, 103 Seiten. 1968.

Vol. 7: W. Popp, Einführung in die Theorie der Lagerhaltung. VI, 173 Seiten. 1968.

Vol. 8: J. Teghem, J. Loris-Teghem, J. P. Lambotte, Modèles d'Attente M/G/1 et GI/M/1 à Arrivées et Services en Groupes. III, 53 pages. 1969.

Vol. 9: E. Schultze, Einführung in die mathematischen Grundlagen der Informationstheorie. VI, 116 Seiten. 1969.

Vol. 10: D. Hochstädter, Stochastische Lagerhaltungsmodelle. VI, 269 Seiten. 1969.

Vol. 11/12: Mathematical Systems Theory and Economics. Edited by H. W. Kuhn and G. P. Szegö. VIII, III, 486 pages. 1969.

Vol. 13: Heuristische Planungsmethoden. Herausgegeben von F. Weinberg und C. A. Zehnder. II, 93 Seiten. 1969.

Vol. 14: Computing Methods in Optimization Problems. V, 191 pages. 1969.

Vol. 15: Economic Models, Estimation and Risk Programming: Essays in Honor of Gerhard Tintner. Edited by K. A. Fox, G. V. L. Narasimham and J. K. Sengupta. VIII, 461 pages. 1969.

Vol. 16: H. P. Künzi und W. Oettli, Nichtlineare Optimierung: Neuere Verfahren, Bibliographie. IV, 180 Seiten. 1969.

Vol. 17: H. Bauer und K. Neumann, Berechnung optimaler Steuerungen, Maximumprinzip und dynamische Optimierung. VIII, 188 Seiten. 1969.

Vol. 18: M. Wolff, Optimale Instandhaltungspolitiken in einfachen Systemen. V, 143 Seiten. 1970.

Vol. 19: L. P. Hyvärinen, Mathematical Modeling for Industrial Processes. VI, 122 pages. 1970.

Vol. 20: G. Uebe, Optimale Fahrpläne. IX, 161 Seiten. 1970.

Vol. 21: Th. M. Liebling, Graphentheorie in Planungs- und Tourenproblemen am Beispiel des städtischen Straßendienstes. IX, 118 Seiten. 1970.

Vol. 22: W. Eichhorn, Theorie der homogenen Produktionsfunktion. VIII, 119 Seiten. 1970.

Vol. 23: A. Ghosal, Some Aspects of Queueing and Storage Systems. IV, 93 pages. 1970.

Vol. 24: G. Feichtinger, Lernprozesse in stochastischen Automaten. V, 66 Seiten. 1970.

Vol. 25: R. Henn und O. Opitz, Konsum- und Produktionstheorie I. II, 124 Seiten. 1970.

Vol. 26: D. Hochstädter und G. Uebe, Ökonometrische Methoden. XII, 250 Seiten. 1970.

Vol. 27: I. H. Mufti, Computational Methods in Optimal Control Problems. IV, 45 pages. 1970.

Vol. 28: Theoretical Approaches to Non-Numerical Problem Solving. Edited by R. B. Banerji and M. D. Mesarovic. VI, 466 pages. 1970.

Vol. 29: S. E. Elmaghraby, Some Network Models in Management Science. III, 176 pages. 1970.

Vol. 30: H. Noltemeier, Sensitivitätsanalyse bei diskreten linearen Optimierungsproblemen. VI, 102 Seiten. 1970.

Vol. 31: M. Kühlmeyer, Die nichtzentrale t-Verteilung. II, 106 Seiten. 1970.

Vol. 32: F. Bartholomes und G. Hotz, Homomorphismen und Reduktionen linearer Sprachen. XII, 143 Seiten. 1970. DM 18,-

Vol. 33: K. Hinderer, Foundations of Non-stationary Dynamic Programming with Discrete Time Parameter. VI, 160 pages. 1970.

Vol. 34: H. Störmer, Semi-Markoff-Prozesse mit endlich vielen Zuständen. Theorie und Anwendungen. VII, 128 Seiten. 1970.

Vol. 35: F. Ferschl, Markovketten. VI, 168 Seiten. 1970.

Vol. 36: M. J. P. Magill, On a General Economic Theory of Motion. VI, 95 pages. 1970.

Vol. 37: H. Müller-Merbach, On Round-Off Errors in Linear Programming. V, 48 pages. 1970.

Vol. 38: Statistische Methoden I. Herausgegeben von E. Walter. VIII, 338 Seiten. 1970.

Vol. 39: Statistische Methoden II. Herausgegeben von E. Walter. IV, 157 Seiten. 1970.

Vol. 40: H. Drygas, The Coordinate-Free Approach to Gauss-Markov Estimation. VIII, 113 pages. 1970.

Vol. 41: U. Ueing, Zwei Lösungsmethoden für nichtkonvexe Programmierungsprobleme. IV, 92 Seiten. 1971.

Vol. 42: A. V. Balakrishnan, Introduction to Optimization Theory in a Hilbert Space. IV, 153 pages. 1971.

Vol. 43: J. A. Morales, Bayesian Full Information Structural Analysis. VI, 154 pages. 1971.

Vol. 44: G. Feichtinger, Stochastische Modelle demographischer Prozesse. IX, 404 Seiten. 1971.

Vol. 45: K. Wendler, Hauptaustauschschritte (Principal Pivoting). II, 64 Seiten. 1971.

Vol. 46: C. Boucher, Leçons sur la théorie des automates mathématiques. VIII, 193 pages. 1971.

Vol. 47: H. A. Nour Eldin, Optimierung linearer Regelsysteme mit quadratischer Zielfunktion. VIII, 163 Seiten. 1971.

Vol. 48: M. Constam, FORTRAN für Anfänger. 2. Auflage. VI, 148 Seiten. 1973.

Vol. 49: Ch. Schneeweiß, Regelungstechnische stochastische Optimierungsverfahren. XI, 254 Seiten. 1971.

Vol. 50: Unternehmensforschung Heute - Übersichtsvorträge der Züricher Tagung von SVOR und DGU, September 1970. Herausgegeben von M. Beckmann. IV, 133 Seiten. 1971.

Vol. 51: Digitale Simulation. Herausgegeben von K. Bauknecht und W. Nef. IV, 207 Seiten. 1971.

Vol. 52: Invariant Imbedding. Proceedings 1970. Edited by R. E. Bellman and E. D. Denman. IV, 148 pages. 1971.

Vol. 53: J. Rosenmüller, Kooperative Spiele und Märkte. III, 152 Seiten. 1971.

Vol. 54: C. C. von Weizsäcker, Steady State Capital Theory. III, 102 pages. 1971.

Vol. 55: P. A. V. B. Swamy, Statistical Inference in Random Coefficient Regression Models. VIII, 209 pages. 1971.

Vol. 56: Mohamed A. El-Hodiri, Constrained Extrema. Introduction to the Differentiable Case with Economic Applications. III, 130 pages. 1971.

Vol. 57: E. Freund, Zeitvariable Mehrgrößensysteme. VIII, 160 Seiten. 1971.

Vol. 58: P. B. Hagelschuer, Theorie der linearen Dekomposition. VII, 191 Seiten. 1971.

continuation on page 571

Lecture Notes
in Economics and
Mathematical Systems

Managing Editors: M. Beckmann and H. P. Künzi


Multiple Criteria Problem Solving

Proceedings of a Conference
Buffalo, N.Y. (U.S.A.), August 22-26,1977

Edited by S. Zionts

Springer-Verlag Berlin Heidelberg New York 1978
Editorial Board
H. Albach A. V. Balakrishnan M. Beckmann (Managing Editor)
P. Dhrymes J. Green W. Hildenbrand W. Krelle
H. P. Künzi (Managing Editor) K. Ritter R. Sato H. Schelbert
P. Schönfeld

Managing Editors
Prof. Dr. M. Beckmann
Brown University
Providence, RI 02912/USA

Prof. Dr. H. P. Künzi
Universität Zürich
8090 Zürich/Schweiz

Professor Stanley Zionts
State University of New York at Buffalo
School of Management
Buffalo, N. Y. 14214/USA

Sponsored by:
The European Institute for Advanced
Studies in Management, Brussels
The Office of Naval Research, and
The School of Management
State University of New York at Buffalo

AMS Subject Classifications (1970): 90A05, 90A10, 90A15, 90B99


ISBN-13: 978-3-540-08661-1 e-ISBN-13: 978-3-642-46368-6

DOI: 10.1007/978-3-642-46368-6

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically those of translation, reprinting, re-use of illustrations, broadcasting, reproduction by photocopying machine or similar means, and storage in data banks. Under § 54 of the German Copyright Law where copies are made for other than private use, a fee is payable to the publisher, the amount of the fee to be determined by agreement with the publisher.
© by Springer-Verlag Berlin Heidelberg 1978

Preface

The objective of this conference was to foster a healthy exchange
of ideas and experience in the domain of multiple criteria problem
solving. This conference was an outgrowth of an earlier conference
I organized with Herve Thiriez at CESA, Jouy-en-Josas, France in 1975
during my stay at the European Institute in Brussels. When I re-
joined the State University of New York at Buffalo that year, I be-
gan to search for potential sponsors for this conference. Approxi-
mately one year later when the prospects began to look promising, I
contacted several individuals to act as an informal coordinating
committee for the conference. I wanted to avoid biasing the con-
ference completely to my way of thinking! The members of this
committee were Jim Dyer, Peter Fishburn, Ralph Keeney, Bernard Roy
(Universite de Paris IX Dauphine who was unable to participate in
the conference), and Milan Zeleny. Though the committee did not
meet, per se, their inputs regarding format, possible participants,
number of participants, length of the conference, and so on were of
great value to me in planning and organizing the conference. I wish
to acknowledge the contributions of this group.
We were most fortunate in obtaining the financial support of
the European Institute for Advanced Studies in Management, Brussels
·(one of the sponsors of the Jouy-en-Josas conference), the Office
of Naval Research, and the State University of New York at Buffalo.
In addition we were fortunate to have the participation of Neal
Glassman and Randy Simpson, two individuals from the Office of Naval
Research, at the conference. Their presence enhanced the conference.
In addition to expressing my appreciation to all the participants
for their enthusiastic activity in and support of the conference,
I especially want to thank the speakers, session chairmen, and dis-
cussants. I also thank Mr. Nelson K. Upton, Administrator, Continu-
ing Education for Management at the School of Management of the State
University of New York at Buffalo, and Mrs. Marcia Livent and Mr.
Dennis Dracup of his office. They very capably handled numerous
details of the conference organization. Mrs. Marie Huber, my
secretary, handled in a cheerful and efficient manner many of the
problems and frustrations that arose. In addition to the help of
some of the participants who provided local transportation for
conference participants, I thank my wife, Terri, for pitching in
with whatever had to be done and helping to make the conference
run so smoothly.
Stanley Zionts
Buffalo, New York
November, 1977
Conclusion to the Conference
by Tomas Gal
The chairman of the last session, Professor E. Johnsen, has asked
me to say a few words to close the conference. I am honored and will
try to do my best.
"Nothing is perfect" said some of the great ancient Greeks. So
it is with this conference. At the very beginning there arose some
difficulties with the Ellicott complex. The first morning the chair-
man and a group of participants were on the second floor of a building
trying staircases, locked doors, elevators and all possible (3-dimen-
sional) directions in order to find the conference room. Since the
problem of getting lost in the Ellicott complex remained, somebody
proposed the following: Let a beginning student at the university find
his way through the complex. Since this will take at least 4 years, he
may pass his exams by showing the way through the complex to partici-
pants of various conferences held at Ellicott and thereby obtain his
degree. A participant trying to find his way and asking somebody
where he is, could get the answer as it was published in a cartoon in
"The New Yorker" September 5, 1977 (p. 24). (Nowhere!)
Some of the participants were afraid that there would be problems
in finding something to do during the evenings. However, it turned
out that the program schedule was sufficiently tight that many
participants were falling asleep on the way to their rooms, and those
with a higher energy level, always found an opportunity to go some-
where to drink and socialize.
Somehow or other most of the participants (including the session
chairmen) showed up red-eyed and a little late for the first sessions
in the morning!
On the other hand, the sessions were very interesting and the
participants actively participated and enjoyed the conference. Except
for the slow starts in the morning, attendance at all sessions,
including Tuesday evening, was high.
The participants were a group of well known and highly
talented professionals who have the common goal of studying Multiple
Criterion Problems (MCP) with others. Some of them approach MCP
from the viewpoint of mathematical programming, others from the
utility theory viewpoint or decision theory viewpoint, and still
others from the psychological viewpoint. Last but not least, there
are a few courageous souls whom we all admire (as Jim Dyer said) who
deal with real applications in various fields. I dare say that we
have all learned a lot, not only in our own field of interest, but -
and this is encouraging - we became acquainted with new viewpoints and
new approaches. This has been only one of many advantages of the conference.
As I mentioned before and as we all know, the program was quite
full. In spite of this, the director of the conference, Stan Zionts,
was able to arrange mind-relaxing activities such as an outing to
Niagara Falls. He solved the problems of transportation as well as
other problems which were not his responsibility.
All participants were highly satisfied with the scientific gain
of the conference and also - as it turned out - with its organization
(Note that Stan does not have an Ellicott Complex!).
Some of us had already met in Jouy-en-Josas and now we have be-
come acquainted with other interesting and pleasant people. This
is another of the positive results of the conference!
Hence, the conference was a success, not only for all participants
and speakers, but also for Professor Stanley Zionts who surely worked
hard and spent a lot of time in preparation. What I would like to
say in a highly sophisticated English (which I am unfortunately not
able to do) is to express our gratitude in the name of all
participants. We very much appreciate the work and time Stan in-
vested in this successful and pleasant conference.
Contents*

"Interpolation Independence"
"A Multiple Criteria Decision Model for Repeated Choice Situations"
"Evaluating Joint Life-Saving Activities under Uncertainty"
"The Interactive Surrogate Worth Trade-off Method for Multiobjective Decision-Making"
"Cardinal Preference Aggregation Rules for the Case of Certainty"
"A Simple Multi-Attribute Utility Procedure for Evaluation"
"Public Investment Decision Making with Multiple Criteria: An Example of University Planning"
"Interdependent Criteria in Utility Analysis"
"A Survey of Multiattribute/Multicriterion Evaluation Theories"
"An Overview of Recent Results in Multiple Criteria Problem Solving as Developed in Aachen, Germany"
"Bicriterion Cluster Analysis as an Exploration Tool"
"Duality in Multiple Objective Linear Programming"
"Multiobjective Management of the Small Firm"
"Applying Multiobjective Decision Analysis to Resource Allocation Planning Problems"
"A Utility Model for Product Positioning"
"Social Decision Analysis Using Multiattribute Utility Theory"
"Ranking with Multiple Objectives"
"Interactive Integer Goal Programming: Methods and Application"
"A Theory of Naive Weights"
"Multicriteria Decision Aid: Two Applications in Education Management"
"Multiattribute Risk/Benefit Analysis of Citizen Attitudes Toward Societal Issues Involving Technology"
"Condensing Multiple Criteria"
"Vector Maximum Gradient Cone Contraction Techniques"
"An Approach to Solving Multi-Person Multiple-Criteria Decision-Making Problems"
"Multiple Criteria Dominance Models: An Empirical Study of Investment Preferences"
"Toward Second Order Game Problems: Decision Dynamics in Gaming Phenomena"
"Multidimensional Measure of Risk: Prospect Rating Vector"
"A Time Sharing Computer Programming Application of a Multiple Criteria Decision Method to Energy Planning: A Progress Report"

*The papers are alphabetically ordered according to the names of the author or first co-author. A rather comprehensive bibliography of Multiple Criteria Problem Solving may be found at the end of the article by Peter Fishburn.
Interpolation Independence

David E. Bell
Harvard Business School
Boston, Mass. 02163

The purpose of the paper is to describe the decompositions of
multiattribute cardinal utility functions which result from assumptions
based on "interpolation independence." The decompositions involve only
single attribute marginal utility functions, together with constants,
and are stronger than other decompositions of this type. Proofs are
not given but may be obtained from the references.

In order to assess multiattribute cardinal utility functions with a
decision maker who has limited time and patience it is essential that
some good approximation be found that involves only a modest amount of
questioning of the decision maker and yet makes him feel that the re-
sulting approximation is reliable. Keeney, in particular, has found
that utility independence and preferential independence are properties
which can often be shown to hold between attributes for a decision
maker, or that they hold sufficiently well to make little difference.
Sensitivity analysis can often clear up any doubts about the decision
which is ultimately recommended. But sometimes such properties will
not hold, and perhaps the analyst or the decision maker may not know
when "close" is close enough.
There are a growing number of decompositions which reduce multi-
attribute utility functions to functions of lower dimension marginal
utility functions [3-10], and this paper will exhibit another. The
assumptions are chosen with a view to being transparent to the decision
maker and hopefully more easily acceptable. The assessment effort is
little more than required by existing techniques and involves only
single attribute utility functions and constants.

Section one, the two-attribute case, is based upon Section one of

[1] and Section two, the general case, is taken from [2]. As the
proofs are somewhat long only very brief sketches are given here.

1. The Two-Attribute Case

Consider a utility function u(x, y) over a set X×Y. An assumption that will be made throughout this paper is that there are values x⁰, x* of X and y⁰, y* of Y such that (x*, y) is a strictly preferred outcome to (x⁰, y) for all y in Y and that (x, y*) is strictly preferred to (x, y⁰) for all x in X. Except in situations like generalized utility independence (Fishburn and Keeney [7]) this property should be present in the majority of cases. The idea of interpolation independence will seem more plausible, however, if x*, y* are each the most preferred values of X and Y and x⁰, y⁰ the least preferred, at least of those likely to be relevant to the problem under study.
With this mild assumption the following identity provides a basis from which to consider assessment.
u(x, y) = u(x, y⁰) + (u(x, y*) − u(x, y⁰)) · [(u(x, y) − u(x, y⁰)) / (u(x, y*) − u(x, y⁰))]   (1.1)

The square brackets enclose the conditional utility function u(y|x), which is merely u(x, y) rescaled so that u(y⁰|x) = 0 and u(y*|x) = 1 for all x. The condition, Y utility independent of X, is thus expressible as u(y|x) = u(y|x⁰) = u(y|x*); in other words, the conditional utility function of Y is independent of x. The procedure for testing the presence of utility independence is essentially to select two values of X, x⁰ and x* for example, assessing u(y|x⁰) and u(y|x*) and comparing. Even if they are not the same, (1.1) shows that by assessing u(x, y⁰), u(x, y*) and u(y|x) the function u(x, y) is determined.
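Identity (1.1) is purely a rescaling and holds for any utility surface with distinct boundary curves. A minimal numeric sketch in Python; the utility function u below is invented for illustration and is not from the paper:

```python
# Sketch of identity (1.1) with a made-up two-attribute utility surface.

def u(x, y):
    # an arbitrary smooth utility, increasing in both attributes (illustrative)
    return 0.3 * x + 0.5 * y + 0.2 * x * y

x0, xs = 0.0, 1.0   # x0 = least preferred, xs = most preferred ("x*") value of X
y0, ys = 0.0, 1.0   # likewise for Y

def u_cond(y, x):
    # conditional utility u(y|x): u(x, .) rescaled so u(y0|x) = 0, u(y*|x) = 1
    return (u(x, y) - u(x, y0)) / (u(x, ys) - u(x, y0))

def u_via_identity(x, y):
    # right-hand side of (1.1)
    return u(x, y0) + (u(x, ys) - u(x, y0)) * u_cond(y, x)

# the identity reproduces u(x, y) exactly at any point
x, y = 0.4, 0.7
assert abs(u(x, y) - u_via_identity(x, y)) < 1e-12
```

The check passes for every (x, y), since the bracketed ratio in (1.1) cancels against the scale factor in front of it.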
The function u(ylx) is still difficult to assess. In theory it re-
quires the assessment of one single attribute function u(ylx) for each
possible value x of X. However, if the preferences are reasonably
continuous it can be expected that values of x which are close will

lead to conditional functions that are close. Hence it may make sense
to choose a selection of x values, x¹, x², ..., assess u(y|xⁱ) for each i, and then define the remaining functions by interpolation from these. For example, if xⁱ < x < xⁱ⁺¹ the decision maker may feel that

u(y|x) = ((x − xⁱ)/(xⁱ⁺¹ − xⁱ)) · u(y|xⁱ⁺¹) + ((xⁱ⁺¹ − x)/(xⁱ⁺¹ − xⁱ)) · u(y|xⁱ)

is accurate enough.
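The grid-interpolation step can be sketched as follows; the grid values and the two assessed conditional utility functions are invented for illustration:

```python
# Linear blending of conditional utility functions assessed on a grid
# x^1 < x^2 < ...; all concrete functions below are hypothetical.

def interpolate_conditional(x, grid, conditionals):
    """grid: sorted x^i values; conditionals: dict x^i -> function u(y|x^i)."""
    for xi, xj in zip(grid, grid[1:]):
        if xi <= x <= xj:
            w = (x - xi) / (xj - xi)           # weight on the upper grid point
            fi, fj = conditionals[xi], conditionals[xj]
            return lambda y: w * fj(y) + (1 - w) * fi(y)
    raise ValueError("x outside assessed range")

# two assessed (made-up) conditional utility functions on y in [0, 1]
cond = {0.0: lambda y: y ** 2, 1.0: lambda y: y ** 0.5}
u_y_given_half = interpolate_conditional(0.5, [0.0, 1.0], cond)

# the blend stays normalized: u(y0|x) = 0 and u(y*|x) = 1
assert u_y_given_half(0.0) == 0.0 and u_y_given_half(1.0) == 1.0
```

Because both assessed conditionals are already scaled to 0 at y⁰ and 1 at y*, any convex blend of them keeps that normalization, which is what makes the interpolated function a legitimate conditional utility.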
Suppose that, having assessed u(y|x⁰) and u(y|x*) and discovered that they are not identical, the decision maker is prepared to accept the idea of defining u(y|x) in general by interpolation as follows:

u(y|x) = λ(x)·u(y|x*) + (1 − λ(x))·u(y|x⁰)   (1.2)

then I will say that Y is interpolation independent of X. The special case λ(x) = constant is utility independence; if X is representing many attributes then (1.2) is an example of Kirkwood's parametric independence [9]; if X and Y represent a partition of a time stream, x = (x₁, x₂, ..., xₜ), y = (xₜ₊₁, xₜ₊₂, ..., x_T), then Meyer [10] would call λ(x) a backward state descriptor.
Y interpolation independent of X (Y II X) does not necessarily imply that X II Y. However, for a restricted set of functions λ(x) it is the case.

Result 1: If Y II X, then X II Y also if and only if for some value k,

λ(x) = ([k − u(x*, y⁰)]·u(x|y⁰) + (1 − k)·u(x|y*)) / (u(x, y*) − u(x, y⁰))

where u(x|y) is the conditional utility function of X on Y.
Since a decision maker who finds (1.2) acceptable will likely also approve of X II Y, the corollary of Result 1 is:

Result 2: X and Y are mutually interpolation independent (X MII Y) if and only if

u(x, y) = a₀ + a₁·u(x|y⁰) + a₂·u(y|x⁰) − k·u(x|y⁰)·u(y|x⁰)
          + (k − a₁)·u(x|y⁰)·u(y|x*) + (k − a₂)·u(x|y*)·u(y|x⁰)   (1.3)
          + (a₁₂ − k)·u(x|y*)·u(y|x*)

where a₀ = u(x⁰, y⁰), a₁ = u(x*, y⁰), a₂ = u(x⁰, y*), a₁₂ = u(x*, y*), all of which, including k, are independent constants.
The proofs derive naturally from comparison of (1.1) and (1.2) with their symmetries. Note that special cases of (1.3) include the bilateral independence condition of Fishburn [5] if k = a₁a₂/(a₁₂ − a₁ − a₂), and also his recent generalized multiplicative form [6] if k = a₁₂. One-way utility independence, say X UI Y, is a special case if u(x|y⁰) = u(x|y*), and mutual utility independence if also u(y|x⁰) = u(y|x*). The additive form rather than the multiplicative form [8] results if a₁ + a₂ = a₀ + a₁₂.
The constants a₀ and a₁₂ may be taken to be 0 and 1 respectively, and a₁ and a₂ may be assessed directly. The constant k must be calculated indirectly by assessing a value u(x, y) and then substituting it in (1.3) to find k. It is only indeterminable if either X UI Y or Y UI X, in which case terms in k disappear from (1.3).
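The indirect calibration of k can be sketched numerically: since the form (1.3) is linear in k, one directly assessed utility value pins it down. The helper names and every constant below are hypothetical, chosen only to demonstrate the round trip:

```python
# Sketch of recovering k from one assessed utility value via (1.3).
# p0 = u(x|y0), p1 = u(x|y*), q0 = u(y|x0), q1 = u(y|x*); a0 = 0, a12 = 1.

def u_from_decomposition(p0, p1, q0, q1, a1, a2, k, a0=0.0, a12=1.0):
    # evaluate the mutually-interpolation-independent form (1.3)
    return (a0 + a1 * p0 + a2 * q0 - k * p0 * q0
            + (k - a1) * p0 * q1 + (k - a2) * p1 * q0
            + (a12 - k) * p1 * q1)

def k_from_assessment(u_xy, p0, p1, q0, q1, a1, a2, a0=0.0, a12=1.0):
    # (1.3) is linear in k: u_xy = base + k * coef, so solve for k
    base = a0 + a1 * p0 + a2 * q0 - a1 * p0 * q1 - a2 * p1 * q0 + a12 * p1 * q1
    coef = -p0 * q0 + p0 * q1 + p1 * q0 - p1 * q1   # coefficient of k
    if abs(coef) < 1e-12:
        # coef factors as -(p1 - p0)(q1 - q0): it vanishes exactly when
        # X UI Y or Y UI X, the indeterminate case noted in the text
        raise ValueError("k indeterminate: X UI Y or Y UI X")
    return (u_xy - base) / coef

# round trip with invented conditional-utility values and constants
u_assessed = u_from_decomposition(0.3, 0.6, 0.4, 0.7, a1=0.2, a2=0.5, k=0.8)
assert abs(k_from_assessment(u_assessed, 0.3, 0.6, 0.4, 0.7, 0.2, 0.5) - 0.8) < 1e-9
```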
Thus, interpolation independence seems to provide a straightforward
procedure for assessment. The assumptions are easy to understand and,
what may be of more importance, are of a type that a numerate decision
maker can see make sense. All the refinement of other decompositions
is retained, and indeed, strictly improved, without involving any
greater degree of assessment difficulty.
2. The Multiattribute Case
There are several ways to generalize the two-attribute case to
higher dimensions. The most important property to retain is that of
requiring at most two single attribute conditional utility functions

per attribute. If u(x₁, x₂, ..., xₙ) is the required utility function over a product space of attributes X₁ × X₂ × ⋯ × Xₙ, let Yᵢ = X₁ × X₂ × ⋯ × Xᵢ₋₁ × Xᵢ₊₁ × ⋯ × Xₙ; then the conditional functions u(xᵢ|yᵢ⁰) and u(xᵢ|yᵢ*) are the natural generalizations. Let aᵢ = u(xᵢ*, yᵢ⁰) and tᵢ = u(xᵢ⁰, yᵢ*), and define the functions

fᵢ* = 1 + [(1 − tᵢ)(aᵢ − θ)·u(xᵢ|yᵢ*) − aᵢ(1 − θ)·u(xᵢ|yᵢ⁰)] / [aᵢtᵢ + θ(1 − aᵢ − tᵢ)]
                                                                             (2.1)
fᵢ⁰ = [θ(1 − tᵢ)·u(xᵢ|yᵢ*) + aᵢ(tᵢ − θ)·u(xᵢ|yᵢ⁰)] / [aᵢtᵢ + θ(1 − aᵢ − tᵢ)]

where θ is a free constant to be determined, and u(x) is scaled so that u(x⁰) = 0 and u(x*) = 1.
Result 3: For any N ≤ n, if Xᵢ is mutually interpolation independent of Yᵢ, i = 1, 2, ..., N, then u(x₁, x₂, ..., xₙ) may be written as a sum of products of the functions f₁, ..., f_N of (2.1), where the sum is taken over all 2^N combinations of superfixes for the first N attributes.
The case N = n is likely to be of most use. The result may be rewritten

u(x₁, x₂, ..., xₙ) = a₁f₁*f₂⁰⋯fₙ⁰ + a₂f₁⁰f₂*f₃⁰⋯fₙ⁰ + ⋯ + tₙf₁*⋯fₙ₋₁*fₙ⁰ + f₁*⋯fₙ*
                     + θ − θ(f₁⁰ + f₁*)(f₂⁰ + f₂*)⋯(fₙ⁰ + fₙ*).   (2.2)

To assess this form requires the two conditional utility functions per attribute together with 2ⁿ − 1 constants, of which only θ is obtained indirectly. For n > 5 this may be too much to expect of a decision maker, so some further assumptions regarding the constants become desirable. To assume that all subsets of the attributes are interpolation independent of their complement is a natural extension, and these further assumptions do serve to reduce the number of independent constants in (2.2), though the exact consequences have not been calculated to date.
A proof of Result 3 is by induction on n. The cases n = 2, N = 1 and n = 2, N = 2 can be derived directly from Results 1 and 2 respectively, where k is replaced by a₁a₂(1 − θ)/(a₁a₂ + θ(1 − a₁ − a₂)). When N ≤ n − 1 the case n + 1, N may be deduced directly from the case n, N by considering Xₙ and Xₙ₊₁ as a single attribute. The case n + 1, N = n may be derived by using the n + 1, N = n − 1 case twice on different combinations of the attributes and comparing the resulting equations. The final case N = n + 1 comes from examining the equations from applying the case n + 1, N = n on different combinations of the attributes. It is straightforward to show that (2.1) actually satisfies the conditions Xᵢ MII Yᵢ.

References

[1] Bell, D. E., "Conditional Utility Functions," Cambridge University Engineering Department Working Paper, June 1977.
[2] ----------, "Multiattribute Utility Functions: Decompositions Using Interpolation Independence," Manuscript, August 1977.
[3] ----------, "A Utility Function for Time Streams Having Interperiod Dependencies," Operations Research, 25, 448-458, 1977.
[4] Farquhar, P. H., "A Fractional Hypercube Decomposition Theorem for Multiattribute Utility Functions," Operations Research, 23, 941-967, 1975.
[5] Fishburn, P. C., "Bernoullian Utilities for Multiple Factor Situations," in Multiple Criteria Decision Making, J. L. Cochrane and M. Zeleny (Eds.), University of South Carolina Press, Columbia, S.C., 47-61, 1973.
[6] ----------, "Approximations of Two-Attribute Utility Functions," Mathematics of Operations Research, 2, 30-44, 1977.
[7] Fishburn, P. C., and R. L. Keeney, "Generalized Utility Independence and Some Implications," Operations Research, 928-940.
[8] Keeney, R. L., "Multiplicative Utility Functions," Operations Research, 22, 22-34, 1974.
[9] Kirkwood, C. W., "Parametrically Dependent Preferences for Multiattributed Consequences," Operations Research, 24, 92-103.
[10] Meyer, R. F., "State Dependent Time Preference," in Conflicting Objectives, D. E. Bell, R. L. Keeney, H. Raiffa (Eds.), to be published by John Wiley & Sons, Ltd., London, 1977.
A Multiple Criteria Decision Model for Repeated Choice Situations

J. M. Blin*
J. A. Dodson, Jr.*

*Northwestern University
The role of multiple attributes in consumer choice has received
much attention from marketers. This literature, however, is essenti-
ally deterministic. Yet, in regard to predictive accuracy, these mod-
els have not performed as well as expected. Bass, Pessemier, and Leh-
mann (1972), in particular, have stressed the extent to which brand
switching is observed in individual behavior. Bass (1974) and Herni-
ter (1972) have proposed a model of stochastic preference to account
for brand-switching behavior. In this paper, we relate these contri-
butions to the multiattributed consumer choice models. A general frame-
work of analysis is proposed to model multiattributed consumer prefer-
ence. It is then shown that the origin of the brand-switching phenom-
enon is to be found in the interaction of the multiplicity of evalua-
tive dimensions for the choice alternatives and the consumer uncertain-
ty over salient attributes and alternative performance on these attri-
butes. Finally, the applicability of this methodology to other exis-
ting multiattribute choice models is briefly discussed.
Psychologists, marketers, political scientists, and economists
have for a long time recognized the inherent multidimensionality of a
consumption good or a social choice alternative. The multiplicity of
attributes has come to be considered as characteristic of most choice
situations. But recognition of the many facets of a choice object
raises some problems in trying to model the individual choice process
and in trying to predict individual choice behavior. These problems
have been most clearly identified in the area of consumer choice. How
does a consumer cope with scale heterogeneity when comparing alterna-
tive brands or choice alternatives? How does he aggregate these heter-
ogeneous criteria to come up with a preference ordering?
Many models have been suggested to describe this process. By
far the one which has received the most attention is the linear-compen-
satory model, e.g., Fishbein (1967). Application of this model to the
prediction of brand choice would suggest that the rank order of a

consumer's preference for brands should be predicted by the rank order

of the consumer's relative attitudes toward the brands. And a deter-
ministic application of the model to the prediction of actual choice
behavior would suggest that the consumer would always choose his most
preferred brand. Yet, it has been demonstrated (Bass, Pessemier, and
Lehmann, 1972) that there is a gap between the model's predictions
and the observed choice behavior of individuals. Allowance for brand-
switching behavior is rarely made. In their study Bass, Pessemier,
and Lehmann have suggested that "there is a stochastic component of
choice which arises because of variety-seeking" (1972, p.538).
In spite of the predominance of a purely deterministic view of
the consumer decision process, a number of authors have recognized
these problems. By and large, however, most of the attempts to ac-
count for the apparent "irrationality" of brand switching have been
rather recent and fragmented. In traditional economic consumer theory
for instance, all that characterizes products is that "they are goods"
as Lancaster (1966) puts it. Yet, one gets the feeling that the con-
cepts of substitutes, complements, and elasticity of substitution are a
roundabout way of attempting to accommodate the multiplicity of pro-
duct attributes in a unidimensional utility-based model. To capture
the multidimensional nature of goods as a basis for consumer choice,
Lancaster has suggested that characteristics of goods be the direct
objects of utility, rather than goods themselves. Preference order-
ings are assumed to rank collections of goods indirectly, through the
attributes they possess. If each good is characterized by the rela-
tive amount of a fixed number of attributes it possesses, they appear,
for modeling purposes, as activity vectors in the attribute space.
Convex combinations of these points represent the combination of ex-
isting commodities in certain proportions. If we define the utility
function over the attribute space, a utility maximizing solution may
require the simultaneous consumption of two or more goods in suitable
amounts to achieve that attribute combination considered best. If one
is willing to assume taste invariance for the consumer over a certain
horizon, he can interpret such a mixed solution as a basis for ob-
served brand switching over the same horizon. With this interpreta-
tion, brand switching appears totally deterministic. It is simply a
way of overcoming the intrinsic finiteness of the goods space when
evaluations are made in a continuous attribute space.
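Lancaster's mixed-solution argument can be illustrated with a small numerical sketch. The two goods, their attribute vectors, and the concave utility below are invented for illustration; the point is only that the best convex combination of the goods' attribute vectors can lie strictly between them, so no single good is optimal:

```python
# Two goods as activity vectors in a two-attribute space (hypothetical data).
b1 = (8.0, 2.0)   # strong on attribute 1
b2 = (2.0, 8.0)   # strong on attribute 2

def utility(a1, a2):
    """An assumed concave (Cobb-Douglas style) utility over attributes."""
    return (a1 * a2) ** 0.5

# Search the convex combinations t*b1 + (1-t)*b2 over a grid of t.
best_u, best_t = max(
    (utility(t * b1[0] + (1 - t) * b2[0],
             t * b1[1] + (1 - t) * b2[1]), t)
    for t in (i / 100 for i in range(101))
)
# The optimum is the 50/50 mix (utility 5.0), strictly better than either
# pure good (utility 4.0): a deterministic basis for brand switching.
```

Under taste invariance, the optimal t can then be read as the long-run proportion of purchases devoted to each good.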

Quite a different view is proposed by Theil (1974) in his
"theory of rational random behavior." He makes a distinction between
the planning and the implementation stages of the choice process and
attributes the stochastic nature of consumer choice to the random
shocks, previously unscheduled factors, which intervene between the
consumer's preferred choice and her actual choice. For instance, "In
the housewife's case," as Theil explains, "such factors include her
mood when she is on a shopping trip, whether she meets a friend, the
availability of favorite brands, etc." (Theil, 1974, p.310). Thus,
the apparent "irrationality" of brand switching is created merely be-
cause the consumer is unable to implement her preference; if able, she
would always choose her most preferred brand. Brand loyal consumers
apparently are not so frustrated.
These explanations of brand switching are found lacking in sev-
eral respects. Theil's brand switching remains external to the choice
process. In his model of rational behavior, a random error component
is added, much as econometricians include an error term to account for
all of the random factors not incorporated into the model. Lancas-
ter's theory remains deterministic and faults the market for not pro-
viding an infinity of goods to satisfy consumer needs, forcing him to
follow a mixed strategy in order to get his desired combination of
attributes. Neither model recognizes the uncertainty faced by the
consumer in trying to evaluate choice alternatives.
Consumers make choices without complete information. Uncertain-
ty may occur because of misinformation about choice alternatives. In
the context of multiattribute choice models, uncertainty exists on
brand ratings, alternative's attributes and/or the salience of the
attributes. Psychologists have for some time recognized that man
lives in a continual state of uncertainty. Some have suggested that a
failure to recognize and adapt will be fatal. This argument carried
into consumer behavior suggests that switching behavior may be for the
purpose of acquiring information needed for efficient adaptation
(Pessemier, 1975).
At this point, mention should also be made of the probabilistic
individual choice theories developed by mathematical psychologists and
psychometricians, in the hope of explaining observed intransitive
choice behavior. For instance, Tversky's (1972) elimination-by-aspect
model (EBA) is an attempt to integrate the multiplicity of attributes
of choice objects in a probabilistic choice model. In his theory,
each alternative is viewed as a set of aspects. At each stage in the
decision process, an aspect is selected (with probability proportional
to its weight), and all alternatives that do not include the selected
aspect are eliminated. This model and much of the literature incor-
porating probabilistic notions of choice have been attempts to circum-
vent the problem posed by the principle of independence from irrele-
vant alternatives which can create intransitive preference orderings.
Apparently, few have recognized that the problem can be explained by
the multiplicity of attributes.1
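The EBA mechanism described above is easy to state procedurally. The sketch below is a minimal simulation; the brands, aspect sets, and aspect weights are invented, and aspects shared by all surviving alternatives are skipped since they cannot discriminate:

```python
import random

def eba_choice(alts, weights, rng=random):
    """One pass of Tversky's (1972) elimination-by-aspects: sample an
    aspect with probability proportional to its weight, drop the
    alternatives lacking it, repeat until one alternative survives."""
    alive = dict(alts)                       # name -> set of aspects
    while len(alive) > 1:
        # aspects held by all (or by none) of the survivors cannot
        # discriminate, so they are excluded from the draw
        pool = [a for a in weights
                if any(a in s for s in alive.values())
                and not all(a in s for s in alive.values())]
        if not pool:                         # nothing discriminates: tie
            return rng.choice(sorted(alive))
        aspect = rng.choices(pool, [weights[a] for a in pool])[0]
        alive = {n: s for n, s in alive.items() if aspect in s}
    return next(iter(alive))

# Hypothetical aspect structure and weights:
alts = {"b1": {"cheap", "red"}, "b2": {"cheap", "blue"}, "b3": {"blue"}}
weights = {"cheap": 2.0, "red": 1.0, "blue": 1.0}
choice = eba_choice(alts, weights)           # one random draw from the model
```

Repeated calls with different seeds trace out the choice probabilities that EBA implies for this aspect structure.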
In this paper, we propose to incorporate uncertainty into a
multiattribute choice model and derive its implications for consumer
behavior and brand switching. Section 2 develops the basic multiattri-
bute choice model used throughout the paper and characterizes deter-
ministic solutions. The model uses ordinal data as inputs, thus elim-
inating the problem of scale heterogeneity. Uncertainty over goods
and attributes is introduced in Section 3, where stochastic solutions
are derived and the issue of how to validate the model is discussed.
Finally, the applicability of our methodology to other multiattribute
choice models is briefly discussed.
We now develop a model for determining an individual's aggre-
gate preference ordering on a set of stimuli which are located in a
prespecified multidimensional space. Since the analysis does not in-
volve any comparisons across individuals, the procedure can be car-
ried out for each individual separately.
2.1 Definition and Notation
We adopt the following notation:
B = {b_1, b_2, ..., b_i, ..., b_m} is the brand set from which the
consumer chooses.
A = {a_1, a_2, ..., a_k, ..., a_n} is the attribute set, taken as given.2
P is an (m x m) permutation matrix representing a preference rank-
ing (aggregate or for some attribute). In general, a permuta-
tion matrix P has a one in each row and column and zeroes
1. May's (1954) paper does recognize this point.
2. While the brand set and attribute set are taken as given, their
determination is an interesting question which is not addressed here.

everywhere else. For instance, if B = {b_1, b_2, b_3} and a con-
sumer's (aggregate) preference ranking is b_1 > b_3 > b_2 (> means
preferred to), we write:

(2.1)   (b_1 b_2 b_3) P = b_1 > b_3 > b_2   with

              [1 0 0]
          P = [0 0 1]
              [0 1 0]

G denotes the set of all permutation matrices of order m
(|G| = m!), i.e., the set of all conceivable rankings of the m
choice objects.
D denotes the set of all doubly-stochastic matrices, i.e., ma-
trices with all elements between 0 and 1 and with row and col-
umn sums all equal to one (see equation 2.2 below).
Furthermore, we note that the convex hull of G, denoted c(G), spans the
set D (Birkhoff [1946] and von Neumann [1953]). In other words, G
generates a convex polytope in R^(m²), the vertices of which are the
permutation matrices P ∈ G.3 A weak ranking with one tied pair corresponds
to an edge (a face of order one) of that polytope, the edge linking the
two strict rankings compatible with that weak ranking, e.g.
b_1 > b_3 > b_2 --- (b_1,b_3) > b_2 --- b_3 > b_1 > b_2. If more than
two elements are tied, the weak ranking is a face of order h > 1. In the
sequel, a ranking is used in the weak sense, i.e., with ties allowed,
unless otherwise specified.
2.2 Model Specification
The following inputs are used to determine the individual's ag-
gregate preference ranking:
1) A set of n attributewise preference rankings, say P_1, P_2, ...,
P_k, ..., P_n (P_k ∈ G). If the consumer ranking is weak, ties can be accom-
modated by entering 1/t for the entries of the permutation matrix
corresponding to a tie class, where t is the number of brands in
that class. For instance, if the ranking is (b_1,b_3) > b_2 (b_1 and b_3
are tied), we write:

              [1/2  1/2   0 ]
(2.2)   P =   [ 0    0    1 ]
              [1/2  1/2   0 ]
Note that the row and column sums are still all 1 as in the strict
ranking case (2.1). When such a property holds, we say that we have a
doubly stochastic matrix. Formally, S is doubly stochastic whenever

(2.3)   Σ_{i=1}^{m} s_ij = 1 (for each j),   Σ_{j=1}^{m} s_ij = 1 (for each i),
        0 ≤ s_ij ≤ 1.

3. The dimension of D is at most (m-1)².

2) A set of n normalized weights, one for each attribute, de-
noted w_1, w_2, ..., w_k, ..., w_n. Weights can be determined externally by
direct consumer self-explication or through a variety of known proce-
dures, e.g., Srinivasan and Shocker (1973) or Pekelman and Sen (1974).4
Any of these procedures may be used to provide weights which become
input to the model.

(2.4)   Σ_{k=1}^{n} w_k = 1,   w_k ≥ 0
Any aggregation process must deal with the problem of scale
heterogeneity. To resolve it, we require only ordinal evaluations of
the brands. In processing these various attributewise preferences, we
hypothesize that the consumer follows a linear aggregation process 5
(2.5)   S = Σ_{k=1}^{n} w_k P_k

In general, S is a doubly-stochastic matrix in D. For instance, if
B = {b_1,b_2,b_3}, A = {a_1,a_2,a_3} and w_1 = w_2 = 1/4, w_3 = 1/2, with

        [1 0 0]         [0 1 0]         [0 0 1]
  P_1 = [0 1 0]   P_2 = [1 0 0]   P_3 = [1 0 0]
        [0 0 1]         [0 0 1]         [0 1 0]
      (b_1 b_2 b_3)   (b_2 b_1 b_3)   (b_2 b_3 b_1)

then

                                          [1/4  1/4  1/2]
(2.6)   S = 1/4 P_1 + 1/4 P_2 + 1/2 P_3 = [3/4  1/4   0 ]
                                          [ 0   1/2  1/2]
4. The model can also be used for posterior weight estimation by com-
paring the stated aggregate preference ranking with the model-
predicted ranking.
5. Although alternative processing models have been proposed, the
linear compensatory hypothesis has played a central role in the
multiattribute choice literature. Empirical support for the
hypothesis has been reviewed by Slovic and Lichtenstein (1971).
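The aggregation in (2.5)-(2.6) is a few lines of code. This sketch recomputes the example above and verifies the doubly-stochastic property (2.3); the brand-by-position matrix encoding follows (2.1):

```python
# Attributewise rankings from the example in (2.6):
P1 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]   # b1 > b2 > b3
P2 = [[0, 1, 0], [1, 0, 0], [0, 0, 1]]   # b2 > b1 > b3
P3 = [[0, 0, 1], [1, 0, 0], [0, 1, 0]]   # b2 > b3 > b1
w = [0.25, 0.25, 0.50]                   # attribute weights, summing to one

m = 3
# Equation (2.5): S = sum_k w_k P_k
S = [[sum(wk * P[i][j] for wk, P in zip(w, (P1, P2, P3)))
      for j in range(m)] for i in range(m)]
# S == [[0.25, 0.25, 0.5], [0.75, 0.25, 0.0], [0.0, 0.5, 0.5]]

# S is doubly stochastic: every row and every column sums to one.
assert all(abs(sum(row) - 1) < 1e-9 for row in S)
assert all(abs(sum(S[i][j] for i in range(m)) - 1) < 1e-9 for j in range(m))
```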

To make a decision, that is, to choose a brand on the basis of these
multiple evaluations, the consumer picks an aggregate preference rank-
ing, say P*, which "best" approximates the doubly-stochastic ranking S.
A class of solutions to this problem is afforded by the follow-
ing formulation:
(2.7)   min  d(P,S)
        P∈G

where d is some distance defined on D. In particular, it can be shown
(Blin, 1976) (i) that the Euclidean and city-block metrics both lead to
the same solution, and (ii) that this is also the solution to the follow-
ing linear assignment problem:6

(2.8)   max  Σ_i Σ_j s_ij p_ij
        P∈G
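For small m, problem (2.8) can be solved by exhaustive search over the m! permutations (a production implementation would use a linear assignment routine instead). Applied to the S of (2.6), the search recovers P* = b_2 > b_3 > b_1 with z = 7/4, the solution reported in the text:

```python
from itertools import permutations

S = [[0.25, 0.25, 0.50],
     [0.75, 0.25, 0.00],
     [0.00, 0.50, 0.50]]    # aggregate matrix from (2.6)
m = len(S)

# (2.8): over all permutation matrices P in G, maximize sum_ij s_ij p_ij.
# A tuple p encodes P by p[i] = rank position assigned to brand i.
best = max(permutations(range(m)),
           key=lambda p: sum(S[i][p[i]] for i in range(m)))
z = sum(S[i][best[i]] for i in range(m))

# best == (2, 0, 1): b2 first, b3 second, b1 third, i.e. b2 > b3 > b1,
# with z = 3/4 + 1/2 + 1/2 = 7/4.
```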
2.3 A Geometric Interpretation
The minimal distance problem in equation (2.7) seeks the brand
preference ranking which most clearly matches the aggregated attribute-
wise rankings. It also leads to a convenient geometrical representa-
tion of the problem. In the case where m=3 and n=3, the problem has a
simple two-dimensional representation (Figure 1).7 If we choose the
city-block metric, we write equation (2.7) as

(2.9)   min  Σ_i Σ_j |p_ij - s_ij|
        P∈G

In our previous example, the solution is

             [0 0 1]
        P* = [1 0 0]        i.e., b_2 > b_3 > b_1
             [0 1 0]

and z(P*) = 3/4 + 1/2 + 1/2 = 7/4.
Geometrically, this solution can be found by projecting S orthogonally
6. In view of this equivalence of solutions, the assignment formula-
tion provides a convenient and well-documented solution procedure
for implementation purposes. For a general discussion of an appli-
cation of this formulation in a marketing context, see Bernardo
and Blin (1975). Furthermore, this formulation defines a special
linear assignment problem, since the "profit" matrix (s_ij) has a
special structure, i.e., its rows and columns all sum to one.
7. As a 3² = 9 dimensional representation is impossible, we use this
(strictly speaking) ill-dimensioned figure to help understand the
rationale for the solution concept.

[Figure: the polytope D for m = 3, with vertices labeled by the six
strict rankings (e.g., b_1 > b_2 > b_3, ..., b_3 > b_2 > b_1), its
center C, the point S, and the projected solutions P* and P**.]

Figure 1. Geometrical Representation for D (m = 3)

onto the closest (in the city-block or Euclidean sense) face of the poly-
tope D, and repeating this procedure until we reach a face of order 0,
that is, a vertex (b_2 > b_3 > b_1, here); or until no improvement in z can
be found, if there are several optima. In the latter case, we could
have P**, say, if b_1 and b_3 were tied for second place (see Figure 1).
If all three brands were tied, we would get point C in the center of
this polytope. Again, referring to Figure 1 above, we see that the
wedge-shaped portions centered at C delineate the regions of D which
would lead to that vertex being chosen.

2.4 The Deterministic Solution

At this point, a purely deterministic view of consumer choice
would lead us to predict that the brand chosen by the consumer would
be the top-ranked brand in P* (where P* is the solution to problem
(2.7) for some choice of metric d). However, it must be noted that
this holds only because we restrict our solution space to G, the set
of nonstochastic orderings. If we allow for repeated purchasing of
various brands of the same product, then the stochastic ordering S may
be a feasible consumer strategy. In a sense, it amounts to a mixed
strategy where the weights assigned to each nonstochastic ordering
(and its corresponding leading brand) are such as to yield S:
(2.10)   S = Σ_h λ_h P_h   where   Σ_h λ_h = 1   and   0 ≤ λ_h ≤ 1.
In general, however, this interpretation is of little help for predic-
tive purposes since there is no unique set of weights λ_h which generates
S. All that can be stated is an upper bound on the number of permutation
matrices P_h needed to yield S: since D has dimension (m-1)², at most
(m-1)² + 1 of them are required. In this sense, the mixed strategy
interpretation, which has often been informally suggested by researchers
in the brand-switching area, is consistent with our model but is
insufficient for predictive purposes. Moreover, and more generally,
some basic sources of uncertainty in the information inputs
available to the consumer have yet to be introduced. We now examine
the role of these factors in brand switching.
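The decomposition (2.10) can be computed by a greedy Birkhoff-von Neumann procedure, sketched below. The tolerance and the order in which permutations are scanned are arbitrary choices, which is precisely why the weights λ_h are not unique:

```python
from itertools import permutations

def birkhoff_decompose(S, tol=1e-9):
    """One of many ways to write a doubly stochastic S as sum(lam * P),
    as in (2.10): repeatedly peel off a permutation in the support."""
    S = [row[:] for row in S]            # work on a copy
    m = len(S)
    terms = []
    while True:
        # any permutation whose entries all carry remaining mass
        p = next((q for q in permutations(range(m))
                  if all(S[i][q[i]] > tol for i in range(m))), None)
        if p is None:
            break
        lam = min(S[i][p[i]] for i in range(m))
        terms.append((lam, p))
        for i in range(m):
            S[i][p[i]] -= lam
    return terms

S = [[0.25, 0.25, 0.50], [0.75, 0.25, 0.00], [0.00, 0.50, 0.50]]
terms = birkhoff_decompose(S)
assert abs(sum(lam for lam, _ in terms) - 1.0) < 1e-9   # weights sum to one
```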


In spite of the wide acceptance of the role of multiple attri-
butes in consumer decisions, some researchers realize the information
gathering and processing burden this class of models imposes on the
consumer. In fact, this argument is often alluded to in explaining
the apparent lack of predictive success of such models. Realistical-
ly, at least three basic sources of uncertainty are faced by the con-
sumer: (i) which attributes are relevant to his choice; (ii) what
weight should they be given; and (iii) how does each choice alterna-
tive perform on each attribute scale. Formally, this means that both
the attribute weights wk and the attribute preference ordering Pk are
random variables whose distribution reflects the amount of uncertainty

faced by the consumer. Then

(3.1)   S = Σ_{k=1}^{n} w_k P_k

is also a random variable in D, whose distribution depends on the dis-
tribution of n, w_k, and P_k.

3.1 The Stochastic Solution

Given that the consumer chooses his most preferred brand, the
density pattern of S provides a fixed probability of choice for each
brand. Specifically, in terms of our minimal distance solution con-
cept, we can partition D into m mutually exclusive and jointly exhaus-
tive regions, one for each ranking with brand i leading (i = 1,2,...,m)
as the solution vertex, for some d. For instance, if d is the city-
block (or Euclidean) metric, we would have the situation represented
in Figure 2. Integration of the density function of S over each choice
region yields the probability of choice of each brand b_i. Practically,
these probabilities define a multinomial process for choice over B,
the brand set, and the probability of choice of brand i, say π_i, is a
function of the "bi-chosen" region and the shape and location of the
density pattern. It is interesting to note that this multinomial pro-
cess appears as a basic assumption underlying recent work modeling
brand choice, e.g., Bass (1974); Bass, Jeuland, and Wright (1976);
and Herniter (1973).

3.2. Model Validation

To be useful in predicting choice, the model must be amenable
to empirical testing. A deterministic application of the model requires
only (i) an individual's set of preference rankings on each attribute
and (ii) an individual's weighting of the attributes. With this data
it is possible to find that ranking of the brands which comes closest
to, in an already specified sense, the individual's ranking. Thus, it
is possible to test the model by comparing the predicted against the
stated rank order of the brand set across individuals. However, as
has been argued, this ignores the stochastic nature of consumer choice
behavior. What is needed is good predictions of actual frequency of
choice. If switching behavior were solely attributable to the multi-
plicity of attributes, then the frequency of choice for each brand

[Figure: the polytope D partitioned into the "b_1-chosen" region,
"b_2-chosen" region, etc., with the density of S superimposed.]

Figure 2. Geometrical Representation of Uncertainty Over the
Choice Regions
would be predicted by the first column of the S matrix.8 However,
this solution does not recognize the uncertainty in the process. In
theory, all that is required to estimate the choice probabilities, π_i,
is the joint distribution of n, P_k, and w_k, i.e., the density function
of S.
Eliciting the density function from an individual would be an
arduous, if not impossible, task. However, the individual does provide
some information about his uncertainty in the input data. By allowing
8. The elements of S provide information about the lack of concor-
dance in the attributewise rankings. Assuming equal weights, the
ij-th element of S yields the relative frequency with which the
i-th brand is ranked in the j-th position among the attributes.
Thus, the j-th column of S provides an estimate of the probability
of each brand obtaining that position in the aggregate ranking.
In essence, column one represents the expected relative frequency
of choice if the individual followed a mixed strategy.

the consumer to express ties in the attributewise rankings, Pk , it is

possible to approximate the density pattern of S by a discrete distri-
bution. Each tie represents uncertainty about the appropriate posi-
tioning of the tied brands on a given attribute. Without additional
information, each strict ranking consistent with the weak ranking
would be assumed equally probable. For instance, in the case of com-
plete uncertainty, i.e., all brands tied on each attribute, it can
easily be shown (Blin, 1976) that assuming each strict ordering P_k
equally likely for all k (and all w_k equal) yields a uniform distri-
bution for S over D and

               [1/m  1/m  ...  1/m]
(3.2)   E[S] = [1/m  1/m  ...  1/m]
               [ :    :         :  ]
               [1/m  1/m  ...  1/m]
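The complete-uncertainty case is easy to check by enumeration: averaging the m! permutation matrices of a single attribute puts mass (m-1)!/m! = 1/m in every cell, the center of the Birkhoff polytope:

```python
from itertools import permutations

m = 3
perms = list(permutations(range(m)))    # all strict rankings, equally likely

# Average the corresponding permutation matrices
# (rows index brands, columns index rank positions).
ES = [[sum(1 for p in perms if p[i] == j) / len(perms)
       for j in range(m)] for i in range(m)]

# Every entry equals 1/m: under complete ties the expected aggregate
# matrix is the constant matrix at the center of the polytope.
assert all(abs(e - 1 / m) < 1e-12 for row in ES for e in row)
```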

Thus, consumer revelation of uncertainty appears in the attri-

butewise rankings, the P_k's. In order to evaluate the distribution of S,
we then need to consider each strict ordering compatible with a given
weak ordering on each attribute and compute S for each such strict
ordering. To compute the predicted outcome under our model, we would
solve the resulting problem Min d(S,P) for each different S obtained.
By looking at the brand chosen first in the model, we would be able
to assign frequencies of occurrence to S in each bi-chosen region re-
sulting from the solution of the associated linear assignment problem.
These frequencies would then be used to compute π_i, the probability of
choosing brand i. In the absence of any alternative heuristic way to determine
the distribution of S, it appears that this solution would be prefer-
able, as it assumes very little about the consumer's preferences.
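The validation procedure just outlined can be sketched end to end. The weak rankings, equal weights, and first-found tie-breaking in the assignment step below are all our illustrative assumptions:

```python
from itertools import permutations, product

m = 3  # brands b1, b2, b3

def strict_orders(tie_classes):
    """All strict rankings consistent with a weak ranking, given as an
    ordered list of tie classes, e.g. [(0, 2), (1,)] means (b1,b3) > b2."""
    for parts in product(*(permutations(c) for c in tie_classes)):
        yield tuple(b for part in parts for b in part)

def perm_matrix(order):
    P = [[0.0] * m for _ in range(m)]
    for pos, brand in enumerate(order):
        P[brand][pos] = 1.0
    return P

def top_brand(S):
    """Solve (2.8) by brute force; ties broken by enumeration order."""
    best = max(permutations(range(m)),
               key=lambda p: sum(S[i][p[i]] for i in range(m)))
    return best.index(0)            # the brand assigned to first position

# Hypothetical weak attributewise rankings, equal weights:
weak = [[(0, 2), (1,)],             # attribute 1: (b1, b3) > b2
        [(1,), (0, 2)]]             # attribute 2: b2 > (b1, b3)
w = [0.5, 0.5]

counts = [0] * m
cases = list(product(*(list(strict_orders(wk)) for wk in weak)))
for orders in cases:                # every compatible strict combination
    Ps = [perm_matrix(o) for o in orders]
    S = [[sum(w[k] * Ps[k][i][j] for k in range(len(Ps)))
          for j in range(m)] for i in range(m)]
    counts[top_brand(S)] += 1

pi = [c / len(cases) for c in counts]   # estimated choice probabilities
# here pi == [0.5, 0.25, 0.25] under this tie-breaking
```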

Many explanations have been offered for the observed brand
switching which characterizes much of consumer choice. In this paper,
we propose a model which relies on weaker input data than required in
many multiattribute choice models. It has also provided a vehicle
for integrating the multiplicity of attributes and evaluative uncer-
tainty. Integration of these factors into a model of rational choice
is consistent with observed individual behavior.

It has been demonstrated that the origin of the brand-switching
phenomenon can be explained by the interaction of two factors: (i) the
multiplicity of evaluative dimensions for the brands, and (ii) the
consumer uncertainty over salient attributes and product performance
on these attributes. Also, we stress that our analysis integrates two
apparently divergent views of consumer choice: the deterministic view
which has evolved around multiple attribute choice models and the
stochastic view which has been proposed to explain brand switching.
Far from being a random element superimposed over an intrinsically
multiattributed evaluation process, brand switching appears in our
model as a perfectly rational strategy for a consumer faced with
imperfect knowledge about multiattributed brands.
It is clear that the impact of introducing uncertainty is not
limited by this particular model class. The implications drawn here
are independent of the specific model chosen. The methodology can
and should be applied to other models of the choice process. Viewed
in this light, Tversky's EBA model represents the introduction of
uncertainty into the basic lexicographic model.

Support for this research was provided by NSF Grant Eng #75-

1. Bass, F. M., "The Theory of Stochastic Preference and Brand
Switching," J. of Mktg. Res., 11, No. 1 (Feb. 1974), pp.1-20.
2. Bass, F. M., A. Jeuland, and G. Wright, "Equilibrium Stochastic
Choice and Market Penetration Theories: Derivation and
Comparisons," Mgmt. Sc., 22, No. 10 (June 1976).
3. Bass, F. M., E. A. Pessemier, and D. R. Lehmann, "An Experimental
Study of Relationships Between Attitudes, Brand Preference,
and Choice," Beh. Sc., 17, No.6 (Nov. 1972), pp.532-541.
4. Bernardo, J. and J. M. Blin, "A Mathematical Model of Consumer
Choice Among Multi-attributed Brands," J. of Consumer
Res., (Sept. 1977).

5. Birkhoff, G., "Tres observaciones sobre el algebra lineal,"
Universidad Nacional de Tucuman, Revista, Ser. A, 5 (1946).
6. Blin, J. M., "A Linear Assignment Formulation of the Multi-Attri-
bute Decision Problem," Revue Francaise d'Automatique,
d'Informatique et de Recherche Operationnelle, No. 2
(June 1976).
7. Fishbein, M., "Attitude and the Prediction of Behavior," in
M. Fishbein (ed.) Readings in Attitude Theory. New York:
Wiley 1967, pp.477-492.
8. Herniter, J., "An Entropy Model of Brand Purchase Behavior,"
J. of Mktg. Res., 10 (Nov. 1973), pp.361-375.
9. Lancaster, K. J., "A New Approach to Consumer Theory,"
J. of Pol. Ec., 74 (1966), pp.132-157.
10. May, K. O., "Intransitivity, Utility and the Aggregation of Pref-
erence Patterns," Econometrica, 22 (Jan. 1954), pp.1-13.
11. Pekelman, D. and S. K. Sen, "Mathematical Programming Models for
the Determination of Attribute Weights," Mgmt. Sc., 20
(April 1974), pp.1217-1229.
12. Pessemier, E. A., "Market Darwinism, Choice Theory and Marketing
Models," in Mazze, Edward M. (ed.) 1975 Combined Proceedings,
Am. Mktg. Assoc. (Aug. 1975), pp.27-30.
13. Slovic, P. and S. Lichtenstein, "Comparison of Bayesian and
Regression Approaches to the Study of Information Processing
in Judgment," Org. Beh. and Human Performance, 6, No. 6
(Nov. 1971), pp.649-744.
14. Srinivasan, V. and A. D. Shocker, "Linear Programming Techniques
for Multidimensional Analysis of Preferences," Psychometrika,
38, No.3 (Sept. 1973), pp.337-369.

15. Theil, H., "A Theory of Rational Random Behavior," J. of the

Am. Stat. Assoc. 69, No. 346 (June 1974), pp.310-314.
16. Tversky, A., "Elimination by Aspects: A Theory of Choice,"
Psych. Rev., 79, No.4 (July 1972), pp.281-299.
17. von Neumann, J., "A Certain Zero-Sum Two-Person Game Equivalent
to the Optimal Assignment Problem," in H. W. Kuhn and A. W. Tucker
(eds.) Contributions to the Theory of Games, 1, Princeton
Univ. Press, 1953, pp.5-12.

Samuel E. Bodily

Colgate Darden Graduate School of Business Administration

University of Virginia, Charlottesville, Virginia


Man's increased awareness of and control over his own safety
has introduced puzzling questions concerning the value of life.
The common practice of comparing projects on the basis of cost per
expected life saved has two major inadequacies. First, the value
of a change in the risk of death may depend on the level of risk.
Secondly, there may be strong interdependencies in the value of
risk avoidance to the members of a group. An approach which com-
bines the preferences of individuals for wealth and life-death con-
sequences through a collective utility function is suggested to
overcome these deficiencies. A concept of collective risk aversion
is introduced to account for differences between collective and in-
dividual risk preferences.

1. Introduction

Technical innovation has greatly affected the control man has
over his own safety, and society has reacted with increasing ex-
pectations for reductions in the risk to life and limb. It is im-
possible to eliminate all risks to man and any program for risk re-
duction has a price tag. Thus the degree of safety is largely a
matter of public willingness to pay for safety.
Consider the following sampling of public debate involving
the evaluation of joint life-saving activities:

• Energy        Should nuclear power plants be allowed? If
                so, what safety standards should apply to
                them?
• Environment   How much should be spent to reduce the detri-
                mental effects of pollution?
• Medical       At what price do we stop buying complicated
                medical procedures to prolong life?
• Safety        How safe is safe enough in matters such as
                the licensing of drugs, worker safety or
                transportation safety?

A convenient and popular method for comparing life-saving programs
is to rank them on the basis of least cost per expected life
saved. Leaving aside the fact that non-monetary and non-fatality
differences are ignored, the notion that there is a value for an
"expected life" is unacceptable for at least two reasons.
First, it ignores the possibility that the value of reducing
risk may depend on the level of risk. In conditions of uncer-
tainty, a life-saving program does not buy expected lives, but
rather a reduction in the probability of death for a certain set
of individuals. The value of the program may depend on the risk
category of those individuals. It may be justifiable, for exam-
ple, to spend more to reduce the probability of death by a given
amount for high-risk individuals than for low-risk individuals.
Secondly, this approach ignores interdependencies of risks
to individuals. Suppose each individual in a group of people is
exposed to the same probability of death from two separate sources:
a nuclear explosion which can kill everyone and automobile acci-
dents where fatalities occur singly in separate incidents. Using
the cost per expected life saved, the group should pay the same
amount to eliminate these two hazards. Yet there may be very good
reasons for the group to pay a different amount to avoid a nuclear
catastrophe with possible complete extinction of the group than to
avoid auto accidents, which have a much smaller effect on the group,
even though from an egoistic individual point of view the risks
are equivalent. The need, then, is to account both for individual
preferences with regard to life-saving activities and for inter-
dependencies which are important to society.
In this paper, an approach to evaluating life-saving activi-
ties is suggested which corrects for these inadequacies. We first
review work on individual willingness-to-pay (WTP) for reductions
in the probability of death, and extend it to a consistent frame-
work for collective WTP decisions. The development throughout is
based on von Neumann-Morgenstern utility functions for the wealth
and life-death consequences of program options. Using a surrogate
utility function for collective decisions, a notion of collective
risk aversion is developed to account for the risk interdependen-
cies mentioned above.
Section 2 presents a case problem to provide a context for the
discussion. An analysis from the individual point of view, based
largely on previous models, is contained in Section 3. In Section
4, an analysis based on a collective point of view is presented.
Section 5 comments on the usefulness of the approach and further
research on the problem.
The analysis reported here admittedly reduces a very com-
plicated and controversial problem to perhaps stark and oversim-
plified terms. It should not be thought of as final, but rather
as an attempt to formulate the basic elements of the problem as
a basis for discussion. Certainly, any reasoned judgements about
the value of life suffer from ignorance about its alternative.
Nonetheless, judgements and decisions must be made, and the analy-
sis which follows provides a valuable framework in which to think
about and discuss the problem.

2. A Case Problem

A group of 100 miners is considering two safety programs described
below. These miners are identical in their wealth, current
wages, family situation and mortality risk. Each program is finan-
ced by an equal deduction from the wages of each miner over the
period in which it is in effect.
Program E: A monitoring system is operated in the tunnels to

detect unsafe levels of explosive gases, thereby reducing by 1/100

the probability of an explosion in the next time period that would
kill all 100 miners.
Program F: Safety barriers are constructed in the mine shaft
to prevent miners from independently falling down the mine shaft.
The probability of death by such a fall in the next time period is
reduced 1/100 by this program.
It is assumed that the miners are indifferent with regard to
the way in which an accidental death may occur (whether by falling,
explosion, or some other way). Note that each program saves an
expected number of lives equal to one. However, the explosion has
characteristics of a low-probability catastrophe in comparison to
the risk of falling.
How much should the miners be willing to pay for these two programs?
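Before answering, it is worth checking the expected-lives arithmetic that the cost-per-expected-life criterion would use (the distributional comparison is our framing of the case):

```python
n, d = 100, 1 / 100     # miners, and the risk reduction each program buys

# Program E: removes a d chance of an explosion that would kill all n.
expected_saved_E = d * n             # 1.0 expected life

# Program F: cuts each miner's independent fall risk by d.
expected_saved_F = n * d             # also 1.0 expected life

# Same expectation, different risk profiles: E's hazard kills n at once
# with probability d; F's hazard strikes singly, and with independent
# risks of size d the chance that no one at all would die of a fall is
p_none = (1 - d) ** n                # about 0.366
```

The criterion therefore prices E and F identically; it is the collective analysis of Section 4 that can separate them.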

3. Analysis from the Individual Point of View

Acton [1], Jones-Lee [5], and Raiffa [10] have all formulated
the WTP decision for an individual's purchase of a decrease in
the probability of death. While the formulations differ in de-
tails, in each, the individual faces the decision tree in Figure 1.

[Figure: a two-branch decision tree. Upper branch (do not buy): death
with probability p; wealth w* whether dead or alive. Lower branch
(buy at cost V): death with probability p - d; wealth w* - V whether
dead or alive.]

Figure 1. An Individual's Willingness-to-Pay Decision Tree

If the individual chooses the upper branch and does not buy
the life-saving program, his probability of death is p and his
wealth is w*, whether he lives or dies. If he takes the lower
branch and buys the life-saving program, his probability of death
decreases by an amount d, and his wealth decreases by V. To find
the maximum amount of wealth the individual would pay for the pro-
gram, determine the V which makes the expected utility of the two
decision branches equal,
p u(dead,w*) + (1-p) u(alive,w*)
    = (p-d) u(dead,w*-V) + (1-p+d) u(alive,w*-V).            (1)
Here u(.,.) is the individual's von Neumann-Morgenstern utility
function for the attributes health status and wealth.
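Equation (1) rarely has a closed form, so V must be found numerically. The sketch below solves it by bisection; the concave "alive" utility and linear legacy utility are invented for illustration (scaled so the worst consequence has utility 0 and the best utility 1):

```python
import math

def wtp(u_alive, u_dead, p, d, w_star, w0, tol=1e-10):
    """Solve equation (1) for V by bisection on [0, w* - w0]."""
    def gap(V):
        lhs = p * u_dead(w_star) + (1 - p) * u_alive(w_star)
        rhs = ((p - d) * u_dead(w_star - V)
               + (1 - p + d) * u_alive(w_star - V))
        return rhs - lhs        # > 0 while buying still beats not buying
    lo, hi = 0.0, w_star - w0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if gap(mid) > 0:
            lo = mid
        else:
            hi = mid
    return lo

# Illustrative (assumed) utilities: concave when alive, linear legacy value.
V = wtp(u_alive=lambda w: 0.2 + 0.8 * math.sqrt(w),
        u_dead=lambda w: 0.15 * w,
        p=0.01, d=0.001, w_star=1.0, w0=0.0)
# V is the maximum payment for the risk reduction d, here well under 1% of w*.
```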
The life-death lottery may be decided in an instant of time
or the individual may be at risk during a period of time with his
condition at the end of the period determined by the lottery.
Hence, the individual effectively purchases an increase in his ex-
pected lifetime.
Wealth includes all assets, which may be used for consumption
if the individual lives or as a legacy to heirs if he dies. If the
lottery applies to the individual's condition at the end of a per-
iod, any income (after taxes) that may be earned in the period is
included in w* and V is paid out evenly over the time period. In
valuing the legacy, w* is reduced by inheritance taxes and supple-
mented by the amount of life insurance in determining the wealth
actually transferred to heirs.
In this formulation, V is obtainable if the utility functions
u(alive,w) and u(dead,w) are known for values of w, the level of
wealth, in the range w0 ≤ w ≤ w*, where w0 is the lower bound for wealth
(which may be arbitrarily set, for example, at zero or at the
lowest wealth which will sustain life during the period). Assess-
ing these utility functions would require responses to very dif-
ficult assessment questions, no matter how it is done. Leaving
aside for the moment any discussion of the very difficult task of
assessing these utility functions, let us consider some assump-
tions that might apply to them, and the corresponding implications
on V.
To simplify the discussion, the following notation will be used:
    L(w) = u(alive,w)     L' = ∂L/∂w evaluated at w* - V
    D(w) = u(dead,w)      D' = ∂D/∂w evaluated at w* - V
The utility function u(.,.) is arbitrarily scaled so that the
worst possible consequences have utility zero and the best pos-
sible have utility one. Hence,
    D(w0) = 0,   L(w*) = 1.
Making further notational simplifications,
    D* = D(w*),   L0 = L(w0).
Jones-Lee [5] investigated the relationship between V and d,
the change in the probability of death, and between V and p, the
initial probability of death. Based on equation (1), he showed
the following, which is presented here in slightly different
form from his.

Result 1. (Jones-Lee) If
    (a) L(w) > D(w) for all w
and (b) L' > D' > 0,
then
    (i)  ∂V/∂d > 0
and (ii) (∂/∂p)[∂V/∂d] > 0 at d = 0.

Condition (a) is that an individual prefers life at a given

level of wealth, rather than death with that level of wealth as
a legacy, a reasonable assumption. Condition (b) means he
places a positive value on an increase in wealth (or negative
value on a decrease in wealth) whether he will be alive or dead
and that the marginal value of an increase in wealth is higher if
he will be alive than if he will be dead. Since one has the op-
tion of giving some portion of his wealth to his prospective
heirs while he is still alive (probably avoiding some inheritance
tax in doing so), or of consuming it, it is expected that wealth
is more valuable when alive than dead and therefore that this
condition holds.
The result (i) is that WTP increases with d, which is ex-
pected. The result (ii) means that the marginal value of a de-
crease in risk increases with original risk. Hence an infinites-
mal reduction in risk is more valuable to a given individual if
he is employed as an auto racer than if he is employed as a

telephone operator. This is a similar result to that of Raiffa

[10] where, in a form of Russian roulette in which there are
iE{1,2, ••• ,6} bullets in a revolver with a capacity for 6 bullets,
the amount the player would pay to remove one bullet increases
with i.
It would require, in general, a search routine to solve for
V with given d and p in (1). For the special case when both
L(w) and D(w) are linear, an expression for V can be derived. The
linearity assumption implies that the individual is risk neutral
for wealth, regardless of whether he lives or dies. If the
changes in wealth are small, this may not be a bad assumption
but, of course, it would not apply in general.
Substituting linear utility functions L(w) and D(w) in equation
(1) and rearranging gives V directly:

V = (w* − w0) [ d(1 − D*) / ((p − d)D* + (1 − p + d)(1 − L0)) ]        (2)

For this simplified linear utility function, it is only necessary
to obtain D* and L0 from the individual, and p and d for the life-
saving program, to determine V. In order to meet the conditions of
Result 1, it is required that
(a) L0 > 0, D* < 1
(b) D* < 1 − L0
The bracketed portion of equation (2) gives the fraction of
wealth an individual would pay for the life-saving program. It is
not linear in either p or d.
Care must be taken not to draw unwarranted implications from
the nonlinearity of V as a function of d. For example, suppose an
individual suddenly finds that his risk of death has increased.
Should the amount he would pay to return to his previous risk be
a nonlinear function of the change in his risk? To be more con-
crete, consider the following situation. Individuals A and B
have the same preferences for health status and wealth and they
are both risk neutral for wealth, whether they live or die. Each
faces a probability of dying in the next period equal to .01. In-
dividual A finds that he has developed a disease that increases
his probability of death to .02 unless he is cured. Individual B
is trapped in an avalanche and will die for certain if not
rescued. Should Individual B pay more than 99 times as much to
be rescued as Individual A should pay to be cured?
For both individuals, the final probability, p − d, is the
same, but for A, p = .02, d = .01 and for B, p = 1, d = .99. Holding
p − d constant and differentiating through (2) with respect to p,

∂V/∂p = (w* − w0)(1 − D*) / ((p − d)D* + (1 − p + d)(1 − L0))

This expression is constant for constant p − d, hence the WTP
of individual B should be exactly 99 times that of individual A.
For the Russian roulette example discussed by Raiffa [10], an
individual with this linear utility would pay exactly five times
as much to remove all five bullets from a revolver containing
five bullets as he would to remove one bullet from a revolver con-
taining only one bullet. The interpretation of this result in
the light of Result l(ii) is that if incremental changes are made
in the probability of death, they are best made for those of
high risk, while the value of reducing the probability to a
given level is linear in the amount of the change.
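Equation (2) is easy to evaluate numerically. The sketch below (the wealth and utility values are illustrative choices of ours, satisfying conditions (a) and (b) above) confirms both points: V is nonlinear in d for fixed p, yet exactly proportional to d when p − d is held fixed, so B's WTP is exactly 99 times A's.

```python
def wtp_linear(p, d, w_star, w0, D_star, L0):
    """V from equation (2) under linear utilities L(w) and D(w)."""
    frac = d * (1 - D_star) / ((p - d) * D_star + (1 - p + d) * (1 - L0))
    return (w_star - w0) * frac

# Illustrative values meeting L0 > 0, D* < 1, and D* < 1 - L0
args = dict(w_star=100_000.0, w0=0.0, D_star=0.3, L0=0.1)

# Nonlinear in d at fixed p: doubling the risk reduction does not double V
r = wtp_linear(p=0.05, d=0.02, **args) / wtp_linear(p=0.05, d=0.01, **args)
print(r)            # slightly less than 2

# Holding p - d = .01 fixed: B (p=1, d=.99) pays exactly 99 times A (p=.02, d=.01)
V_A = wtp_linear(p=0.02, d=0.01, **args)
V_B = wtp_linear(p=1.00, d=0.99, **args)
print(V_B / V_A)    # 99.0
```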
Consider now the analysis of the case example from the point
of view of an individual miner. The decision tree faced by the
miner is identical for programs E and F, so V is the same for
each program. From the above, however, the actual value of V in-
creases with the initial probability of death, p.
The individualistic analysis of this section demonstrates in
part the inadequacy of comparing life-saving programs simply on
the basis of cost per expected life saved. However, life-saving
programs are usually instituted by groups of people and the anal-
ysis must be extended to the collective level.
Although the discussion in this section has dealt only with
life-saving programs, the analysis may be adapted to include pro-
grams which reduce the probability of injury or sickness. If a
finite number of health states are allowed in the utility func-
tion u(s,w), the results could be extended to WTP for programs
which reduce the risk of various undesirable health states.

4. Analysis From A Collective Point of View

The collective choice problem will be approached as a problem

in the aggregation of multidimensional consequences. Viewed in
this way, it is like the individual decision we have just exam-
ined, where health status consequences and wealth consequences
were aggregated using a von Neumann-Morgenstern utility function.
Here the consequences of a decision are characterized by a vector
of numbers x = (s1, w1, s2, w2, ..., sN, wN), where xi = (si, wi) are the
health status and wealth consequences to individual i. A von
Neumann-Morgenstern utility function U(s1, w1, s2, w2, ..., sN, wN) ag-
gregates the collective consequences into a scalar measure of
desirability. It is assumed, then, that the group will choose the
alternative available to them which maximizes the expected value of
U. Since the function U(·) does not represent the preference of
any single individual, we will refer to it as a surrogate utility
function (SUF).
It is useful to allow an intermediate stage of aggregation
wherein the group members aggregate the consequences to them-
selves using personal utility functions ui(xi), i = 1,2,...,N,
prior to the aggregation of personal utilities [6]. Then the
SUF has the form of a social welfare function

U = f(u1(x1), u2(x2), ..., uN(xN))        (3)

Use of this form necessitates interpersonal comparisons of

utility levels. Since the definitions of zero utility and the
unit of utility are arbitrary, it has been argued that such com-
parisons are meaningless (see [13] for a digest of those argu-
ments). How these interpersonal comparisons might be meaning-
fully made is an old, largely unresolved, problem [13]. This
problem will be avoided in our discussion, since for our purposes
in this first-cut analysis, it is sufficient to assume that all
individuals in the group have the same preferences, and hence the
same utility function. More concretely, we assume that ui is the
same function for i = 1,2,...,N, and we will scale the ui's iden-
tically by choosing two consequence levels, xi0 and xi*, and assign-
ing ui(xi0) = 0, ui(xi*) = 1, i = 1,2,...,N. This does not imply
that the utility level will be the same for all i for any
alternative, since, in general, their consequences will differ.

Our basis for comparison of utilities is, then, that if two in-
dividuals have identical consequences, their utility levels will be
equal. Properties of the SUF are discussed in [2]. By applying in-
dependence properties of multiattribute utility functions [7], the
function f in (3) takes on simplified forms. Table 1 gives
three independence assumptions on collective choice and the addi-
tive, multiplicative, and multilinear forms of the SUF which fol-
low from these assumptions.
Assumption 1 is a statement about the separability of deci-
sions over lotteries affecting one individual from the impacts
on all others. In other words, if the consequences to all but
one individual are constant, it is not necessary to even know
what happens to everyone else in order to make decisions for the
individual whose consequences do vary. This assumption implies
that U has the multilinear form, which is a weighted sum of the
products of the utilities of all subgroups of individuals.
The second assumption is a statement about the separability
of joint decisions affecting the pair i and j from the impacts
on all others. In other words, if the consequences to all but i
and j are constant, the tradeoffs between the consequences to i
and j can be made without even knowing what happens to everyone
else. This is a second-order separability condition, whereas
Assumption 1 is a first-order condition. This assumption in con-
junction with Assumption 1 implies U has the multiplicative form.
If the multiplicative form is expanded, it will be found that it
is a special case of the multilinear form with N+1 constants
rather than 2^N − 1 (e.g., kij in the multilinear form is replaced by
k ki kj in the multiplicative). In the multiplicative form, ki is a
relative weighting of the importance of individual i's utility and
k weights the importance of interactions between individual util-
ities, as we shall discuss later in the paper.
Both Assumptions 1 and 2 seem appropriate in the context of
collective life-saving decisions. Without Assumption 1, the SUF
could not be expressed in the form of (3) as a function of individ-
ual unconditional utilities. It is a minimal condition for con-
structing a social welfare function.
Assumption 2 is somewhat more restrictive in that it decomposes
the problem of making tradeoffs among all individuals into the
problem of making tradeoff comparisons for all pairs of individuals.

Table 1

Assumption 1. (First-Order Mutual Utility Independence)

Collective choices among lotteries involving only changes in
the consequences to individual i do not depend on the constant
consequences to all other individuals, for all i.

Assumption 2. (Second-Order Mutual Preferential Independence)

Collective choices among alternatives involving only changes
in the level of consequences to individuals i and j do not depend
on the constant consequences to all other individuals, for all
pairs i,j.

Assumption 3. (Additive Independence)

Collective choices among lotteries depend only on the marginal
probability distributions of consequences to each individual and
not on their joint probability distribution.

Resultant Forms

1. Assumption 1 implies that U has the multilinear form

   U(x) = Σi ki ui(xi) + Σi Σj>i kij ui(xi) uj(xj) + ... + k12...N u1(x1) u2(x2) ... uN(xN)

2. Assumptions 1 and 2 together imply that U has the multiplicative form

   kU(x) + 1 = Πi [k ki ui(xi) + 1],   k ≠ 0

3. Assumption 3 implies that U has the additive form

   U(x) = Σi ki ui(xi)


It would be rejected if it seemed that tradeoffs among the conse-
quences to individuals in subgroups containing 3 or more individuals
might not conform to pairwise tradeoffs between the consequences
to any two individuals in the subgroup. At this point, there is
no compelling reason for rejecting either of these assumptions.
Assumption 3 is much less innocuous. It implies that the
group should be indifferent between the lotteries illustrated in
Figure 2.

Figure 2. Lotteries with Differing Collective Risk for Individuals
i and j. (In L1, either both i and j are alive with wealth w or
both are dead with legacy w; in L2, on one branch i is alive and
j is dead, and on the other i is dead and j is alive, each with
wealth w.)

In each of these lotteries, everyone but individuals i and j has
the same certain consequences. Focusing, then, on i and j, note
that in L1 they both live or they both die, depending on the out-
come of the lottery, and in L2 one lives and one dies in each
branch of the lottery. The wealth is the same for i and j in both
lotteries and known with certainty. In each lottery the expected
number of lives lost is one. Using an additive SUF, the expected
utility of the two lotteries is equivalent.

From the totally egotistic individual point of view, L1 and
L2 are equally desirable, since they have the same expected utility
to each individual. Using the analysis of the previous section, i
and j would have the same WTP to avoid either lottery. However,
from a collective point of view, the lotteries may not be equally
desirable. In L2, at least one of the pair (i,j) remains alive on
each branch of the lottery, while in L1, it is possible that both
die. Hence there is the added risk in L1 of losing the pair
(i,j). It may be perfectly reasonable, then, for the group to
prefer L2 to L1.
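This comparison can be checked numerically. The sketch below (the death utility D and the constant k are illustrative choices of ours) uses a symmetric multiplicative SUF, scaled so that U = 0 when every individual utility is 0 and U = 1 when every individual utility is 1; with k < 0 it prefers L2, while the additive SUF is indifferent between the two lotteries.

```python
def multiplicative_suf(utils, k):
    """Symmetric multiplicative SUF, k != 0, scaled so that U = 0 when every
    u_i = 0 and U = 1 when every u_i = 1 (c solves k + 1 = (k*c + 1)**N)."""
    n = len(utils)
    c = ((k + 1) ** (1.0 / n) - 1) / k
    prod = 1.0
    for u in utils:
        prod *= k * c * u + 1
    return (prod - 1) / k

D = 0.3      # illustrative utility of death at the common wealth level
k = -0.5     # k < 0: collective risk aversion

# L1: both live or both die (50-50); L2: exactly one of the pair dies
EU_L1 = 0.5 * multiplicative_suf([1.0, 1.0], k) + 0.5 * multiplicative_suf([D, D], k)
EU_L2 = 0.5 * multiplicative_suf([1.0, D], k) + 0.5 * multiplicative_suf([D, 1.0], k)
print(EU_L2 > EU_L1)    # True: the group prefers L2

# The additive SUF is indifferent between the two lotteries
add = lambda utils: sum(utils) / len(utils)
EU_L1_add = 0.5 * add([1.0, 1.0]) + 0.5 * add([D, D])
EU_L2_add = 0.5 * add([1.0, D]) + 0.5 * add([D, 1.0])
print(abs(EU_L1_add - EU_L2_add) < 1e-12)   # True
```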
On the other hand, there may be good reasons for the group to
prefer L1 to L2. Suppose i and j are a childless couple, for exam-
ple. It may make sense for them to face the lottery together
rather than have the situation where one remains alive without the
other. In any case, it is unlikely that the lotteries are equally
preferred, and hence we expect that Assumption 3 is unacceptable
and therefore that the additive SUF is not appropriate.
A preference of L2 over L1 constitutes a kind of collective
aversion to risk, a special case of the multivariate risk aver-
sion of Richard [11]. The collective risk may be thought of as
a risk of partial or full extinction of the group in the same way
that an individual risk is of personal extinction. In L2, only
one of the group will die, while in L1, there is a 50-50 gamble
of losing a pair of individuals. The notion is easily extended to
a risk of losing any number of group members.

Collective Risk Aversion: A group of N people exhibits col-
lective risk aversion if they prefer situation A to situation
B below for any probability p and any wealth level, w.

A. The consequences to the individuals are determined by N
statistically independent, two-pronged lotteries LA, each having
a chance p of the good outcome (life, w) and a chance 1 − p of
the bad outcome (death, w) for the individual.

B. The group faces the lottery LB with a probability p of ob-
taining the good collective outcome (all alive, each with wealth
w) and a probability 1 − p of the bad collective outcome (all
dead, each with legacy w).

If A and B are equally desirable to the group for all p and

w, we could describe the group as collectively risk neutral; if
B is preferred to A for all p and w, then the group exhibits
what we might call collective risk-seeking behavior.
It can be shown that any additive SUF exhibits collective
risk neutrality. The multiplicative SUF exhibits collective risk
aversion when k < 0, and the extent of collective risk aversion
decreases with k (−1 < k < ∞). A sufficient, but not necessary,
condition for collective risk aversion in the multilinear SUF is
that all of the constants (k's) are negative. In general, the mul-
tilinear SUF exhibits collective risk aversion if the second par-
tial derivatives ∂²U/∂ui∂uj, for i,j = 1,2,...,N, i ≠ j, are nega-
tive for possible values of ui and uj.

Collective risk aversion is a preference for a mixture of
good and bad outcomes for the members of the group to "all or
nothing" propositions. Alternatively it may be thought of as an
aversion to risk in the number of people who die in a single event.
Should we expect a group to exhibit collective risk aversion?
Wilson [14] argues "... a risk involving n people simultaneously
is n² (not n) times as important as an accident involving one per-
son. Thus a bus or aeroplane accident involving 100 people is
as serious as 10,000, not merely 100, automobile accidents killing
one person." While others are less emphatic about the particular
relationship between the level of hazard and the number of people
implicated in a particular fatal event, the sentiment expressed by
Wilson is common. Witness the much more stringent standards placed
on larger aircraft like Boeing 747's than on smaller aircraft
like Boeing 737's. Wilson [14] does some simple calculations to
show that recent safety requirements on nuclear reactors cost
$750 million per life saved where lives are lost approximately

1000 at a time, and that we spend in this country about $80,000

per life saved for automobile seat belts, where lives are saved
one or a few at a time. The fact that this has occurred in our
public spending does not necessarily justify collective risk aver-
sion. However, collective risk aversion is a matter of group
judgement, and it seems apparent that when the group is society as
a whole, the property has been operative in the past.
Consider now VE and VF, the collective WTP for programs E and
F of the case problem. Clearly program F eliminates a risk of the
form of situation A and program E eliminates a risk of the form of
situation B. When collective risk aversion applies, then, the
group considers the explosion lottery less desirable than the fall
lottery, and would pay more to eliminate such a risk, i.e.,

VE > VF

It follows that the inequality is reversed if the group ex-

hibits collective risk seeking behavior, and becomes an equality
under collective risk neutrality. The difference VE - VF may be
thought of as a kind of collective risk premium. It represents the
excess cost each individual incurs to avoid the risk where out-
comes are positively correlated.
We can find the exact values of VE or VF in a way similar to
that for the individual analysis. Figure 3 shows the decision tree

Individual utilities: 1 on the all-survive branch, D* on the
all-die branch. Corresponding surrogate utilities: 1 and
[(1 + kcD*)^N − 1]/k.

Figure 3. Decision Tree for Program E


for program E, assuming the SUF has the multiplicative form scaled
so that surrogate utility is zero when all individual utilities are
zero and one when all individual utilities are one. Symmetry in
the importance weighting of individual utilities is assumed here
by replacing ki with a constant c, i = 1,2,...,N. Equating expected
surrogate utility for the two decision branches we obtain

(1 + k)(1 − p) + p(1 + kcD*)^N = (1 − p + d)(1 + kcLE)^N + (p − d)(1 + kcDE)^N

where LE = L(w* − VE), DE = D(w* − VE). A simple search algorithm
is necessary to solve for VE. The equation for finding VF is
somewhat more complicated, but no different in principle.
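A search of this kind can be sketched with simple bisection. All numerical choices below (the linear forms for L and D, the wealth range, and N, k, p, d) are illustrative assumptions of ours, not values from the paper:

```python
def bisect(f, lo, hi, iters=200):
    """Bisection root-finding; assumes f(lo) and f(hi) have opposite signs."""
    flo = f(lo)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(mid) * flo > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative linear individual utilities and parameters (ours)
w_star, w0, L0, D_star = 100_000.0, 0.0, 0.1, 0.3
L = lambda w: L0 + (1 - L0) * (w - w0) / (w_star - w0)   # L(w0)=L0, L(w*)=1
D = lambda w: D_star * (w - w0) / (w_star - w0)          # D(w0)=0,  D(w*)=D*

N, k, p, d = 10, -0.5, 0.01, 0.005
c = ((k + 1) ** (1.0 / N) - 1) / k    # scales the symmetric multiplicative SUF

def EU(terms):
    """E[U] from (probability, branch product) pairs: k*E[U] + 1 = sum."""
    return (sum(pr * t for pr, t in terms) - 1) / k

EU_no_pay = EU([(1 - p, (1 + k * c) ** N),
                (p,     (1 + k * c * D_star) ** N)])

def EU_pay(V):    # each member pays V; collective death risk falls to p - d
    return EU([(1 - p + d, (1 + k * c * L(w_star - V)) ** N),
               (p - d,     (1 + k * c * D(w_star - V)) ** N)])

V_E = bisect(lambda V: EU_pay(V) - EU_no_pay, 0.0, w_star - w0)
print(round(V_E, 2))
```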
How does the collective WTP compare to the individual WTP
found in the previous section? One result of that analysis was
that the individual WTP was the same for the two programs, i.e.,
Ve = Vf, where the lower-case subscripts indicate the analysis is
from the individual point of view. Since the lotteries in Program
F are statistically independent, it seems natural that the in-
dividual and collective WTP be the same. This can be easily
demonstrated for the multilinear SUF, regardless of its collective
risk properties, as follows.
The expected utility of the multilinear SUF may be written

E[U] = Σi ki E[ui] + Σi Σj>i kij E[ui uj] + ... + k12...N E[u1 u2 ... uN]

where E is the expectation operator. When the individual risks
are independent, this becomes

E[U] = Σi ki E[ui] + Σi Σj>i kij E[ui] E[uj] + ... + k12...N E[u1] E[u2] ... E[uN]

Each of the individual lotteries and utility functions are iden-
tical, hence E[ui] = E[uj], i,j = 1,2,...,N. Then maximizing E[U] is
equivalent to maximizing E[ui] for any i = 1,2,...,N. Hence the
WTP using E[ui] is the same as using E[U], and

VF = Vf

One implication of this and the previous results is that

VE > Ve when collective risk aversion is exhibited. The

difference again constitutes a kind of collective risk premium, or

the amount the individuals pay to avoid the collective risk, over
and above any risk premium to avoid individual risk.
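For the symmetric multiplicative form with independent, identically distributed individual utilities, this equivalence can also be checked directly: k E[U] + 1 = (k c E[ui] + 1)^N, which is increasing in E[ui], so the collective ranking follows the individual one. A small sketch (parameter values are ours):

```python
def EU_mult_indep(m, N, k, c):
    """Expected symmetric multiplicative SUF when the N individual utilities
    are independent and identically distributed with mean m:
    k*E[U] + 1 = (k*c*m + 1)**N."""
    return ((k * c * m + 1) ** N - 1) / k

N, k = 5, -0.5
c = ((k + 1) ** (1.0 / N) - 1) / k   # scaling so U = 1 when every u_i = 1

# A program that raises each individual's expected utility also ranks
# higher collectively: the group ordering follows the individual one.
print(EU_mult_indep(0.9, N, k, c) > EU_mult_indep(0.8, N, k, c))   # True
```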

5. Conclusion

In this paper, the problem of comparing life-saving activities

has been discussed. Utilizing previous work on the problem, it is
first analyzed from an individual point of view, using utility
functions for wealth when alive and legacy wealth. Individual
utilities are then aggregated for group decision making by a sur-
rogate utility function. The multilinear and multiplicative forms
for the SUF which allow for collective risk aversion seem rea-
sonable, while the additive form does not. A simple case exam-
ple was analyzed to demonstrate that comparing life-saving act-
ivities on the basis of least cost per expected life saved is
inadequate.
It was observed in the analysis from the individual point
of view that willingness to pay for a reduction in the probabil-
ity of death depends on the level of initial risk. This effect
will, of course, carry over in the analysis from the collective
point of view. Relationships between WTP and initial risk and
between WTP and the change in risk were investigated.
When the surrogate utility function exhibits collective risk
aversion, programs to reduce risks where the consequences to in-
dividuals are positively correlated (such as nuclear accidents)
are worth more than programs to reduce independent risks (like
auto accidents). The extent of collective risk aversion determines
the difference between WTP for these two kinds of programs.
Many assumptions were made in our analysis which may be
relaxed in future work in this area. It was assumed, for example,
that each individual had the same utility function for health
status and wealth, hardly a realistic assumption. The problem
becomes much more complex, however, when individual utility func-
tions differ.
Nothing was said about how wealth is distributed to the
other members of the group when a person dies. Differing dis-
tribution policies may affect both individual and collective WTP.
The programs we compared were much simpler than those that

are actually available. Comparing programs which affect individ-

uals in different ways, and where the individuals have differing
initial risks, could be investigated in future work.
The development of a complete decision methodology based on
the ideas presented here would require careful consideration of
how to obtain the needed utility functions.
Many questions about the comparison of joint life-saving
activities remain. Nonetheless, the approach taken appears to
be a fruitful one, especially in relation to current methods and


References

1. Acton, Jan Paul, "Evaluating Public Programs to Save Lives:

The Case of Heart Attacks," Rand Corporation Report R-950-RC,
Santa Monica, California, January, 1973.

2. Bodily, Samuel E., Collective Choice with Multidimensional

Consequences, Technical Report #127, Operations Research Cen-
ter, M.I.T., July, 1976.

3. Ferreira, Joseph Jr., and Slesin, Louis, "Observations on the

Social Impact of Large Accidents," Technical Report #122,
Operations Research Center, M.I.T., October 1976.

4. Fishburn, Peter C., Utility Theory for Decision Making, Wiley,

New York, 1970.

5. Jones-Lee, Michael, "The Value of Changes in the Probability

of Death or Injury," Journal of Political Economy, Vol. 82,
No.4, (1974), pp. 835-849.

6. Keeney, Ralph L., and Kirkwood, Craig W., "Group Decision

Making with Cardinal Social Welfare Functions," Management
Science, Vol. 22, No. 4, (December 1975), pp. 430-437.

7. Keeney, Ralph L., and Raiffa, Howard, Decision Analysis with

Multiple Objectives, Wiley, New York, 1976.

8. Linnerooth, Joanne, "A Critique of Recent Modeling Efforts
to Determine the Value of Life," Research Memorandum RM-75-67,
International Institute of Applied Systems Analysis, Vienna,
December 1975.

9. Meyer, Richard F., "Some Notes on Discrete Multivariate

Utility," Harvard Business School, Mimeographed Manuscript
March, 1972.

10. Raiffa, Howard, "Preferences for Multi-Attributed Alterna-

tives," Rand Corporation Memorandum, RM-5868-DOT/RC, Santa
Monica, California, 1969.

11. Richard, Scott F., "Multivariate Risk Aversion, Utility In-
dependence and Separable Utility Functions," Management
Science, Vol. 22, No. 1, (September 1975), pp. 12-21.

12. Schelling, T.C., "The Life You Save May be Your Own," in
Problems of Public Expenditure, edited by S.B. Chase, Brook-
ings Institution, Washington, 1968.

13. Sen, Amartya, Collective Choice and Social Welfare, Holden-
Day, San Francisco, 1970.

14. Wilson, Richard, "The Costs of Safety," New Scientist,
Vol. 68, (30 October 1975), pp. 274-275.

Vira Chankong
Mekong Secretariat/Planning Unit
National Energy Administration
Bangkok 5,Thailand

Yacov Y. Haimes
Professor of Systems Engineering
and Civil Engineering


This paper presents an interactive and modified version of the

Surrogate Worth Trade-off method for multiobjective decision-making
which was developed by Haimes and Hall [1974]. An attempt is made to
develop an algorithm that is theoretically appealing and yet
intuitively simple to understand and implement, particularly from
the point of view of the decision-maker (DM). The method recognizes and
emphasizes the importance of both the structural part, by insisting
that only the Pareto optimal solutions need be considered as candi-
dates for the final decision, and the nonstructural part, by pro-
viding a simple and effective procedure by which the DM and the ana-
lyst can interactively and systematically explore the Pareto optimal
set while trying to maximize the DM's unknown utility function. This
results in a sequence of improving Pareto-optimal solutions which,

under some rational and consistent choices on the part of the DM,
converges to a solution having maximum DM utility. The DM-analyst
dialogue as well as the tasks to be performed by both are kept simple.
The method applies to both linear and nonlinear multiobjective de-
cision-making problems. Other features in which the method has its
strengths and weaknesses are discussed and demonstrated by means of
a numerical example.

There is a proliferation of existing techniques for solving a
multiobjective decision-making problem (MDM). The emphasis and style
of each of these techniques depend largely on the field of expertise
of its developer(s). Social scientists and economists, for example,
tend to look at an MDM problem from the 'human factor' viewpoint and
concentrate mainly on the role of the decision-maker (DM). Here all
major activities evolve around the DM's subjective value judgments
and the task of translating those judgments into some form of prefer-
ence function (e.g., multiattribute utility function (see for example
[Fishburn, 1973]) or indifference function (for example [MacCrimmon
and Wehrung, 1975]), etc.). On the other hand, those who are well
equipped with mathematical tools and more familiar with 'mechanical'
systems tend to look at an MDM problem from the 'structural' view-
point and try to attack it with mathematical rigor. While both may
have their own merits and may be suitable for some types of problems,
the former group is criticized as being ignorant of (physical)
details (of the problem) whereas the latter group is sometimes criti-
cized for being too involved with only the physical aspects by

choosing to ignore some potentially significant elements of the

problem. Such elements are ignored (by the latter group) not because
their impacts are considered negligible but mainly because they tend
to defy quantification and hence rigorous mathematical treatment.
Since 1970, a number of multiobjective decision-making methodolo-
gies based on some kind of 'DM-Analyst' interaction have emerged.
The general common framework underlying each of these techniques
can be identified with the following three-step procedure:
Step 1) The analyst generates noninferior (also known as
Pareto optimal or efficient or nondominated)
solutions based on a mathematical model representing
the structure of the system.
Step 2) Along with each noninferior solution, the analyst
obtains all necessary and meaningful information with
which to interact with the DM.
Step 3) The DM assesses his preference based on this informa-
tion. And based on the DM's preference assessment,
the preferred (final) solution is then chosen from the
noninferior set.
But this is as far as the similarity goes. The methods differ
from one another greatly in the way in which each of the above steps
is treated and emphasized. The STEM method for linear MDM problems
(Benayoun et al [1971]), for example, employs a modified version of
the weighted norm problem (using the ℓ∞-norm) as a means of generating
noninferior solutions in step 1. It uses the payoff matrix as the
form of information to interact with the DM in step 2. And finally,
the DM's subjective value judgment is used to change the feasible
region from iteration to iteration in step 3. The method is inter-
active in the sense that steps 1 through 3 are performed consecu-

tively in each iteration. Various other interactive methods for

linear MDM problems with similar frameworks have also been proposed
by (to name only a few) Savir [1966], Maier-Rothe and Stankard [1970],
Belenson and Kapur [1973], Zionts and Wallenius [1975], and Thiriez
and Zionts [1976]). For nonlinear MDM problems, Geoffrion [1970] and
Monarchi et al [1973] proposed methods that put special emphasis on
steps 2 and 3. In both methods, the DM is simply supplied with
nothing more than the current values of the objective functions to
which the DM responds by providing appropriate subjective indiffer-
ence trade-off values (or marginal rate of substitution) in the
former or by providing appropriate intervals of aspiration goals in
the latter. This information is then used to modify the objective
function for generating a new point in step 1 of the next iteration.
Unfortunately, neither method guarantees that the generated solution
in each iteration (as well as the final solution) will be Pareto
optimal. Other drawbacks for both methods also exist. For example,
for Geoffrion's method, it has consistently been mentioned (see, for
example, Wallenius [1975]) that the estimation of subjective in-
difference trade-off values by the DM (required in each iteration)
is, in practice, very difficult to accomplish. For a more detailed
review see Chankong [1977]. Haimes and Hall [1974] and Haimes et al
[1975] took a different route (but still with the same three-step
structure) in developing the Surrogate Worth Trade-off (SWT) method.
In this method, the constraint problem is used as a means of genera-
ting noninferior solutions. Objective trade-offs, whose values can
be easily obtained from the values of some strictly positive Kuhn-
Tucker multipliers from step 1, are used as the information carrier
in step 2. And in step 3, the DM responds by expressing his degree
of preference over the prescribed trade-offs by assigning numerical

values (on an ordinal scale between, say, −10 and +10) to each variable
Wkj (the surrogate worth, as it is called by Haimes and Hall).
Several favorable features of the SWT method can be identified. Some
of these are: i) the use of the constraint approach in step 1
guarantees that all noninferior solutions can, in principle, always
be generated even for nonconvex problems, ii) the proposed objec-
tive trade-offs used in step 2 are easy to obtain, informative, com-
prehensible and easy for the DM to work with, and iii) it is generally
easier to assess preference on the already determined trade-offs than
to estimate numerical trade-offs to satisfy some preset criteria as
required by some other methods. The SWT method was extended by Hall
and Haimes [1976] to handle multiple DMs.
There is, however, room for some improvement, particularly in the
way the information from the DM is utilized. In its original noninter-
active version, after all Wkj are obtained from the DM, curve fitting
or multiple regression analysis is done to try to relate Wkj to trade-
offs and the current levels of objectives. After this is done, a
system of equations corresponding to Wkj(·) = 0 for all j ≠ k is
solved to find a point in the indifference band. In going through
these mechanical steps, we may be performing illegitimate mathematical
operations in the sense that the empirical information contained in
Wkj may be destroyed. Moreover, several noninferior points generally
need to be generated to provide sufficient data for the above steps.
In this paper, an interactive version of the SWT method is pro-
posed. The underlying philosophy, which is by no means new in this
rapidly growing research area, has been to develop a procedure by
which the DM and the analyst can work in close harmony during the
complex MDM process -- a procedure that allows the complex details
of the internal (quantifiable) structure of the MDM problem to be

effectively exploited while at the same time provides an effective

mechanism for treating other important 'subjective' elements as well.
Apart from overcoming the setbacks of the original version of the SWT
method, this interactive version is computationally more efficient
and theoretically more attractive. An added appeal of this version
is that it provides the DM with an environment and opportunity to be-
come more knowledgeable about the system's behavior, thereby helping
him make a subsequent preference assessment with greater accuracy
and consistency as the number of iterations goes on. The method is
applicable to linear as well as nonlinear MDM problems.


First we state the multiobjective decision-making problem we wish to solve.

Problem 1: Given a multiobjective optimization problem MOP, where MOP is

    min (f_1(x), ..., f_n(x)) over x ∈ X,  X = {x | x ∈ E^N, g_i(x) ≤ 0, i = 1,...,m},

f_j: E^N → R for all j = 1,...,n and g_i: E^N → R for all i = 1,...,m, find x* ∈ X*, where X* is the set of noninferior solutions of MOP, such that x* solves max U(f_1,...,f_n) over x ∈ X*, where U(·) is a utility function defined on F = {f(x) | x ∈ X}, assumed to exist and known only implicitly to the DM.
However, since most scalar optimization techniques can only guarantee local solutions, we shall be content, as a solution to Problem 1, with a local noninferior solution of the MOP. We, therefore, modify Problem 1 to read as follows:

Problem 2: Given an MOP, find x* ∈ X*, the set of local noninferior solutions of MOP, such that x* solves max U(f_1,...,f_n) over x ∈ X*.
Throughout this paper the following assumptions will always hold:
Assumption 1: U: F → R exists and is known only implicitly to the DM. Moreover, it is assumed to be a continuously differentiable and monotone nonincreasing function on F.

Assumption 2: All f_j, j = 1,...,n and all g_i, i = 1,...,m are twice continuously differentiable in their respective domains.

Assumption 3: X is compact (for every feasible P_k(ε), as defined below, its solution exists and is finite).
The ISWT method, like the SWT method, is designed to solve Problem 2. Again, the ε-constraint problem P_k(ε), where P_k(ε) is min{f_k(x) | f_j(x) ≤ ε_j for all j ≠ k and x ∈ X}, is utilized as a means of obtaining local noninferior solutions of the MOP.
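Although P_k(ε) would in practice be handed to a nonlinear programming code, its mechanics can be sketched on a toy instance. The example below is ours, not the paper's (f_1 = x², f_2 = (x − 2)² over a scalar x, with a grid search standing in for a real solver); it solves P_1(ε_2) and estimates the Kuhn-Tucker multiplier λ_12 = −df_1*/dε_2, the trade-off rate that the method reports to the DM:

```python
# Sketch of the eps-constraint problem P_k(eps): minimize a primary objective
# f_k subject to f_j(x) <= eps_j for j != k.  Toy instance (ours, not the
# paper's): f1 = x^2, f2 = (x - 2)^2, scalar x; a grid search stands in for
# a proper NLP solver.

def solve_P1(eps2):
    """Return (x*, f1(x*)) for min f1(x) s.t. f2(x) <= eps2, x in [0, 4]."""
    f1 = lambda x: x * x
    f2 = lambda x: (x - 2.0) ** 2
    feasible = [i / 1000.0 for i in range(4001) if f2(i / 1000.0) <= eps2]
    x_star = min(feasible, key=f1)
    return x_star, f1(x_star)

def tradeoff(eps2, h=1e-3):
    """Estimate lambda_12 = -d f1*/d eps2, the trade-off reported to the DM."""
    _, lo = solve_P1(eps2 - h)
    _, hi = solve_P1(eps2 + h)
    return -(hi - lo) / (2 * h)

x_star, f1_star = solve_P1(1.0)  # x* = 1.0, f1* = 1.0
lam = tradeoff(1.0)              # analytic value (2 - sqrt(eps))/sqrt(eps) = 1
```

A constrained NLP solver used in Step 1 would return the multipliers directly; the finite difference here only illustrates their trade-off interpretation.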
The proposed method follows all the steps of the SWT method up to the point where all the surrogate worth values (W_kj for all j ≠ k) corresponding to a local noninferior solution are obtained from the DM. Careful study reveals that there is a close relationship between W_kj and the directional derivative of the utility function evaluated at the current local noninferior solution and in the direction defined in terms of the associated Kuhn-Tucker multiplier. In light of this relation, a "model-DM" interactive on-line scheme can be constructed in such a way that the values of all W_kj are used to determine the direction (in a reduced objective space) in which the utility function, although unknown (and its true form is never required in the computation), increases most rapidly. In this way we can construct a sequence of local noninferior points, each an improvement (i.e., of higher utility value) over the previous one, thereby converging to an unconstrained optimum (i.e., ending in the indifference band, W_kj = 0 for all j ≠ k) or a constrained optimum (i.e., the next possible improved point would be infeasible). A complete detailed development of the ISWT method can be found in Chankong [1977].
The following is an outline of the structure of the ISWT method.
Figure 1 shows this structure in a flow diagram.






[Flow diagram of Steps 0-4 below, with the DM supplying W^i_kj for all j ≠ k at Step 2.]

Figure 1  The Outlining Structure of the ISWT Method


Step 0 (Initialization)  Select f_k as a primary objective. Guess an initial ε^0.

Step 1 (Local noninferior solution)  With the current ε^i, formulate P_k(ε^i) and solve for a (strict) local solution x^i. Obtain all necessary trade-off information at this point.

Step 2 ("Worth" assessments)  Exchange information obtained from Step 1 with the DM and, in return, obtain the worth values W_kj for all j ≠ k from the DM.

Step 3 (Termination)  Check stopping criteria and, if they are not satisfied, proceed.

Step 4 (Update)  Use W^i_kj to update ε^i → ε^(i+1) and return to Step 1.
Observe the part played by the DM (Step 2) in dictating the "best" direction for the search away from the current trial point. The necessary theoretical machinery for Step 1 of the ISWT method can be found in Chankong and Haimes [1976] and Haimes and Chankong [1977]. To illustrate the basic idea underlying the above steps, we shall describe the salient features of the ISWT method by means of a simple problem with three objective functions: f_1(x), f_2(x) and f_3(x). The set of feasible alternatives X is, for simplicity, assumed to be E^2. To generate noninferior solutions we shall use the constraint problem P_1(ε) defined as

    min f_1(x)
    subject to f_2(x) ≤ ε_2 .....(α)
               f_3(x) ≤ ε_3 .....(β).

For a selected ε^0, let x^0 solve P_1(ε^0), with λ^0_12 > 0 and λ^0_13 > 0 being the corresponding optimal Kuhn-Tucker multipliers of (α) and (β), respectively. Assuming that the regularity and second-order sufficiency conditions are satisfied at x^0, x^0 is then a local noninferior solution. Also, λ^0_12 and λ^0_13 represent the partial trade-off rates at x^0. This allows us to interact with the DM by asking:

Question 1: "Given that f_i = f_i(x^0), i = 1,2,3, how (much) would you like to decrease f_1 by λ^0_12 units per one unit increase in f_2 while f_3 remains unchanged?"

Suppose the DM responds by assigning W^0_12 = +8, indicating that he particularly likes that trade. This means that if we can come up with a new point with f_2 = f_2(x^0) + δf_2 (for some small δf_2 > 0), f_1 = f_1(x^0) - λ^0_12 δf_2 and f_3 = f_3(x^0), this point will be a better point than x^0 according to the DM's preference. By the interpretation of λ^0_12, it is intuitively obvious that one way to obtain an approximation of such a point is to change the R.H.S. of (α) from ε^0_2 to ε^0_2 + δf_2 (keeping ε_3 = ε^0_3) and re-solve the problem.
Likewise, the DM is asked a similar question which utilizes λ^0_13 and interchanges the roles of f_2 and f_3. Suppose again that the DM responds by assigning W^0_13 = -4, indicating that he rather prefers (although not as enthusiastically as previously) to increase f_1 and decrease f_3 by the approximate ratio of λ^0_13 (units f_1/unit f_3) with f_2 unchanged. Thus, if a new point can be found so that at this new point f_3 = f_3(x^0) - δf_3 for some small δf_3 > 0, f_1 = f_1(x^0) + λ^0_13 δf_3 and f_2 = f_2(x^0), the DM will prefer the new point to x^0. Again a linear approximation of such a point can be obtained by changing the R.H.S. of (β) from ε^0_3 to ε^0_3 - δf_3 (keeping ε_2 = ε^0_2) and re-solving the problem. Taking the DM's responses to both questions together, one would expect that the solution x^1 of a new problem P_1(ε^1), where ε^1_2 = ε^0_2 + δf_2 and ε^1_3 = ε^0_3 - δf_3 (for some small δf_2 > 0 and δf_3 > 0), would be a better point than x^0 according to the DM's preference.

That this is indeed true, assuming that the DM's structure of preference can be characterized by a monotone nonincreasing utility function U, can be demonstrated more rigorously, as will be done shortly. It is interesting to observe that, under the specified assumptions and if δf_2 and δf_3 are sufficiently small, x^1 (≠ x^0) always exists and is unique (hence it is a local noninferior solution). Equally interesting is the fact that, under the same assumptions, we are able to independently control the levels of f_2 and f_3 and still remain on the locally noninferior surface in the neighborhood of x^0 (see Theorem 5 (part a) of Haimes and Chankong [1977] or Chankong [1977]). This allows the DM to make all necessary assessments and answer the two questions independently.

For the case where λ^0_12 > 0 and λ^0_13 = 0 (or vice versa), as suggested by Theorem 5 (part c) in Haimes and Chankong [1977], if we make a small change of δf_2 units in f_2, then not only does f_1 change by approximately -λ^0_12 δf_2, but f_3 also changes by (∇f_3(x^0) · (dx/dε_2)|_{ε^0}) δf_2 units. For this reason, in this case, we go to the DM and ask the following:

Question 2: "Given that f_1 = f_1(x^0), f_2 = f_2(x^0) and f_3 = f_3(x^0), how (much) would you like to decrease f_1 by λ^0_12 units per one unit increase in ε_2?"

Again, at the end of this questioning period, we will have the value of W^0_12 representing the DM's preference over the specified trade-off, which can be used in an updating procedure similar to that of the previous case.
In the above discussion, we have used the signs of W^0_12 and W^0_13 (or of W^0_12 only in the latter case) as a clue to getting a new point. The basic question still remains as to how much ε_2 and ε_3 should be changed to obtain the best available new point. The clue again lies in the relative values of W^0_12 and W^0_13 (as well as the scaling factors for f_2 and f_3). In the following development, we consider an arbitrary i-th iteration. In the case where W^i_12 and W^i_13 (assume for the moment that λ^i_12 > 0 and λ^i_13 > 0) can be measured on a ratio scale (which may be possible upon imposing some strict conventions similar to those used by Miller [1970]), an updating scheme similar to those used in mathematical programming can be developed. Suppose that W^i_12 and W^i_13 are obtained through Question 1. Then W^i_12 indicates the degree of the DM's preference, on some ratio scale between -10 and +10, toward exchanging (decreasing) λ^i_12 units of f_1 per one unit increase in f_2 with f_3 remaining unchanged. With this observation, it can be reasoned (see Chankong [1977]) that there exists a monotone increasing real-valued function γ^i_2 with γ^i_2(0) = 0 such that

    γ^i_2(W^i_12) = ∇U(f_1(x^i), f_2(x^i), f_3(x^i)) · (-λ^i_12, 1, 0)^T      (1)

                  = (∂U/∂f_2)^i - (∂U/∂f_1)^i λ^i_12,      (2)

where (∂U/∂f_j)^i = ∂U(f_1(x^i), f_2(x^i), f_3(x^i))/∂f_j, j = 1,2,3.

Observe that the R.H.S. of (1) is the directional derivative of the utility function U(·) in the direction (-λ^i_12, 1, 0)^T, which simplifies to (2).
Likewise, we can also establish that there exists a monotone increasing real-valued function γ^i_3 with γ^i_3(0) = 0 such that

    γ^i_3(W^i_13) = (∂U/∂f_3)^i - (∂U/∂f_1)^i λ^i_13.      (3)

Using (2), (3), the Sensitivity Theorem (Luenberger [1973], p. 263) and Theorem 5a of Haimes and Chankong [1977], it can be shown that, for each local noninferior x̄ in a neighborhood of x^i, there exists an ε̄ in a neighborhood of ε^i such that

    U(f(x̄)) - U(f(x^i)) ≈ Σ_{j=2}^{3} γ^i_j(W^i_1j) Δε^i_j      (4)

where Δε^i_j = ε̄_j - ε^i_j, j = 2,3.
For a detailed proof of (4) see Chankong [1977]. Also from the above reference, a similar result for the case where λ^i_12 > 0 and λ^i_13 = 0 (or vice versa) can be established.

Examination of (4) reveals that if we make the sign of δε_j the same as that of W^i_1j, j = 2,3, the R.H.S. of (4) is always nonnegative, indicating a possible increase in U. Moreover, finding a new local noninferior solution x^(i+1) having the best increase in U(·) over x^i is equivalent to finding the right values of δε_2 and δε_3 which maximize the R.H.S. of (4). By maximizing the R.H.S. of (4) with respect to δε_2 and δε_3 subject to appropriate constraints, the following updating scheme (from ε^i to ε^(i+1)), similar to the steepest ascent method (or, more precisely, Zoutendijk's feasible directions), can be developed (see Chankong [1977] for more details).
Case 1  If λ^i_12 > 0 and λ^i_13 > 0, set ε^(i+1)_j = ε^i_j + α^i W^i_1j |f_j(x^i)|, j = 2,3, where α^i is the step size to be determined.

Case 2  If λ^i_12 > 0 and λ^i_13 = 0 (or vice versa), update in the same manner only the ε_j whose multiplier λ^i_1j is positive.
The procedure employed by Geoffrion, Dyer and Feinberg [1972] can be used to determine the step size here. By drawing graphs of f_j(α) = f^i_j + α δε^i_j, j = 1,2,3, against α in some feasible interval (0, ᾱ) on the same diagram, the DM can be asked to examine the diagram and to estimate the value of α that satisfies him most. Note that δε^i_1 ≈ -Σ_j λ^i_1j δε^i_j for Case 1 and δε^i_1 ≈ -λ^i_12 δε^i_2 for Case 2. Observe also that ᾱ is an upper bound on α^i within which the feasibility of P_k(ε^i) is preserved. A possible way of estimating ᾱ is suggested in Chankong [1977].
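One pass of the Case-1 update above can be sketched as follows; the worth values, objective levels and step size here are illustrative stand-ins, not values from the paper (any c^i scaling of the worths is absorbed into the step size):

```python
# One pass of the Case-1 update: eps^{i+1}_j = eps^i_j + alpha * W_1j * |f_j(x^i)|
# for j = 2, 3.  The sign of each eps-change follows the sign of the DM's worth
# value, so the utility change estimated by (4) is nonnegative.  All numbers
# below are illustrative stand-ins, not values from the paper.

def update_eps(eps, worths, f_vals, alpha):
    """Return the updated epsilon vector (dicts keyed by objective index j)."""
    return {j: eps[j] + alpha * worths[j] * abs(f_vals[j]) for j in eps}

eps_i  = {2: 4.0, 3: 6.0}      # current R.H.S. of the two constraints
worths = {2: -3.0, 3: -2.0}    # DM's surrogate worths W_12, W_13 on [-10, 10]
f_vals = {2: 4.0, 3: 6.0}      # current levels f_2(x^i), f_3(x^i)
alpha  = 0.05                  # step size chosen by the DM from the graphs

eps_next = update_eps(eps_i, worths, f_vals, alpha)
# both worths are negative, so both constraints are tightened
# (eps_2 and eps_3 decrease, to about 3.4 and 5.4)
```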


Using this scheme, one important question remains to be answered. We know from the way we construct ε^(i+1) from ε^i that if there exists x̄^(i+1) ∈ X such that f_j(x̄^(i+1)) = ε^(i+1)_j for j = 2 and 3, then it is guaranteed that U(f_1(x̄^(i+1)), f_2(x̄^(i+1)), f_3(x̄^(i+1))) ≥ U(f_1(x^i), f_2(x^i), f_3(x^i)).

However, the next point we actually use is x^(i+1), which solves P_1(ε^(i+1)) and which may or may not be exactly x̄^(i+1). One may ask then whether or not we can still say that x^(i+1) is a better solution than x^i for, if not, the whole idea of this interactive scheme, which aims at generating a sequence of improving local noninferior solutions, would be jeopardized. Upon imposing the assumption that U is a monotone nonincreasing function of f(x) = (f_1(x), f_2(x), f_3(x)) and observing that f(x^(i+1)) ≤ f(x̄^(i+1)) (since x^(i+1) solves P_1(ε^(i+1)) and ε^(i+1)_j = f_j(x̄^(i+1)) for all j ≠ k), the question can be quickly resolved. That is, U(f_1(x^(i+1)), f_2(x^(i+1)), f_3(x^(i+1))) ≥ U(f_1(x̄^(i+1)), f_2(x̄^(i+1)), f_3(x̄^(i+1))) ≥ U(f_1(x^i), f_2(x^i), f_3(x^i)), where the first inequality is provided by the monotone nonincreasing property of U and the last inequality is due to our construction of ε^(i+1).
Finally, there is the question concerning stopping criteria. The most natural point to stop the process is when the DM is satisfied with the current solution. This can happen in two ways: 1) when the DM is absolutely satisfied, in the sense that he feels the current point has the highest utility, or 2) when the DM wants to move away from the current point but all the directions in which he wants to move (i.e., directions of increasing utility) are either infeasible or unusable, indicating that the current point is the point of constrained maximum utility. The occurrence of the first possibility will be indicated when W^i_1j = 0 for all j satisfying λ^i_1j > 0. That this is indeed the case is demonstrated in Chankong [1977]. The second possibility (constrained maximum) is more difficult to check. Possible ways of accomplishing this task are also suggested in Chankong [1977].


Under the assumption of an 'ideal' DM, it is not difficult to show that the proposed ISWT method is nothing else but a modified Zoutendijk feasible direction-finding scheme augmented to the SWT method to solve the following problem: max U(f_1(x),...,f_n(x)) over x ∈ X*, where X* is the set of noninferior solutions of the MDM problem. Thus, the convergence of the interactive part of the ISWT method follows from the convergence of the modified Zoutendijk method. For a more detailed discussion on the convergence of the ISWT method see Chankong [1977].

We now demonstrate the mechanics of the ISWT method by means of an illustrative example which is designed to test the method under the assumption of an ideal DM (i.e., one who is consistent and rational, with a well-defined structure of preference as represented by a utility function). This example should also show that, barring the DM's inconsistency, inaccuracy and irrationality, the method should perform well in terms of convergence and other computational efficiency. A more realistic illustration, which demonstrates the learning process of the DM and the simulation capability provided by the ISWT method, can be found in Chankong [1977].

Consider the following multiobjective decision-making problem (MDM):

MOP_1:  min (f_1(x), f_2(x), f_3(x))
subject to g_1(x) = -x_1 ≤ 0
           g_2(x) = -x_2 ≤ 0,
where f_1(x) = (x_1 - 3)^2 + (x_2 - 2)^2, f_2(x) = x_1 + x_2 and f_3(x) = x_1 + 2x_2.

For illustrative purposes, we shall assume that the (imaginary) DM is consistent, rational, and will always give a response, in assessing W_kj, based on a well-defined structure of preference, in the sense that his structure of preference can be accurately represented by the utility function U(f_1, f_2, f_3) (although no one, not even the DM, knows it), where

    U(f_1, f_2, f_3) = 1 - f_1/30 - f_2/15 - f_3/30   if 0 ≤ f_1 ≤ 10, 0 ≤ f_2 ≤ 5, 0 ≤ f_3 ≤ 10      (12)

and U(f_1, f_2, f_3) = 0 otherwise.

Observe that U is a monotone nonincreasing function of f_1, f_2 and f_3 and has [0,1] as its range.
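Reading (12) as U = 1 − f_1/30 − f_2/15 − f_3/30 on the stated box (a reconstruction consistent with the utility values tabulated below), the imaginary DM can be simulated directly:

```python
# Utility (12) used to simulate the ideal DM: linear and monotone
# nonincreasing in (f1, f2, f3) on the stated box, zero outside it.

def f(x1, x2):
    return ((x1 - 3) ** 2 + (x2 - 2) ** 2,  # f1
            x1 + x2,                        # f2
            x1 + 2 * x2)                    # f3

def U(f1, f2, f3):
    if 0 <= f1 <= 10 and 0 <= f2 <= 5 and 0 <= f3 <= 10:
        return 1 - f1 / 30 - f2 / 15 - f3 / 30
    return 0.0

u_star = U(*f(1.5, 0.0))     # reported optimum: rounds to 0.64167
u0     = U(*f(2.75, 1.625))  # iteration 0: rounds to 0.50156
```

The two computed utilities reproduce U^0 = .50156 and U(x*) = .64167 reported for this example.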
It should be stressed that the explicit form of the utility function in (12) is used in this example purely for simulating the DM, so that we may illustrate other aspects of the ISWT method. Specifically, it is used to simulate values of W^i_kj (i.e., W^i_kj obtained this way are as if they had been obtained from the ideal (imaginary) DM directly).

Let us now choose f_1(x) as our primary objective and formulate the corresponding ε-constraint problem P_1(ε):

    P_1(ε_2, ε_3): min f_1(x) = (x_1 - 3)^2 + (x_2 - 2)^2
    subject to f_2(x) = x_1 + x_2 ≤ ε_2
               f_3(x) = x_1 + 2x_2 ≤ ε_3
               g_1(x) = -x_1 ≤ 0
               g_2(x) = -x_2 ≤ 0.

In applying the ISWT method, P_1(ε_2, ε_3) will be used to generate noninferior solutions and (12) will be used to simulate our imaginary DM. To be more specific, W^i_12 and W^i_13 at the i-th iteration will be obtained through the following expressions:

    γ^i_2(W^i_12) = c^i W^i_12 / |f_2(x^i)| = (∂U/∂f_2)^i - (∂U/∂f_1)^i λ^i_12      (13)

    γ^i_3(W^i_13) = c^i W^i_13 / |f_3(x^i)| = (∂U/∂f_3)^i - (∂U/∂f_1)^i λ^i_13.      (14)

Moreover, the step size α^i, which determines the point of maximum utility along the direction d^i, can also be simulated by computing it from

    α^i = -A^i/B^i      (15)

where

    A^i = (2a^i_2 - a^i_3)(x^i_1 - 3) + (a^i_3 - a^i_2)(x^i_2 - 2) + a^i_2 + a^i_3/2,
    B^i = (2a^i_2 - a^i_3)^2 + (a^i_3 - a^i_2)^2,
    a^i_2 = c^i W^i_12 |f_2(x^i)| and a^i_3 = c^i W^i_13 |f_3(x^i)|.

Note 1)  The expressions for A^i and B^i are obtained by substituting f_j = f^i_j + α a^i_j, j = 2,3, into (12) and then solving max_α U(α) analytically for α.

Note 2)  A^i calculated from the above expression can be shown to always be negative.
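Equation (15) can be checked numerically with the iteration-0 data from Table 1, assuming both ε-constraints stay binding along the path (so that, for this example, x_1 = 2ε_2 − ε_3 and x_2 = ε_3 − ε_2); a grid search over α recovers −A^0/B^0 as the maximizer of U:

```python
# Numerical check of (15): alpha^i = -A^i/B^i maximizes U along the search
# direction.  Iteration-0 data from Table 1; we assume both eps-constraints
# stay binding, so x1 = 2*eps2 - eps3 and x2 = eps3 - eps2.

a2, a3 = -1.11654, -0.9        # a^0_j = c^0 W^0_1j |f_j(x^0)|
x1_0, x2_0 = 2.75, 1.625       # solution of P_1(4.375, 6)

def U_along(alpha):
    """Utility (12) along the path eps_j(alpha) = eps^0_j + alpha * a_j."""
    x1 = x1_0 + alpha * (2 * a2 - a3)
    x2 = x2_0 + alpha * (a3 - a2)
    f1 = (x1 - 3) ** 2 + (x2 - 2) ** 2
    f2, f3 = x1 + x2, x1 + 2 * x2
    return 1 - f1 / 30 - f2 / 15 - f3 / 30

A = (2 * a2 - a3) * (x1_0 - 3) + (a3 - a2) * (x2_0 - 2) + a2 + a3 / 2
B = (2 * a2 - a3) ** 2 + (a3 - a2) ** 2
alpha_formula = -A / B         # about 0.72; A < 0, as stated in Note 2
alpha_grid = max((i / 1000 for i in range(2001)), key=U_along)
```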
Using this imaginary DM, we can now solve the MDM problem, as characterized by MOP_1, iteratively. Results for each iteration are displayed in Table 1. Figure 2 displays the trajectory of improving noninferior solutions, while Figure 3 exhibits the direction of search in the reduced objective space (i.e., the 'f_2-f_3' plane) for each of those iterations. To understand how the ISWT method actually works, we give the following description of the first two iterations.

Iteration 0.  Choose initial (ε^0_2, ε^0_3) = (4.375, 6) and solve P_1(4.375, 6) to get (x^0_1, x^0_2) = (2.75, 1.625), (f^0_2, f^0_3) = (4.375, 6), f^0_1 = .203 and λ^0_12 = .25, λ^0_13 = .25.

From this information, the (imaginary) DM, by giving values of W^0_12 and W^0_13 (as simulated by (13) and (14)), determines the direction of search in the f_2-f_3 plane to be d^0 = (-0.7786, -0.6276) (calculated from the vector (a^0_2, a^0_3)), as displayed in Figure 3. Also, according to the DM, we should move along d^0 from the point (f^0_2, f^0_3) = (4.375, 6) (i.e., point A in Figure 3) up to the point (3.6, 5.37) (i.e., point B in Figure 3, corresponding to point x̄^1 in Figure 2).

Observe that U^0 = .50156 < .5185 = U(x̄^1).

Iteration 1.  From the previous iteration, we have (ε^1_2, ε^1_3) = (3.6, 5.37). Solving P_1(3.6, 5.37) yields (x^1_1, x^1_2) = (2.3, 1.3), (f^1_2, f^1_3) = (3.6, 4.9), f^1_1 = .98, and λ^1_12 = 1.4 and λ^1_13 = 0.
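These iteration-1 values can be verified directly from the definitions of f_1, f_2, f_3 in MOP_1 (again reading (12) as U = 1 − f_1/30 − f_2/15 − f_3/30):

```python
# Arithmetic check of iteration 1: objective levels and utility at
# x^1 = (2.3, 1.3), with (12) read as U = 1 - f1/30 - f2/15 - f3/30.

x1, x2 = 2.3, 1.3
f1 = (x1 - 3) ** 2 + (x2 - 2) ** 2    # 0.98
f2 = x1 + x2                          # 3.6
f3 = x1 + 2 * x2                      # 4.9
u1 = 1 - f1 / 30 - f2 / 15 - f3 / 30  # 0.564, matching U^1 in Table 1
```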

It is appropriate to point out one feature of the ISWT method that reflects the self-correcting mechanism of this interactive scheme. Not knowing the full detail of the internal structure of the system, the DM acts according to his preference and suggests the movement from x^0 to x̄^1 (Figure 2). (Note again that U(x̄^1) > U^0 = U(x^0).) The level of objectives corresponding to x̄^1 is (f_1(x̄^1), f_2(x̄^1), f_3(x̄^1)) = (1.3924, 3.6, 5.37). After consulting the model, we find that the point x̄^1 is actually an inferior solution and in fact the point x^1 = (2.3, 1.3), which is noninferior, clearly dominates x̄^1, since f_1(x^1) = .98 < f_1(x̄^1) and f_3(x^1) = 4.9 < f_3(x̄^1), while f_2(x^1) = 3.6 = f_2(x̄^1). Moreover, U^1 = .564 > U(x̄^1) > U^0, indicating the transitivity of preference (i.e., x^1 is really a better point than x^0 in terms of utility, as well as being noninferior).

Using x^1 and all the information associated with it, the DM gives W^1_12, W^1_13 and step size α^1, which determine the direction of search d^1 and the distance along d^1 that we should move (as displayed in Figure 3).
Note: At x^1, which is the optimal point of P_1(3.6, 5.37), the constraint f_3(x) ≤ 5.37 is not binding and λ^1_13 = 0. We could use either of the two alternatives described in the previous section to deal with this "zero-valued" Lagrange multiplier case. By making ε^1_3 a little smaller than 4.9 (while keeping ε^1_2 = 3.6) and solving the corresponding P_1(ε^1_2, ε^1_3), we could get a solution with both multipliers positive to work with. But as ε^1_3 → 4.9, λ^1_13 → 0.


Table 1

 i   ε2^i     ε3^i     x1^i     x2^i    λ12^i    λ13^i    c^iW12^i|f2(x^i)|  c^iW13^i|f3(x^i)|  α^i    U^i
 0   4.375    6        2.75     1.625   .25      .25      -1.11654           -0.9               1.68   .50156
 1   3.6      5.37     2.3      1.3     1.4      0*       -.25920            .80003             1.32   .56400
 2   3.25732  3.84192  2.36838  .73677  0*       1.26322  -.70734            .12951             .29    .59844
 3   3.04913  3.88004  2.21822  .83091  .78894   .77462   -.37532            -.11310            .48    .60143
 4   2.89900  3.83480  1.96320  .93580  2.01880  .0548    .00527             -.46333            1.02   .60532
 5   2.90427  3.37147  2.27429  .54859  0*       1.45140  .53124             .17103             .28    .61165
 6   2.67263  3.41984  1.92542  .74721  1.79724  .35642   .04935             -.25093            1.98   .61702
 7   2.57493  2.92313  2.18463  .36925  0*       1.63076  -.43482            .17965             .33    .62009
 8   2.40935  2.98286  1.83584  .57351  1.80365  .52466   .03799             -.14078            2.52   .62694
 9   2.31438  2.63091  1.99785  .31655  .64165   1.36264  -.24253            .08367             .42    .63007
10   2.21289  2.66592  1.75986  .45303  1.86662  .61366   -.02177            .09153             2.80   .63257
11   2.15187  2.40936  1.89438  .25749  .93746   1.27378  -.16400            .05298             .49    .63427
12   2.06989  2.43385  1.70573  .36396  1.90420  .68354   -.01368            -.06244            2.92   .63582
13   2.02992  2.25128  1.80856  .22136  1.20848  1.1744   -.10872            .02946             .57    .63686
14   1.96799  2.26802  1.66796  .30003  1.92822  .73586   -.00927            -.04529            3.14   .63773
15   1.93885  2.12566  1.75204  .18681  1.36546  1.13046  -.07951            .01965             .63    .63839
16   1.83839  2.13813  1.63865  .24974  1.94488  .77782   -.00655            -.03386            3.32   .63905
17   1.86637  2.02587  1.70747  .15920  1.09654  1.09654  .10494             .01321             .43    .63939
18   1.82144  2.03156  1.60728  .21112  1.99312  .79232   .00076             -.02857            2.00   .63956
19   1.81992  1.97442  1.66542  .15450  1.64732  1.02184  -.03894            .00284             .84    .63996
20   1.78892  1.97679  1.60105  .18787  1.97154  .82636   -.00304            .02262             3.04   .64015
21   1.77980  1.90895  1.65067  .12913  1.65558  1.04308  -.03637            .00523             .82    .64035
22   1.75013  1.91320  1.58706  .16307  1.97790  .84798   -.00251            -.01639                   .64053

* Conceptually, when one λ^i_1j (j = 2,3) = 0, one should make the corresponding constraint tighter to ensure that both multipliers are positive. However, for this particular problem, since as ε_j → f_j(x^i) from below, λ^i_1j → 0, one may use λ^i_1j = 0 and x^i in further computations.

P_1(ε_2, ε_3): min f_1(x) = (x_1 - 3)^2 + (x_2 - 2)^2
    s.t. f_2(x) = x_1 + x_2 ≤ ε_2
         f_3(x) = x_1 + 2x_2 ≤ ε_3
         g_1(x) = -x_1 ≤ 0
         g_2(x) = -x_2 ≤ 0

[Plot in the x_1-x_2 plane showing the trajectory of solutions x^0, x^1, x^2, ..., ending near x* = (1.5, 0) with U(x*) = .64167.]

Figure 2  Solution Trajectory in 'x_1-x_2' Space

Thus we can use λ^1_13 = 0 and x^1 in further computation as long as we know that ε^1_3 will be decreasing.

The process continues in this manner. As there is no substantial change after 20 iterations, the process is stopped at the 22nd iteration, at which (6.24, 1.75, 1.91), λ^22_12 = 1.978, λ^22_13 = .85 and U^22 = 0.64053.

[Plot in the ε_2-ε_3 plane showing point A = (4.375, 6), point B = (3.6, 5.37), and the search directions d^0 through d^6 along the trajectory.]

Figure 3  Solution Trajectory in 'ε_2-ε_3' Space

This result compares favorably with the true optimum, which is x* = (1.5, 0) with U(x*) = 0.64167.

We conclude this example with a remark that, when we have the ideal DM and when optimality has meaning and is well defined, the ISWT method will always converge to an optimal solution. However, the rate of convergence is slow (first-order convergence) near the optimum due to the zigzagging characteristic that is inherent in Zoutendijk's feasible direction method (which is used in part of the construction of the ISWT method).


In this paper we have proposed an interactive algorithm for
assisting the DM in the multiobjective decision-making process.
The proposed method, which can be applied to both linear and non-
linear problems, was based partly on the main results developed in
Haimes and Chankong [1977] and partly on a newly developed relation-
ship between the utility function and the surrogate worth function
constructed by interacting with the DM. In this algorithm the
analyst generates a noninferior solution and provides the DM with
information about local behavior of the model in the form of 'trade-
offs' which are the values of appropriate Kuhn-Tucker multipliers.
The DM then reacts to such information and provides his preference
assessment in the form of 'surrogate worth.' Using the DM's values
of the surrogate worth, the analyst then generates a new noninferior
solution which has higher utility (according to the DM's preference)
than the previous solution. The process continues until the DM is
satisfied with the current solution. Under the assumption that the
DM is consistent, rational, and has a well-defined structure of
preference (i.e., his preference structure can be represented by a
monotonic nonincreasing utility function), the ISWT method can be shown to converge to a best-compromise point (the point of maximum utility).

A real-world case study performed elsewhere (Chankong [1977]) also seems to indicate that the ISWT method
1) is simple to understand and implement from the point of
view of the DM,
2) allows the DM to experiment with several possible al-
ternatives thus making the task of choosing the best al-
ternative easier and more accurate.
Experience reveals that the DM is usually reluctant to make a
judgment on something he does not understand and/or when he does
not know how his judgment is going to be used. This is why proper-
ties 1 and 2 are quite essential for a 'good' multiobjective
methodology. In the ISWT method those properties are contributed
by close participation of the DM in the actual solution procedure
which subsequently results in the DM learning from the model. To
make these claims more convincing, obviously, more experiments
using a variety of decision-makers and problems should be done and
the results statistically analyzed.


References

Belenson, S.M. and K.C. Kapur, "An Algorithm for Solving Multicriterion Linear Programming Problems with Examples," Operational Research Quarterly, 24, No. 1, March 1973, pp. 65-77.
Benayoun, R., J. de Montgolfier, J. Tergny and O.I. Larichev, "Linear Programming with Multiple Objective Functions: STEP Method (STEM)," Mathematical Programming, 1, No. 3, 1971, pp. 366-375.
Chankong, V., "Multiobjective Decision-Making Analysis: The Interactive Surrogate Worth Trade-off Method," Ph.D. dissertation, Systems Engineering Department, Case Western Reserve University, Cleveland, Ohio, January 1977.
Chankong, V. and Y.Y. Haimes, "On Multiobjective Optimization
Theory: A Unified Treatment," submitted for publication in
Operations Research, 1976.
Cohon, J.L. and D.H. Marks, "A Review and Evaluation of Multiobjective Programming Techniques," Water Resources Research, 11, No. 7, April 1975, pp. 208-220.
Fishburn, P.C., "Bernoullian Utilities for Multiple-Factor Situations," in J.L. Cochrane and M. Zeleny, Eds., Multiple Criteria Decision Making, University of South Carolina Press, Columbia, South Carolina, 1973, pp. 47-61.
Geoffrion, A.M., J.S. Dyer and A. Feinberg, "An Interactive
Approach for Multi-Criterion Optimization with an Application to
the Operation of an Academic Department," Management Science,
19, No.4, 1972, pp. 357-368.
Haimes, Y.Y., Hierarchical Analyses of Water Resources Systems: Modeling and Optimization of Large-Scale Systems, McGraw-Hill International Book Company, New York, New York, 1977.
Haimes, Y.Y. and V. Chankong, "Kuhn-Tucker Multipliers as Trade-offs in Multiobjective Decision-Making Analysis," submitted for publication in Automatica.
Haimes, Y.Y. and W.A. Hall, "Multiobjectives in Water Resources Systems Analysis: The Surrogate Worth Trade-off Method," Water Resources Research, 10, No. 4, 1974, pp. 615-624.
Haimes, Y.Y., W.A. Hall and H.T. Freedman, Multiobjective Optimization in Water Resources Systems, Elsevier Scientific Publishing Co., Amsterdam, 1975.
Hall, W.A. and Y.Y. Haimes, "The Surrogate Worth Trade-off Method with Multiple Decision-Makers," in Multiple Criteria Decision Making: Kyoto, 1975, M. Zeleny, Editor, Springer-Verlag, Inc., New York, 1976, pp. 207-233.

References (continued)
Lasdon, L.S., Optimization Theory for Large Systems, MacMillan Co., New York, N.Y., 1970.
Luenberger, D.G., Introduction to Linear and Nonlinear Programming,
Addison-Wesley, Reading, Massachusetts, 1973.
MacCrimmon, K.R. and D.A. Wehrung, "Trade-off Analysis: Indifference and Preferred Proportion," Proceedings of a Workshop on Decision Making with Multiple Conflicting Objectives, International Institute for Applied Systems Analysis, Schloss Laxenburg, Austria, Oct. 1975.
Maier-Rothe, C. and M.F. Stankard, Jr., "A Linear Programming Approach to Choosing between Multi-objective Alternatives," presented at the 7th Mathematical Programming Symposium, The Hague, 1970.
Miller III, J.R., Professional Decision-Making: A Procedure for Evaluating Complex Alternatives, Praeger Publishers, New York, 1970.
Miller, D.W. and M.K. Starr, The Structure of Human Decisions, Prentice-Hall, Englewood Cliffs, New Jersey, 1967.
Monarchi, D.E., C.E. Kisiel and L. Duckstein, "Interactive Multiobjective Programming in Water Resources: A Case Study," Water Resources Research, 9, No. 4, August 1973, pp. 837-850.
Rarig, H., "Two New Measures of Performance and Parameter Sensi-
tivity in Multiobjective Optimization Problems," M.S. Thesis,
Case Western Reserve University, Cleveland, Ohio, 1976.
Savir, D., "Multi-objective Linear Programming," Operations Research Center, College of Engineering, No. ORC 66-21, University of California, Berkeley, California, 1966.
Steuer, R.E., "An Interactive Linear Multiple Objective Programming
Approach to Forest Management," presented at ORSA/TIMS joint
national meeting at Miami, Florida, Nov., 1976.
Stevens, S.S., "Measurement, Psychophysics and Utility," in C.W.
Churchman and R. Ratoosh (eds.), Measurement: Definitions and
Theories, John Wiley and Sons, New York, 1959, pp. 18-63.
Thiriez, H. and S. Zionts, eds., Multiple Criteria Decision Making, Jouy-en-Josas, France 1975, Springer-Verlag, Berlin, Heidelberg, New York, 1976.
Wallenius, J., "Comparative Evaluation of Some Interactive
Approaches to Multicriterion Optimization," Management Science,
Vol. 21, No. 12, August 1975.
Zeleny, M., Multiple Criteria Decision Making, Kyoto 1975, Springer-
Verlag, Berlin, Heidelberg, New York, 1976.
Zionts, S. and J. Wallenius, "An Interactive Programming Method for Solving the Multiple Criteria Problem," Management Science, Vol. 22, No. 6, February 1976.

James S. Dyer
University of California, Los Angeles
Rakesh K. Sarin
Purdue University

This paper presents a theory of cardinal social welfare func-
tions for choices of certain consequences. This development
parallels the social welfare theory for choices among risky
consequences based on the assumption of social choice preferences
consistent with the von Neumann-Morgenstern axioms. In the case
of certainty, however, this axiomatic base is replaced with one
from the theory of difference measurement. The result allows expressions of "strength of preference" to be explicitly incorporated into decision rules for social choice. Included are
specifications of the independence conditions that imply additive,
multiplicative, and more complicated forms of cardinal social
welfare functions.


This paper presents a theory for cardinal preference aggrega-

tion rules for the case of certainty. We use the general term
"preference aggregation rule" (PAR) rather than either of the more
familiar terms "social welfare function" or "collective choice
rule," since our development is consistent with either interpreta-
tion. That is, this rule may be used by a benevolent dictator to
guide decisions that affect the welfare of a group of individuals,
it may be used by the group itself as a basis for participatory
group decision making, or it may represent individual j's moral (or social) preferences, where individual j is a member of the relevant group (Harsanyi [1975]).
This development is essentially an interpretation of recent
developments in multiattribute utility theory in the context of
the group decision problem. Dyer and Sarin [1977a, 1977b1 have
explored conditions that imply the existence of a cardinal multi-
attribute utility function in the case of certainty, which they
call a measurable multiattribute value function to distinguish it
from the cardinal utility theory based on the von Neumann-Morgen-
stern axioms involving risk. The mathematical underpinnings of this
extension are virtually identical, so we shall concentrate on the
implications of the conditions when they are interpreted in this new
context.

1.1 Related Work and Motivation

The related work on preference aggregation rules is sharply
divided between the case of certainty and the case of risk. In the
case of certainty, all of the previous work of which we are aware
has dealt with either ordinal rankings by the group members, or with
their ordinal value functions, and the PAR's provide only ordinal
rankings of the alternatives. In the case of risk, there are several
developments based on the cardinal utility functions of the indivi-
duals that also lead to a cardinal PAR. Since we are concerned with
the existence of a cardinal PAR in the case of certainty, we shall
briefly review both areas.
The majority of the work on PAR's in the case of certainty has
been based on the ordinal rankings of alternatives by the indivi-
duals. These rules have the advantage of requiring information
from individuals that is relatively easy to obtain. Unfortunately,
these rules may not even provide a transitive ranking of the alter-
natives, and suffer from the limitations elegantly identified in

Arrow's well-known possibility result [Arrow, 1951]. Reviews and
extensions of this area of research are provided by Fishburn [1973],
Plott [1976], and Sen [1970].
More relevant to our development is the work by Fleming [1952]
and Fishburn [1969]. They develop conditions for the existence of
an ordinal additive PAR that is based on ordinal value functions of
the individuals. These developments use conditions that are essen-
tially equivalent to preferential independence in multiattribute
utility theory in order to obtain the additive form. They are
limited, however, in two important ways. First, since both the
PAR and the individual value functions are ordinal, they cannot be
given a "strength of preference" interpretation. This interpreta-
tion may be very important in attempts to make interpersonal
utility comparisons. Second, the assessment of this ordinal PAR
would require a simultaneous conjoint scaling procedure to interlink
the value function scales of the individuals. The conceptual
difficulty of this procedure raises serious questions about the
practical value of these results.
In the case of risk, Harsanyi [1955] presented the case for a
cardinal PAR consistent with the Bayesian (von Neumann and Morgen-
stern) rationality axioms. One of the most compelling aspects of
his development is its simplicity. He assumed the following conditions:

H1: The personal preferences of all individuals satisfy the
Bayesian rationality axioms.
H2: The group preferences satisfy the Bayesian rationality axioms.
H3: If all individuals are indifferent between two alternatives
defined by probability distributions over the consequences,
then the group will be indifferent between them.

Taken together, these three conditions imply that the group PAR will
be a linear combination of the individual utility functions.
Critics of Harsanyi's formulation have attacked both H2 and
H3 as being unreasonable assumptions. Fishburn [1976] notes that
lotteries are an acceptable basis for group decision making only
in very restrictive, special cases. Both Diamond [1967] and Sen
[1970] have argued that the Bayesian rationality postulates may
not be appropriate for a group (H2), since they do not consider
the issue of equity of the outcomes. Harsanyi [1975] argues
against these objections to H2, but he does admit that an indi-
vidual might reject H3. By rejecting H3, however, an indivi-
dual indicates that the well-being of the members of the group is
not his overriding concern, " ... and that he is willing to sacrifice
their well-being, at least in some cases, to his own egalitarian
preferences when the two conflict with each other" [Harsanyi, 1975].
Assuming both Hl and H2, but weakening H3, Keeney and
Kirkwood [1975] provide conditions for the existence of nonlinear
PAR's. These developments are discussed by Keeney and Raiffa
[Ch. 10, 1976].
In order to apply these cardinal PAR's for risky consequences,
it is necessary to make interpersonal utility comparisons. Some
authors reject this concept, but we agree with Harsanyi [1975] that
such comparisons are, in fact, commonly made in practice. Unfor-
tunately, these PAR's involve risky utility functions for indivi-
duals (H1), and it is well-known that risky utility functions
cannot be legitimately interpreted as revealing strength of pre-
ference (e.g., see Ellsberg [1954]). This may complicate the
difficult problem of scaling and weighting the individual utility
functions. Further, the PAR itself cannot be given a strength of
preference interpretation for the group.
Finally, a group may occasionally face a situation in which a
decision must be made by (or for) them under conditions of cer-
tainty, and they may wish to consider strength of preference as
well as ordinal rankings as a basis for their decisions. Dyer
and Miles [1975] confronted this situation in a real-world trajec-
tory selection problem for the Mariner Jupiter/Saturn 1977 Project
(now renamed the Voyager). Since the only PAR's that provided a
cardinal measure were based on risk, they artificially introduced
lottery questions into their assessment procedures in order to
obtain some indication of strength of preference. They discuss
some of the problems created by the use of a PAR for risky choice
in the case of certainty.
Our development mitigates some of the problems mentioned
above, and fills an obvious gap in the literature. We present
cardinal additive and non-linear PAR's for the case of certainty.
These rules do allow a "strength of preference" interpretation for
both the measurable value functions of the individuals and for the
PAR's themselves.

1.2 Plan of the Paper

Section 2 presents the conditions for the existence of a
cardinal PAR that will be a linear combination of the individuals'
measurable value functions. As we shall see, there are some nota-
ble differences between our development and Harsanyi's [1955]
development for the risky case. In particular, we avoid an assump-
tion analogous to his condition H2 on group preferences. In
section 3, we obtain the multiplicative form of cardinal PAR's
for certainty. The conclusions are in section 4.


We let X be the set of all possible consequences that might
affect a group in a particular situation, and x ∈ X is a specific
consequence that may be vector-valued. Throughout this discussion,
we assume that the group contains n ≥ 3 individuals, and the
preferences of each individual are essential to the group. Our
first condition is analogous to Harsanyi's condition H1.

A1. (Individual rationality): The personal preferences of
all individuals satisfy the measurable value function axioms.

Axiom systems for measurable value functions include the topologi-

cal conditions by Debreu [1960] and the algebraic conditions by
Suppes and Zinnes [1963], with the latter being more open to evalua-
tion as a basis for rational behavior. Essentially these axiom
systems require comparisons of "preference differences" in order
to be operationalized.
Some may consider A1 to be the Achilles heel of this develop-
ment, since these axiom systems have never been generally
accepted as providing a normative definition of rational behavior,
as the Bayesian rationality axioms have been. Nevertheless, notions
of "preference differences" and "strength of preference" seem inex-
tricably intermingled with the notion of interpersonal comparisons
of utility, so it seems impossible to reject one without also
rejecting the other.
From A1, for each individual i in an n-person group, we
obtain v_i: X → V_i ⊆ Re, so that the relevant impacts of the
consequence x on the n-person group can be represented by the
n-vector v = (v_1(x), ..., v_n(x)). We shall have occasion to
partition {1, ..., n} into nonempty sets I and Ī, so that
V = Π_{i=1}^n V_i can be represented by V_I × V_Ī. Notice that if
I = {i}, then V_Ī = V_1 × ... × V_{i-1} × V_{i+1} × ... × V_n.
A2. (Weak order): The group preferences for consequences
are connected and transitive.

A2 is equivalent to the assumption of a group preference relation
≿ on X that is a weak order. Notice that the relation ≿ on
X implies the relation ≿' on V defined as v(x) ≿' v(y)
if and only if x ≿ y for any x, y ∈ X. Although our conditions
will be stated in terms of the relation ≿, it will often be
instructive to investigate their implications in terms of ≿'.

A3. (Preferential independence): If any subset I of
individuals is indifferent between two consequences, then
the group preference is determined only by the preferences
of the individuals not in subset I.

This assumption, which leads directly to additivity in the case of
certainty, is similar to those used by Fleming [1952] and Fishburn
[1969] in the development of an ordinal PAR.
We now come to a series of three additional conditions that
are primarily of technical interest, since they would seldom be
violated in practice.

A4. (Lower bound): Suppose all individuals in the group are
indifferent among all consequences except for individual
i. There exists a consequence x_i* such that all other
consequences are at least as desirable as x_i*, and we
denote v_i(x_i*) as v_i*.

For example, the outcome of "death" for individual i might be an
obvious x_i*, so that v_i(death) = v_i*. It is important to note,
however, that we have not yet assumed a positive ordinal relation-
ship between the preferences of individual i and the preferences
of the group.

A5. (Solvability): Suppose that individual i is the only
individual in the group not necessarily indifferent
between consequences x1 and x2, and the group does
not prefer a consequence x1 to another consequence
x2. Further, suppose there exists y1 ∈ X such that
x1 is not preferred to y1, and y1 is not preferred
to x2 by the group. Then there must exist y2 such
that individual i is the only member in the group not
necessarily indifferent between y2 and x1 (and x2),
but the group is indifferent between y1 and y2.
This must be true for any individual i = 1, ..., n.

If all individuals except i are indifferent between x1 and x2,
then v_Ī(x1) = v_Ī(x2) = v_Īx. Further, by A5, (v_i(x2), v_Īx) ≿'
v(y1) ≿' (v_i(x1), v_Īx). Then, there must exist y2 ∈ X such that
v_Ī(y2) = v_Īx and (v_i(y2), v_Īx) ~' v(y1). This restricted solva-
bility condition essentially requires V and X to be infinite.

A6. (Archimedean): If there exist consequences x1 and x2
so that individual i is the only member in the group
not indifferent between them, and the group is not indif-
ferent between them, then the preference difference of
the group can be divided into a finite number of arbitrar-
ily small, equal intervals.

Taken together, the interpretation of the conditions A2, A3,
A5, and A6 in terms of ≿' is a restatement of the axioms for
the existence of an additive conjoint structure on V [Krantz,
et al., 1971]. The following result is immediate.

Theorem 1. Conditions A1 - A3, A5, A6 hold if and only if there
exists W: X → Re such that for any x1, x2 ∈ X,
x1 ≿ x2 if and only if W(x1) ≥ W(x2), and there
exist W_i: X → Re, i = 1, ..., n, such that
W(x) = Σ_{i=1}^n W_i(x), and each W_i is unique up to a positive affine
transformation.

Proof: Conditions A1 - A3, A5, A6 imply that (V_1, ..., V_n, ≿')
is an n-component, additive conjoint structure. Therefore, there
exists W': V → Re such that for any v1, v2 ∈ V, v1 ≿' v2 if and
only if W'(v1) ≥ W'(v2), and there exist W'_i: V_i → Re,
i = 1, ..., n, such that W'(v) = Σ_{i=1}^n W'_i(v_i). We extend this
result to the domain X in the obvious manner, defining
W(x) = W'(v(x)) and W_i(x) = W'_i(v_i(x)), i = 1, ..., n, for all
x ∈ X.

Notice that W_i is some transformation of individual i's measur-
able value function. We would need one additional condition to
guarantee that each W_i is a positive monotonic transformation of
v_i (see Keeney and Raiffa, Sec. 10.2.2 [1976]).
Unfortunately W provides only an ordinal scale of measurement
and would be difficult to assess in practice. In order to obtain a
cardinal scale, we must introduce the concept of group preference
differences.
A7. (Difference consistency): Suppose individual i is the
only individual in the group not necessarily indifferent
between consequences x1 and x2, and the group does
not prefer x1 to x2. If all of the other members in
the group are also indifferent between x_i* and x1 (and
x2), then the preference difference of the group between
x1 and x_i* is not preferred to the preference difference
between x2 and x_i*. This must be true for any indivi-
dual i = 1, ..., n.

We need additional notation to define group preference differences
between consequences. Let X* = {x1, x2 | x1, x2 ∈ X and x1 ≿ x2}
be a nonempty subset of X × X, and let ≿* denote a binary relation
on X*. By A7, x2 ≿ x1 implies x2,x_i* ≿* x1,x_i*.
As before, we define ≿*' so that v(x1),v(x2) ≿*' v(x3),v(x4)
if and only if x1,x2 ≿* x3,x4, and ≿*' is defined on
V* = {v1, v2 | v1, v2 ∈ V and v1 ≿' v2}, a nonempty subset of
V × V. By A7, if v_Ī(x1) = v_Ī(x2) = v_Ī(x_i*) = v_Īx and
(v_i(x2), v_Īx) ≿' (v_i(x1), v_Īx), then we must also have
v_i(x2)v_Īx, v_i*v_Īx ≿*' v_i(x1)v_Īx, v_i*v_Īx. This notion seems funda-
mental to the concept of preference differences.

A8. (Difference independence): If all individuals are
indifferent between the preference differences between
two pairs of consequences, then the group will be
indifferent between them.

A8 is equivalent in many respects to H3, so it deserves special
attention. Suppose individual i is indifferent between x1 and
y1, and between x2 and y2, while all other members of the
group are indifferent between x1 and x2 and between y1 and
y2. We let v_Ī(x^j) = v_Īx, v_Ī(y^j) = v_Īy, v_i(x^j) = v_i(y^j) = v_i^j,
j = 1 or 2. Then, by A8, v_i^1 v_Īx ≿' v_i^2 v_Īx if and only if
v_i^1 v_Īx, v_i^2 v_Īx ≿*' v_i^1 v_Īy, v_i^2 v_Īy for any v_Īx, v_Īy ∈ V_Ī. In the
context of measurable multiattribute value theory, this condition
implies that V_i is difference independent of V_Ī (Dyer and Sarin
[1977b]).
An example may help to illustrate this important idea. Suppose
that n = 3, so v = (v_1, v_2, v_3), and the measurable value function
v_i for individual i is scaled from 0 to 1.0. Now, consider
these alternatives:

v(x1) = (0.5, 0.2, 0.3)
v(x2) = (0.3, 0.2, 0.3)
v(y1) = (0.5, 0.9, 0.7)
v(y2) = (0.3, 0.9, 0.7)

According to A8, the group must consider the preference difference
from x2 to x1 to be identical (indifferent) to the difference
from y2 to y1. Individuals concerned with "equity" may feel that
it is more important to increase v_1 from 0.3 to 0.5 when v_2 = 0.9
and v_3 = 0.7 than when v_2 = 0.2 and v_3 = 0.3. Rawls [1971], for
example, proposes that we measure the welfare level of society by the
"utility level" of the worst-off individual. According to this
theory, the society welfare level would be identical given x1 or
x2, but y1 would be preferred to y2.
Notice also, given A8, that if all individuals are indifferent
among all consequences except individual i, then the group and
individual i will have identical rankings of preference differences
if group and individual i's preferences are positively related.
Otherwise, their respective rankings are inversely related.
Finally, we are ready for our primary result.

Theorem 2. For n ≥ 3, conditions A1 - A8 hold if and only if
there exists W_c: X → Re such that the following are true:
(i) for any x1, x2 ∈ X, x1 ≿ x2 if and only if
W_c(x1) ≥ W_c(x2);
(ii) for any x1, x2, x3, x4 ∈ X such that x1 ≿ x2 and
x3 ≿ x4, then x1,x2 ≿* x3,x4 if and only if
W_c(x1) - W_c(x2) ≥ W_c(x3) - W_c(x4);
(iii) if W*_c is another function with the same
properties, then there exist constants a > 0 and
β such that W*_c = aW_c + β;
(iv) W_c(x) = Σ_{i=1}^n λ_i v_i(x).

Proof: As before, we state the proof in terms of the relation ≿'
defined on V. From Theorem 1, A1 - A3, A5, and A6 ensure there
exists an additive W'_c: V → Re such that W_c(x) = W'_c(v(x)) satis-
fies (i). Further, A4, A7 and the difference independence
implication of A8 ensure that W'_c is a cardinal scale of measure-
ment with a "preference difference" interpretation (see Dyer and
Sarin [1977a] for the detailed proof), so (ii) and (iii) are true
for W_c(x) = W'_c(v(x)). Also by A8, for all consequences x such
that v_Ī(x) is fixed, W_c(x) either maintains the same rankings
of preference differences as v_i(x), or the inverse ranking.
Therefore, there exist λ_i ≠ 0 and b such that W_c(x) = λ_i v_i(x) +
b(v_Ī(x)) for all i, satisfying (iv).

We emphasize that this development allows λ_i < 0, but
Harsanyi's additive representation in the case of risk also allows
negative scaling constants. The additional assumption that group
and individual preferences have an ordinal positive relationship
could be added to either theory to ensure that the scaling constants
are positive. These scaling constants, or "weights," are often
interpreted as reflecting an interpersonal comparison of the indivi-
duals' preferences.
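As a numerical illustration of the additive representation in Theorem 2, the sketch below (in Python) evaluates W_c(x) = Σ λ_i v_i(x) on the value vectors of the earlier three-person example; the weights are hypothetical scaling constants chosen only for this demonstration:

```python
# Additive cardinal PAR of Theorem 2: W_c(x) = sum_i lam_i * v_i(x).
# The weights lam are assumed interpersonal scaling constants; the value
# vectors are those of the three-person example in the text.

def additive_par(v, lam):
    return sum(l * vi for l, vi in zip(lam, v))

lam = [0.5, 0.3, 0.2]   # hypothetical weights, one per group member

x1, x2 = (0.5, 0.2, 0.3), (0.3, 0.2, 0.3)
y1, y2 = (0.5, 0.9, 0.7), (0.3, 0.9, 0.7)

diff_x = additive_par(x1, lam) - additive_par(x2, lam)
diff_y = additive_par(y1, lam) - additive_par(y2, lam)

# Any additive W_c rates the two preference differences as equal,
# whatever fixed values the other two members receive:
print(abs(diff_x - diff_y) < 1e-12)   # True
```

Both differences equal λ_1(0.5 - 0.3), which is the cardinal content of difference independence: the gain to individual 1 counts the same regardless of the others' fixed levels.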

Now, let us compare our conditions to Harsanyi's conditions for
the case of risk. We are immediately struck by the simplicity of his
three conditions compared to our list of eight. However, this may be
a bit misleading. First of all, his condition H1 and our A1 are
analogous, as are his H3 and our A8. Therefore, we are left with
H2 and A2 - A7, which must somehow be similar in spirit.
As we have noted, A2 - A7 (plus A8) imply that the group
preferences satisfy the measurable value function axioms. A8 plays
a dual role here, since it is both an independence condition similar
to H3 and it is necessary to ensure measurability of the PAR. Note
that H2 assumes the "Bayesian rationality axioms." If Harsanyi
had listed each one of these axioms instead of combining them into
the statement of H2, his development would have appeared more
similar to ours.
To illustrate this, suppose we state the following condition.

A9. (Group rationality): Group preferences satisfy the
measurable value function axioms.

This new condition leads to a result more transparently similar to
Harsanyi's.

Theorem 3. For n ≥ 3, conditions A1, A8 and A9 hold if and
only if there exists W_c: X → Re such that (i), (ii),
(iii), and (iv) of Theorem 2 are true.

Proof: By A9, there exists W_c: X → Re that is a measurable
value function, satisfying (i), (ii), and (iii) of Theorem 2. Further,
from the relationship between ≿ and ≿', there exists a measur-
able function W'_c: V → Re such that W_c(x) = W'_c(v(x)) for all
x ∈ X. Fishburn [1970, p. 93] shows that W'_c may be written in an
additive form if and only if there is a fixed element
(e_1, ..., e_n) ∈ V such that, whenever i ∈ {1, ..., n} and
v^3_j = v^1_j and v^4_j = v^2_j for all j ≠ i, then v^1,v^2 ≿*' v^3,v^4 iff
(v^1_i, e_j for j ≠ i), (v^2_i, e_j for j ≠ i) ≿*'
(v^3_i, e_j for j ≠ i), (v^4_i, e_j for j ≠ i).
It is easy to see that the difference independence implication of
A8 implies this condition. As before, A8 also ensures that for
a fixed value v_Ī ∈ V_Ī, W_c and v_i maintain the same or the inverse
rankings of preference differences, so W_c(x) = Σ_{i=1}^n λ_i v_i(x).

In practice, the choice between the conditions of Theorems 2 and 3
should rest on the following question: Is it easier to verify (or
assume) A3 and A7, or A9? Recall, of course, that some technical
conditions similar to A2 and A4 - A6 are implied by A9.


Given the previous development for the additive PAR, it is a
simple matter to identify the conditions for a multiplicative PAR
for certainty that is also a cardinal function. This is
accomplished by weakening difference independence to a condition
analogous to the concept of utility independence in multiattribute
utility theory.

A10. (Weak difference independence): If all individuals
except individual i are indifferent among four conse-
quences, then the group ranking of preference differences
between two pairs of consequences will be identical to
individual i's ranking. This must be true for any
individual i.

Dyer and Sarin [1977b] define two measurable value functions as
strategically equivalent if and only if they imply the same ranking
of preference differences for any two pairs of consequences. From
A10, if all other individuals except i are indifferent among four
consequences, then the conditional PAR and v_i are strategically
equivalent, which means that either can be written as a positive
affine transformation of the other.
In addition, A10 implies that V_i is weak difference indepen-
dent of V_Ī for i = 1, ..., n (Dyer and Sarin [1977b]). To see
this, suppose there exist consequences x^j, y^j ∈ X such that
v_Ī(x^j) = v_Īx, v_Ī(y^j) = v_Īy, and v_i(x^j) = v_i(y^j) = v_i^j,
j = 1, ..., 4. Further, suppose individual i prefers x1 to x2,
and the preference difference between x1 and x2 to
the difference between x3 and x4. Then v_i^1 v_Īx, v_i^2 v_Īx ≿*'
v_i^3 v_Īx, v_i^4 v_Īx, and v_i^1 v_Īy, v_i^2 v_Īy ≿*' v_i^3 v_Īy, v_i^4 v_Īy
for any v_Īy ∈ V_Ī.
We also need to modify A4 to allow for an upper bound as follows.

A4'. (Bounded): Suppose all individuals in the group are
indifferent among all consequences except for individual
i. There exist consequences x_i* and x_i^* such that
individual i considers all other consequences at least
as desirable as x_i* and none more desirable than x_i^*.

As we might suspect from multiattribute utility theory, these
conditions lead immediately to the multilinear form of a PAR.

Theorem 4. If conditions A1, A4', A9, and A10 hold, then there
exists a measurable PAR W_c: X → Re, and

W_c(x) = Σ_{i=1}^n λ_i v_i(x) + Σ_{i=1}^n Σ_{j>i} λ_ij v_i(x) v_j(x) + ...

where W_c and the v_i's are scaled from 0 to 1,
the λ's are scaling constants and 0 < λ_i < 1, for
all i.

Proof: The relationship between ≿ and ≿' ensures the existence
of the measurable function W'_c: V → Re. As discussed, A10 implies
that V_i is weak difference independent of V_Ī for i = 1, ..., n,
and weak difference independence in measurable multiattribute value
theory is equivalent to utility independence in risky multiattribute
utility theory. From Corollary 2 in Dyer and Sarin [1977b], there
exists the multilinear function

W'_c(v) = Σ_{i=1}^n λ_i w_i(v_i) + Σ_{i=1}^n Σ_{j>i} λ_ij w_i(v_i) w_j(v_j) + ...

Again, we can define W_c in terms of W'_c. We obtain our result by
noting that A10 also requires v_i(x) and W_c(x) to be strategi-
cally equivalent for fixed values of v_Ī, so by our choice of
scaling, w_i(v_i) = v_i.
Finally, we can obtain the multiplicative form in a similar
manner.

Theorem 5. For n ≥ 3, if conditions A1, A3, A4', A9, and A10
hold, then either

1 + λW_c(x) = Π_{i=1}^n [1 + λλ_i v_i(x)] if Σ_{i=1}^n λ_i ≠ 1, or

W_c(x) = Σ_{i=1}^n λ_i v_i(x) if Σ_{i=1}^n λ_i = 1,

where W_c and the v_i's are scaled from 0 to 1,
the λ's are scaling constants, 0 < λ_i < 1 for all
i, and λ > -1.

Proof: Condition A10 for individual i = 1, and A1, A3, A4',
and A9 are a restatement of conditions used by Dyer and Sarin
[1977b] to demonstrate that there exist w_i: V_i → Re such that

1 + λW'_c(v) = Π_{i=1}^n [1 + λλ_i w_i(v_i)] if Σ_{i=1}^n λ_i ≠ 1,

W'_c(v) = Σ_{i=1}^n λ_i w_i(v_i) if Σ_{i=1}^n λ_i = 1,

where w_i(v_i^*) = 1 and w_i(v_i*) = 0. Again, we extend the result
to X in the natural way. Our choice of scaling and A10 ensures
that w_i(v_i) = v_i.
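When Σλ_i ≠ 1, the 0-to-1 scaling pins down the master constant λ: evaluating the multiplicative representation at the consequence where every v_i = 1 (so W_c = 1) gives 1 + λ = Π_i (1 + λλ_i). The sketch below (in Python) solves this equation by bisection; the weights and all function names are assumptions of this illustration, not part of the theory:

```python
# Solving for the master constant lam of the multiplicative PAR.
# With W_c and the v_i scaled 0 to 1, setting every v_i = 1 yields
# 1 + lam = prod_i(1 + lam*k_i). The weights k are hypothetical;
# sum(k) = 0.9 < 1 implies lam > 0.

def residual(lam, k):
    prod = 1.0
    for ki in k:
        prod *= 1.0 + lam * ki
    return prod - (1.0 + lam)

def solve_master_constant(k, lo=1e-9, hi=100.0):
    # Bisection: residual < 0 near 0+ (since sum(k) < 1), > 0 for large lam.
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if residual(mid, k) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def multiplicative_par(v, k, lam):
    # W_c(x) recovered from 1 + lam*W_c(x) = prod_i(1 + lam*k_i*v_i(x)).
    prod = 1.0
    for ki, vi in zip(k, v):
        prod *= 1.0 + lam * ki * vi
    return (prod - 1.0) / lam

k = [0.4, 0.3, 0.2]            # assumed scaling constants
lam = solve_master_constant(k)
print(round(multiplicative_par([1.0, 1.0, 1.0], k, lam), 9))  # 1.0 by scaling
```

When Σk_i > 1 the root instead lies in (-1, 0), so the bracket would have to be chosen accordingly; the sum-to-one case reduces to the additive form of Theorem 2.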


By continuing to interpret the measurable value function theory
in the context of a PAR, we could obtain other extensions and other
forms of PAR's. However, these extensions are straightforward, so
we omit them here.
The key question, of course, concerns the operational signifi-
cance of these results. Plott [1976] contends that rankings of
preference differences have no general, unique meaning, and
Fishburn [1970] expresses similar concerns. If we accept these
arguments, however, we are left with PAR's in the case of certainty
that only allow rankings or ordinal value functions as expressions
of individual preference.
We agree with Harsanyi [1975] and with Shapley and Shubik
[1974] that the notions of preference intensity and interpersonal
comparisons of utility are commonly used in group decision making.
Therefore, we feel that a theory of PAR's for the case of certainty
that does allow a preference difference interpretation is an impor-
tant development with potential for practical applications.


Arrow, K. J. (1951; 2nd ed. 1963). Social Choice and Individual
Values, Wiley, New York.
Debreu, G. (1960). "Topological Methods in Cardinal Utility Theory,"
in K. J. Arrow, S. Karlin, and P. Suppes (eds.), Mathematical
Methods in the Social Sciences, 1959, Stanford University
Press, Stanford, California.
Diamond, P.A. (1967). "Cardinal Welfare, Individualistic Ethics,
and Interpersonal Comparisons of Utility: Comment," Journal of
Political Economy, 75, 765-766.
Dyer, J., and R. Miles (1975). "Trajectory Selection for the Mariner
Jupiter-Saturn 1977 Project," Operations Research, 24, 220-244.
Dyer, J. and R. Sarin (1977a). "An Axiomatization of Cardinal
Additive Conjoint Measurement Theory," Working Paper No. 265,
Western Management Science Institute, UCLA, Los Angeles,
Dyer, J., and R. Sarin (1977b). "Measurable Multiattribute Value
Functions," Discussion Paper No. 66, Management Science Study
Center, Graduate School of Management, UCLA, Los Angeles,
Ellsberg, D. (1954). "Classic and Current Notions of 'Measurable
Utility,'" Economic Journal, 64, 528-556.
Fishburn, P. (1969). "Preferences, Summations, and Social Welfare
Functions," Management Science, 16, 179-186.
Fishburn, P. (1970). Utility Theory for Decision Making, Wiley,
New York.
Fishburn, P. (1973). The Theory of Social Choice, Princeton Univer-
sity Press, Princeton, New Jersey.
Fishburn, P. (1976). "Acceptable Social Choice Lotteries," The
Pennsylvania State University, University Park, Pennsylvania.
Fleming, M. (1952). "A Cardinal Concept of Welfare," Quarterly
Journal of Economics, 66, 366-384.
Harsanyi, J. (1955). "Cardinal Welfare, Individualistic Ethics, and
Interpersonal Comparisons of Utility Theory," Journal of
Political Economy, 63, 309-321.
Harsanyi, J. (1975). "Nonlinear Social Welfare Functions," Theory
and Decision, 6, 311-322.
Keeney, R., and C. Kirkwood (1975). "Group Decision Making Using
Cardinal Social Welfare Functions," Management Science, 22,
Keeney, R., and H. Raiffa (1976). Decisions with Multiple Objec-
tives, Wiley, New York.

Plott, C. (1976). "Axiomatic Social Choice Theory: An Overview and

Interpretation," Social Sciences Working Paper No. 116,
California Institute of Technology, Pasadena, California.
Rawls, J. (1971). A Theory of Justice, Harvard University Press,
Cambridge, Massachusetts.
Sen, A. (1970). Collective Choice and Social Welfare, Holden-Day,
San Francisco, California.
Shapley, L., and M. Shubik (1974). Game Theory in Economics --
Chapter 4: Preferences and Utility, R-90 /4-NSF, The Rand
Corporation, Santa Monica, California.
Suppes, P., and J. Zinnes (1963). "Basic Measurement Theory,"
in R. Luce, R. Bush, and E. Galanter (eds.), Handbook of
Mathematical Psychology, Vol. 1, Wiley, New York.

Hillel J. Einhorn* and William McCoach**

*Graduate School of Business, University of Chicago

**General Cable Corporation, Greenwich, Connecticut

The general problem of how to determine the worth or utility
of alternatives that vary on many dimensions is of great practical
importance. Although the number and types of situations that require
such evaluations are large, the most usual way of performing such
tasks has been unaided "intuition" (or, clinical judgment); i.e., the
decision maker somehow does a mental trade-off analysis between the
various attributes and alternatives in order to come to an evaluation/
decision. The cognitive difficulties of performing such a feat are
formidable. For example, consider a situation with ten alternatives,
each varying on six attributes. The intuitive decision maker has the
task of locating ten alternatives in a six dimensional indifference
space and picking the one with the highest utility. In such complex
situations, an accumulating· body of psychological research on the
decision process has shown that people will reduce task complexity by
using various heuristics (e.g., Tversky, 1969; 1972; Payne, 1976).
While these heuristics have the advantage of allowing a decision maker
to perform a complex task, they may lead to non-optimal behavior
(e.g., consistent intransitivities). Furthermore, the literature on
clinical judgment (Meehl, 1954; Sawyer, 1966) has also shown that
experts have great difficulty in intuitively combining information in
appropriate ways. Therefore, there is a need for an analytic method
for determining the worth of multiattributed alternatives.
Decision theorists have attacked the above problem by formulat-
ing methods that attempt to assess the worth of objects/alternatives

via mathematical/statistical procedures (Edwards, 1971; Huber, 1974--
for a review; Keeney & Raiffa, 1976). The logic behind all of these
approaches involves the decomposition of an alternative into its
constituent attributes; the determination of the utility of each
attribute; and the mechanical combining (via some mathematical
function) of the utilities into a composite or total utility. The
typical goal of such an analysis is to get a rank order of alterna-
tives based on total utility. This order can then be used for
evaluation and/or decision making purposes. Applications of the
general method appear in many diverse areas; e.g., developing water
quality indices (O'Connor, 1973), land management (Gardiner &
Edwards, 1975), and airport location (Keeney & Raiffa, 1976). Fur-
thermore, it is interesting to note that the development of a
"composite criterion" from sub-criteria, discussed by industrial
psychologists with respect to job performance (Blum & Naylor, 1968),
is formally identical to the multiattribute utility task (although
it is rarely recognized as such).
Our intent in this paper is to propose a simple multiattribute
utility procedure (SMAUP) when certain conditions are met. We feel
that the use of any method is negatively related to its complexity.
Therefore, while our method may lack elegance and sacrifice some
rigor, it is simple to implement. However, we attempt to provide a
rationale for our procedures by looking at the utility assumptions
it makes. This is useful because it provides for a clearer theoreti-
cal foundation underlying the technique as well as showing the condi-
tions where it should or should not be used.
In order to make the discussion less abstract, we consider the
use of SMAUP within the context of the evaluation of overall perfor-
mance of players in the National Basketball Association (NBA). Infor-
mation about the quality of overall performance could be of great
value to general managers who have to determine salary, whether to
keep or trade a player, etc., based on last year's performance. While
the job of a general manager also involves the "matching" of different
types of players to make for an effective team, this latter problem
is of sufficient complexity to merit a separate paper. Therefore,
our concern is in obtaining a rank ordering of players based on their
total utility as determined by SMAUP. Finally, we compare our
results with an independent measure of overall performance (viz., the
players who are picked for the NBA all-star team) and discuss the
advantages of SMAUP vs. clinical judgment.

We begin by examining the various steps in the SMAUP procedure
while providing rationales as we go along.
(1) Obtaining and measuring attributes. The first step is to
decompose the objects/alternatives into their relevant attributes.
This requires that one have substantive knowledge regarding the
phenomenon of interest. The use of "experts" may be useful here
since they are most likely to know what is "relevant" and what isn't
(we discuss the problem of how many attributes to have later in the
paper). For our example of player performance, the following eight
attributes were used: field goal percentage (FG%), free throw per-
centage (FT%), rebounds (REB), assists (ASSIS), steals (STS), per-
sonal fouls (PF), points per minute played (P/M), and blocked shots
(BS). These attributes cover both offensive and defensive aspects of
performance, although they are certainly not exhaustive. Once the
attributes are obtained, it is necessary to measure each object/
alternative on each of the attributes. In our example, the NBA keeps
records of every player's performance on a large number of attributes.
Therefore, for every object (player) we have a measure of that
player's "score" on each of the eight attributes. Note that these
attributes can be objectively measured (and contain very little
measurement error). We discuss the problems that result when the
attributes must be subjectively measured in the discussion section.
(2) Determining the utility of each attribute. We introduce
the following notation: Let

x_ij = "score" or amount that the ith object has of the jth
       attribute (i = 1, 2, ..., N; j = 1, 2, ..., k).

Each object can now be represented by a k-component vector,
x_i = (x_i1, x_i2, ..., x_ij, ..., x_ik). We need to know how much
utility each attribute score adds to the total. Therefore, we need
the utility function,

u(x_j) = f(x_j)                                              (1)

i.e., how the i scores on attribute j relate to utility.

There are a number of methods that have been proposed for finding
u(X j ) (cf. Yntema and Torgerson, 1961), but all involve the use of
experts to make utility judgments of some kind (one method asks
experts to draw the utility function). At this point we introduce
our first simplification under the following conditions. Determine
if each attribute is monotonically related to overall utility; i.e.,
does overall utility only increase (or decrease) when the scores
within an attribute increase or decrease. For our attributes this
certainly seems to be the case--if something is "good," more is
better; if something is "bad," less is better. The next step is to
redefine or rescale the attributes so that they are monotonically in-
creasing in utility. For example, "personal fouls" would seem to be
monotonically decreasing in utility. However, if we redefine this
as "lack of personal fouls" (LPF) , and multiply each score by -1, it
is increasing in utility.
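This sign-flip can be sketched in a couple of lines (the foul totals below are invented for illustration):

```python
# Rescale an attribute that is monotonically decreasing in utility
# (personal fouls) into one that is monotonically increasing
# ("lack of personal fouls") by multiplying each score by -1.

personal_fouls = [250, 180, 310]       # hypothetical season totals
lpf = [-x for x in personal_fouls]     # larger is now better

# The player with the fewest fouls now has the highest LPF score.
best = max(range(len(lpf)), key=lambda i: lpf[i])
```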
(a) Our next step is again a simplification. Once the
attributes are monotonically increasing in utility, assume that the
utility function is linear, i.e.,

u(x_ij) = a_j + b_j x_ij       (b_j > 0)                     (2)

The hypothesis underlying (2) is that the linear function will ap-
proximate any monotonic function sufficiently well for practical
purposes. The reason for believing this is that the plausible range
of values for x_j will be considerably smaller than the possible
range of xj . Therefore, the discrepancies between a nonlinear func-
tion (but monotonic) and a linear function will be smaller in the
middle range of the utility function. For example, consider "field
goal percentage." While it is theoretically possible for this
attribute to vary from 0 to 100%, the plausible range is considerably
less (25% to 65%). Therefore, even if the utility function is highly
concave or convex, our assumption is that the middle range can be
approximated by using a linear form.

(b) Equation (2) has two parameters that need to be estimated
(the slope, b_j, and the intercept, a_j). However, our purpose is
to combine u(x_ij) over the j attributes. We will assume that
total utility for the ith object (U_i) is an additive function of
u(x_ij), i.e.,

U_i = Σ_j u(x_ij)                                            (3)
We discuss this assumption in greater detail in step (3). The
reason for introducing it now is to show how it simplifies the esti-
mation of the parameters in (2). By substituting (2) in (3), we get
the following:
U_i = Σ_{j=1}^{k} (b_j x_ij + a_j) = Σ_j b_j x_ij + Σ_j a_j          (4)

Note that Σ_j a_j is a constant for any ith object. Therefore, if we
desire a rank order of the i objects on the basis of total utility, it
is not necessary to know the a_j's, i.e.,

U_i = Σ_j b_j x_ij + C,    where C = constant.               (5)

The major problem in estimating the b_j's is that expert
judgment must be used (since there is no dependent variable, regres-
sion procedures are useless). For example, consider that an expert
(or group of experts) was asked to rank a set of attributes in terms
of their "relative importance" to total utility. Judges can do this
easily and with at least a moderate degree of agreement (cf. Ecken-
rode, 1965; evidence to be presented later). The interesting and
important question concerns what these judgments mean in terms of
utility theory. Consider that we judge "rebounding" as more impor-
tant than "assists" in evaluating a center's performance. This could
mean several things: (1) Attributes can only be important if they
have the ability to discriminate between alternatives. This means
that there must be variance in the attribute (the greater the vari-
ance, the greater the ability to discriminate); (2) The change in
total utility when x_j changes must be large (relative to
changes in the other k - 1 attributes). It would seem that both
(1) and (2) are involved in judgments of "relative importance."
We assume that judgments of relative importance are made by
the judge by first adjusting the attributes for the units they are
measured in. Let,

z_j = x_j/s_j = amount of the jth attribute measured in
      standard deviation units,                              (6)

s_j = standard deviation of the jth attribute (over the i
      objects).

Equation (6) is to be taken as an attempt to quantify the judge's
need to deal with the attributes in comparable form. We now turn to
point (2) above. We define the relative importance of the jth at-
tribute as the change in total utility/change in z_j, relative to
the change in total utility for all z_j. Let,

ΔU = change in total utility for any ith object,

Δz_j = change in the jth attribute when measured in standard
       deviation units.

Then, the relative importance of the jth attribute is defined as

β_j = (ΔU/Δz_j) / Σ_j (ΔU/Δz_j)                              (7)

where β_j = relative importance of the jth attribute.
Note that the denominator in (7) is constant for any jth attribute,
i.e.,

Σ_j (ΔU/Δz_j) = K,    K = constant.

Therefore,

β_j = ΔU/(Δz_j K)                                            (8)

We want to express β_j in terms of b_j so that we can make use of
equation (5). From (3) we know that

ΔU = Δu(x_j)                                                 (9)

i.e., for any change in u(x_j) we get exactly the same change in U.
Therefore, we can substitute Δu(x_j) for ΔU in (8),

β_j = Δu(x_j)/(Δz_j K) = Δu(x_j)/{[Δ(x_j/s_j)] K}            (10)
We know that Δu(x_j) = b_j(x_jt1 - x_jt0) (where x_jt denotes adjacent
values of x_j). Therefore,

β_j = [b_j(x_jt1 - x_jt0)] / {[(x_jt1 - x_jt0)/s_j] K} = b_j s_j / K        (11)

Equation (11) is interesting because it says that relative
importance is a function of both the slope of the utility function,
b_j, and the discriminability of the attribute, s_j. Note that if
s_j = 0, then β_j = 0; i.e., if an attribute has no variance, its
relative weight is zero (indeed, such an attribute cannot affect the
rank order of U_i). Also note that,

Σ_j β_j = 1,

so that the idea of the relative importance of an attribute is easily
interpreted as a percentage of all importance weights.
We can now solve for b_j by using (11),

b_j = β_j K / s_j                                            (12)

Substituting this into (5) yields,

U_i = K [Σ_j (β_j x_ij)/s_j] + C                             (13)

Since we are interested in a rank order of the U_i, we need not concern
ourselves with the constants K and C. Equation (13) is our basic
formula for computing U_i.
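Equation (13) can be sketched as follows, dropping the constants K and C since they do not affect the rank order. The player scores and weights below are invented; only the form of the computation follows the paper:

```python
import statistics

# U_i ∝ Σ_j β_j * x_ij / s_j  -- equation (13) without K and C.
def total_utility(scores, beta, s):
    return sum(b * x / sd for b, x, sd in zip(beta, scores, s))

# Two hypothetical players measured on three attributes
# (e.g., FG%, FT%, rebounds); all values are invented.
players = [[0.48, 0.80, 300.0],
           [0.52, 0.70, 450.0]]
beta = [0.4, 0.2, 0.4]   # assumed importance weights, summing to 1

# s_j = standard deviation of each attribute over the players.
s = [statistics.pstdev(col) for col in zip(*players)]

U = [total_utility(p, beta, s) for p in players]
order = sorted(range(len(U)), key=lambda i: -U[i])   # rank order
```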
We will investigate two methods for obtaining β_j values, rank-
ing and rating (cf. Eckenrode, 1965). While other methods for obtain-
ing weights have been proposed (e.g., Gardiner & Edwards, 1975; Keeney
& Raiffa, 1976), they are relatively more complicated. Furthermore,
in line with our simplification emphasis, we will investigate the use
of equal weights (Dawes and Corrigan, 1974; Einhorn and Hogarth, 1975;
Wainer, 1976). This latter method, of course, does not need the use
of experts.
(3) Combining the u(x ij ). After steps (1) and (2) are com-
pleted, we advocate the use of equation (13). The use of the additive
combining rule has been shown to be a very good approximation to non-
linear rules when the attributes are conditionally monotone with
utility (cf. Yntema and Torgerson, 1961; Dawes and Corrigan, 1974).
This condition is described by Yntema and Torgerson (1961) in discuss-
ing the total "worth" (W) of an object:
. . . there should be a great many practical situations
in which the levels of the factors can be numbered in
such a way that W will be a monotone-increasing func-
tion of any of its subscripts at every value of its other
subscripts. (our emphasis)

Given our assumption about monotonic utility for each attribute, condi-
tional monotonicity says that "more" of an attribute is always better
than less, regardless of the levels of the other attributes. Condi-
tional monotonicity can be violated if there is an interaction (in
ANOVA terms) that involves lines crossing or lines whose slopes have
different signs. In many practical situations it is difficult to
think of examples where more is better than less for attribute x_1
at one level of x_2, but less is better than more at a different
level of x_2. For example, consider the attributes of price and
quality. One should prefer high quality to low quality at all levels
of price (or, low price to high price at all levels of quality). The
assumption of conditional monotonicity does not guarantee that U_i
will be an additive function of u(x_j). A formal proof of the addi-
tivity of component utilities exists iff the attributes are mutually
preferentially independent (see, Keeney & Raiffa, 1976, theorem 3.6).
While our assumptions do not guarantee an additive utility function,
we feel that they provide a good approximation for most practical
situations.
Methodology. Our intent is to use the steps outlined above to
calculate U_i for players in the NBA for the seasons 1973-1974 and
1974-1975.
Procedure. At the end of each season, the NBA publishes
"statistics" for every player in the league. The eight attributes we
consider have already been mentioned (FG%, FT%, REB, ASSIS, STS, LPF,
BS, P/M). In order to evaluate player performance, it is necessary
for players to have played a sufficient amount of total time. There-
fore, we consider players who played a minimum of 1,000 minutes (the
need for doing this can be seen by considering that the range of
"minutes played" varies from 2 to over 3,000). Our next step was to
examine the roster of each team (17 teams in '73-'74; 18 teams in
'74-'75) and label each player as being either a guard, forward or
center. Where players played more than one position (usually forward-
guard), the position that they usually played was used. The reason
for considering three different positions is that it is very difficult
to compare performance across positions since the relative importance
of the various attributes is different for the different positions
(e.g., "rebounding" would seem much more important for a center than a
guard). Therefore, we seek a rank order of all players in the league
(across teams), within a particular position, for the two seasons. It
should be mentioned that the concern for dealing with each position is
mirrored by the fact that the NBA all-star team is always made up of
2 guards, 2 forwards, and a center (which is the usual makeup of any
team). We will have more to say about the all-star team later in the
paper.
Our next step was as follows: For each season, three matrices
of data (one for each position) were set up. The rows of each matrix
contained "players" and the columns were the eight attributes. The
entries in each matrix were the ith player's score on the jth
attribute (x ij ). The numbers of players for each position were:
guards--53 and 59 for '73-'74 and '74-'75, respectively; forwards--61
and 63 for the two seasons; centers--19 and 31.
Once the data were in matrix form, the standard deviation of
each column was calculated (s_j) and the entries (x_ij) were divided
by their corresponding s_j, i.e., x_ij/s_j. This procedure corresponds
to step (2) discussed earlier.
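The standardization step might look like this in code (the 3 × 3 matrix is invented; the study's matrices held 19 to 63 players by eight attributes):

```python
import statistics

# Divide each column (attribute) of the player-by-attribute matrix
# by its standard deviation, giving the x_ij / s_j entries.
X = [[0.48, 0.80, 300.0],
     [0.52, 0.70, 450.0],
     [0.44, 0.75, 600.0]]

s = [statistics.pstdev(col) for col in zip(*X)]
Z = [[x / sj for x, sj in zip(row, s)] for row in X]

# Every standardized column now has unit standard deviation, so
# attributes measured on very different scales become comparable.
```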
Our next problem was in obtaining estimates of β_j, the rela-
tive importance of the j attributes. Two methods using expert judg-
ment were used (as well as one method using equal weights). Fifty-five
questionnaires were mailed to newspaper sportswriters and television
sports broadcasters. We received 18 usable questionnaires (33%). The
respondents came from such places as the New York Times, Sports Illus-
trated, and the Chicago Tribune. The "experts" were asked to consider
each position and then: (1) to rank order the eight attributes in
terms of how important each is to overall performance (8 was most
important, etc.) and, (2) rate, on a 10 point scale, how important
each attribute is to overall performance. Previous research (Ecken-
rode, 1965) has suggested very little difference in the weights that
are obtained by these two methods. The weight for the jth attribute
was calculated by the following formula:

β_j = Σ_{p=1}^{r} R_pj / Σ_{p=1}^{r} Σ_{j=1}^{k} R_pj        (14)

where

R_pj = rating or ranking of the jth attribute by the pth judge.

We divide by Σ_p Σ_j R_pj in order to normalize the weights (i.e.,
make the weights sum to 1.0). The third method we use is to ignore
expert judgment and use equal weights (β_j = 1/k for all j). The
rationale for using equal weights is given in Einhorn and Hogarth
(1975). Briefly, the power of equal weighting depends on having
positive weights (i.e., β_j ≥ 0). Therefore, one must know the sign
of the weight so that attributes with negative signs (monotonically
decreasing in utility) can be rescaled to be monotonically increasing
(and have a positive sign). This accounts for why "personal fouls"
was rescaled to be "lack of personal fouls."
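Equation (14) and the equal-weights alternative can be sketched as follows (the two hypothetical judges' rankings are invented; the study used 18 judges):

```python
# β_j = Σ_p R_pj / Σ_p Σ_j R_pj  -- equation (14).
# R[p][j] = ranking of attribute j by judge p (8 = most important).
R = [[8, 5, 2, 7, 6, 4, 3, 1],    # hypothetical judge 1
     [7, 6, 1, 8, 5, 3, 4, 2]]    # hypothetical judge 2

k = len(R[0])
col_sums = [sum(row[j] for row in R) for j in range(k)]
grand_sum = sum(col_sums)
beta = [c / grand_sum for c in col_sums]   # normalized: sums to 1.0

# The equal-weights method ignores the judges entirely: β_j = 1/k.
equal = [1 / k] * k
```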
Once the weights are found, they are used to multiply the
x_ij/s_j variables and the products are summed as indicated in equation
(13). Therefore, the results of the analysis are three orderings of
players (within a position) for each of the two seasons. The three
orderings reflect the three methods of obtaining the weights (β_j).
It should be noted that the rating and ranking procedure is also
useful in determining if there is inter-judge agreement with respect
to judgments of the relative importance of the attributes.

Our first results concern the weights for the attributes ob-
tained by the rating and ranking methods for the three positions.
This is shown in Table 1.
Table 1

                    Guards          Forwards        Centers
Attribute         Rate    Rank    Rate    Rank    Rate    Rank

FG%               .179    .198    .171    .201    .151    .157
FT%               .145    .134    .135    .108    .112    .077
Reb               .067    .062    .164    .193    .200    .221
Assists           .181    .198    .092    .074    .106    .111
Steals            .157    .157    .079    .069    .043    .049
LP Fouls          .109    .088    .104    .093    .111    .102
Points/Min.       .112    .125    .142    .168    .105    .111
Blocked Shots     .049    .037    .112    .094    .170    .171

Inter-judge*      r =     W =     r =     W =     r =     W =
reliability       .565    .752    .403    .622    .591    .657
(18 judges)       p <     p <     p <     p <     p <     p <
                  .001    .001    .001    .001    .001    .001

*Note: r is obtained via a one-way repeated ANOVA.
       W is Kendall's coefficient of concordance for a set of ranks.

In comparing the rating and ranking methods, it can be seen that the
two methods yield very similar results. However, the importance
weights do differ across positions, especially for guards and centers
(e.g., REB). The inter-judge agreement is shown at the bottom of the
table. For both the rating and ranking methods, there is a moderate
degree of agreement (all indices are significant at p < .001).

Our major results deal with the rank order of players within
each position on the basis of U_i (total utility). In order to
illustrate, Table 2 shows a partial ordering (due to the fact that
there are many players at each position) for the 1974-1975 season
(using the three weighting methods).
Examination of Table 2 leads us to the central issue concerning
SMAUP. How "good" are these results? Clearly, if we had some indepen-
dent "true" measure of player quality, we could evaluate our results
in comparison to this. However, the whole purpose of SMAUP is to pro-
vide us with an overall measure because the "true" measure doesn't
exist. One "solution" (which will be discussed in greater detail in
the next section) is to elicit expert judgment about quality and see
if the SMAUP solution is sufficiently similar. Fortunately, such
expert judgment is at least partially available. At the end of every
season, the NBA asks sportswriters and broadcasters to pick an
"all-star" team, i.e., the best players in the league at their respec-
tive positions. The procedure is as follows: within each city that
has a team in the NBA, media people are asked to pick a first and
second team; i.e., they are to choose 2 guards, 2 forwards, and 1 cen-
ter for the first and second teams. Each player is given points for
making the first or second team. The total points for a player in a
city is calculated as a proportion of that city's vote--which is 1
point per city. A player's total points are the result of summing the
points he has received from all the cities voting. This results in a
rank order of all players regardless of position. The first team is
chosen by picking the 2 g~ards, 2 forwards, and 1 center with the
highest point total. The second team is similarly chosen after play-
ers are picked for the first team. It is to be noted that a player
with a higher point total than another may not make the all-star team;
e.g., the third best center may have a higher point total than the
fourth best guard, yet not make the first or second team.
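The selection rule just described can be sketched as follows; all names and point totals are invented. The example also reproduces the quirk noted above: a player can be outscored by someone at a filled position and still miss the team.

```python
# Pick the 2 guards (G), 2 forwards (F), and 1 center (C) with the
# highest point totals; positions are filled independently.
ballots = [("A", "G", 15.3), ("B", "C", 14.5), ("C", "F", 12.2),
           ("D", "C", 11.5), ("E", "G", 11.0), ("F", "F", 9.9)]

def pick_team(players, quota):
    need = dict(quota)          # copy so the caller's quota is untouched
    team = []
    for name, pos, _pts in sorted(players, key=lambda t: -t[2]):
        if need.get(pos, 0) > 0:
            team.append(name)
            need[pos] -= 1
    return team

first_team = pick_team(ballots, {"G": 2, "F": 2, "C": 1})
# "D" (a center with 11.5 points) is skipped because the center slot
# is already filled, even though "E" makes the team with 11.0 points.
```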
Table 2

Guards (N = 59)                    Forwards (N = 63)                       Centers (N = 31)

Rank        Rate        Equal      Rank         Rate         Equal         Rank      Rate      Equal

Frazier*    Frazier*    Chenier*   Barry*       Barry*       Barry*        McAdoo*   McAdoo*   McAdoo*
Archibald*  Archibald*  Frazier*   Hayes*       Hayes*       Hayes*        Jabbar    Jabbar    Jabbar
F. Brown    F. Brown    F. Brown   Wicks        Wicks        Wicks         Lanier    Lanier    Lanier
Smith       Chenier*    Archibald* Tomjanovich  Haywood*     Haywood       Lacey     Lacey     Lacey
Chenier*    Smith       Maravich   Haywood*     Tomjanovich  Havlicek*     Unseld    Unseld    Unseld
Steele      Steele      Monroe     Nelson       Havlicek*    Tomjanovich   Cowens*   Cowens*   Cowens*
Beard       White*      Bing       Hairston     Nelson       Cunningham    Smith     Smith     Walton
Clark       Clark       White*     Perry        Perry        Ratleff       Ray       Walton    Ray
Murphy      Bing        Smith      Walker       Hairston     Perry         Walton    Ray       Smith
White*      Beard       Snyder     Havlicek*    Walker       Dandridge     Chones    Chones    Awtrey

Note: * player made actual first or second All-Star team.

The way in which the all-star team is chosen, particularly the
manner in which judges must rely on their memory of players'
performances, makes this variable considerably less than a "true"
measure of quality. However, it does provide us with a rough guide
for comparing our SMAUP results. Furthermore, because the NBA will
for comparing our SMAUP results. Furthermore, because the NBA will
not release the full ranking of players using the all-star balloting,
the SMAUP procedure can be used to evaluate players that have not made
the team.
In the first column of Table 3, we show the players that were
selected to the first and second all-star teams for the two seasons in
question.

Table 3

'73-'74 season              Predicted teams
Actual                   Rank         Rate         Equal

First team:
Frazier (15.291)-G
Jabbar (14.541)-C        Jabbar*
Havlicek (12.214)-F      Tomjanovich  Tomjanovich  Tomjanovich
Goodrich (11.028)-G      Murphy       Murphy       Chenier
Barry (9.973)-F          Hayes*       Barry*       Barry*

Second team:
Hayes (9.407)-F
Haywood (8.206)-F
McAdoo (6.593)-C         McAdoo*      McAdoo*      Lanier
Bing (5.975)-G           Brown        Brown        Mix
Van Lier (3.413)-G       Allen        Allen        Murphy

'74-'75 season
Actual                   Rank         Rate         Equal

First team:
Barry (16.979)-F
Archibald (15.089)-G     Chenier*
Hayes (14.292)-F         Hayes*       Hayes*       Hayes*
Frazier (12.931)-G       Frazier*     Frazier*     Frazier*
McAdoo (12.648)-F        McAdoo*      McAdoo*      McAdoo*

Second team:
Chenier (9.868)-G
Havlicek (9.528)-F       Wicks
Cowens (8.421)-C         Jabbar       Jabbar       Jabbar
Haywood (4.927)-F        Tomjanovich  Haywood*     Haywood*
White (3.997)-G          Smith        Brown        Brown

Note: The number in parentheses is the total points for a player in
the all-star balloting. F = forward, G = guard, C = center.
* means that predicted player made either the first or second team.

The number in parentheses beside each player is the total
points that player received in the balloting (17 is the maximum
possible in '73-'74; 18 in '74-'75). Furthermore, the player's posi-
tion is indicated by a G, F, or C for guard, forward, and center, re-
spectively. The columns labeled "rank," "rate," and "equal" show the
predicted first and second teams using the SMAUP model with the β_j's
obtained by each of the above methods. The predicted teams were de-
termined by picking (within each season) the highest U_i values
within each position; i.e., in Tables 2 and 3, pick the first 4 guards,
4 forwards, and 2 centers. The first 2 guards, 2 forwards, and 1 cen-
ter make up our predicted first team, and the remainder the second
team. This is done for each weighting method. We have arranged the
predicted team members so that comparisons to the actual members are
easier.
In order to aid in evaluating these results, we have put an
asterisk next to a predicted player that made either the first or
second team. If we define a "success" as any predicted player who
makes either team, one can see that the SMAUP approach does quite well
(11/20 successes for ranking; 13/20 for rating; 12/20 for equal). It
is interesting to note that the models all do better in predicting the
first as opposed to the second team. These results can be compared to
what could be expected on .the basis of chance; i.e., what is the prob-
ability of getting at least r successes when drawing samples of size
N from each of the three categories? Our results are very unlikely
on the basis of chance (p < .001). Although our results are well
beyond chance level, one may object that in order to test how well our
model predicts, one needs alternative "naive" models. This argument
is certainly valid if our sole purpose was to try to predict the all-
star team. However, our purpose is not prediction per se--it is the
ranking of players according to total utility (U_i). Too much emphasis
on the all-star balloting gives this judgment (it is essential to re-
member that this is a judgment) a greater degree of validity than it
perhaps deserves. We address ourselves to this point in the discussion
section to follow.
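The chance baseline referred to above can be sketched as a hypergeometric tail probability. The position size (59 guards in '74-'75) follows the paper; treating exactly 4 of them as actual all-star guards is our assumption for illustration:

```python
from math import comb

# P(at least r of the n randomly chosen players at a position are
# among the m actual all-stars at that position, out of N players).
def p_at_least(N, m, n, r):
    return sum(comb(m, x) * comb(N - m, n - x)
               for x in range(r, min(m, n) + 1)) / comb(N, n)

# Example: 4 all-star guards among 59; a method picks 4 guards at
# random. Chance of 3 or more correct picks:
p = p_at_least(59, 4, 4, 3)
```

Even this generous baseline (3 of 4 hits at a single position) is far below the .001 level, consistent with the chance argument in the text.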
Our final results deal with a comparison of the three weighting
schemes and are shown in Table 4.
Table 4

Ranking Method
                         Actual
Predicted         First   Second   None
  First             7       1       2     10
  Second            1       2       7     10
                    8       3       9     20

Rating Method
                         Actual
Predicted         First   Second   None
  First             8       0       2     10
  Second            0       5       5     10
                    8       5       7     20

Equal Method
                         Actual
Predicted         First   Second   None
  First             7       1       2     10
  Second            1       3       6     10
                    8       4       8     20

For each weighting scheme we cross-tabulated the prediction of
whether a player would make the first or second team with the actual
standing of being on the first, second, or no team. For the three
weighting methods, the number of exact hits was 9, 13, and 10 for
ranking, rating, and equal weighting, respectively. However, for the
ranking and equal weighting methods, two "near hits" were also ob-
served. Although there might be several ways to further analyze these
data, the similarity of the results for all three weighting methods
seems to be the major finding.

The basic issue in using any multiattribute procedure is the
evaluation of results. Since these procedures are used in situations
where there is no "ultimate criterion" (Thorndike, 1949) of worth,
judgment of some type is often used for comparison purposes. However,
it must be remembered that the literature on clinical judgment (cf.
Meehl, 1954; Sawyer, 1966) clearly shows that global judgments (which
are clinical combinations of attributes) are quite inaccurate as com-
pared to mechanical combination (Edwards, et al., 1968; Einhorn, 1972).
It would seem that the results found in clinical judgment studies
would be directly applicable to the assigning of "worth" to multi-
attributed alternatives. One study speaks directly to this point.
Yntema and Torgerson (1961) trained subjects in attaching worth to
previously neutral objects--ellipses that varied in size, shape, and
color. Subjects viewed 180 stimuli for each of 10 consecutive days
and judged the worth of the stimuli. Each stimulus had been given a
true worth by the experimenter according to the sum of the three two-
way interactions of size (i), shape (j), and color (k). Subjects
were given immediate feedback as to the correct answer after each
trial ("correct" being defined by ij + ik + kj). On the 12th day,
subjects again judged worth but were not given feedback. The correla-
tion of the subjects' judged worth and actual worth was .84 (average
over 6 subjects). However, the correlation of true worth with the sum
of i + j + k was .97! (The difference in r²'s is .71 vs. .94.)
This means that an equal weighted additive model using three main
effects did considerably better than the clinical judgment of subjects
given 1800 trials with immediate feedback on each trial. These re-
sults are certainly striking and confirm that mechanical combination
will be superior to clinical combination, even in the case of assign-
ing "worth" to objects.
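As a quick check, squaring the reported correlations reproduces the r² figures quoted above:

```python
# Squaring the reported correlations gives the r-squared values.
r_clinical = 0.84   # judged worth vs. actual worth
r_additive = 0.97   # i + j + k vs. actual worth

print(round(r_clinical ** 2, 2), round(r_additive ** 2, 2))
# prints: 0.71 0.94
```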
If clinical judgment is deficient, let us consider some of the
factors that may affect the all-star judgment but which would not be
related to true performance. The method by which media people select
the team depends to a large degree on remembering players. Memory
biases, such as "availability" (Tversky and Kahneman, 1974), may
affect who is remembered. Related to this is the fact that some teams
get a great deal more publicity than others (this could be due to the
fact that they win more often and/or they play in cities with strong
basketball traditions--e.g., Boston). Furthermore, there may be
strong "carry-over" effects from one year to the next; i.e., it may be
difficult to evaluate a player's current performance independently
of how he performed in prior years. Finally, it may be that players
on winning teams are perceived to be better than they are while the
reverse may be true for those on losing teams. In this regard it is
interesting to note that in the '73-'74 season, Jabbar was picked as
the first team center (Milwaukee had a very good year), but, in
'74-'75, Milwaukee did not do well and Jabbar did not make either the
first or second team. However, he finished second in our SMAUP
Although the all-star judgment may reflect factors that are
independent of true performance, SMAUP contains a number of assump-
tions that may not be correct. Specifically, it has been assumed that:
(1) Utility functions for the attributes are linear;
(2) The total utility of an object is an additive function of
the utility of the attributes making up that object;
(3) The variables are scaled so that they are monotonically
increasing in utility; i.e., we know the sign of the attributes
a priori;
(4) The β_j's are the relative importances of the attributes;
(5) We have not left out important attributes;
(6) The attributes are essentially uncorrelated with each other.
We consider (5) and (6) to be the most serious problems, and
discuss them together since they involve related matters. Dawes and
Corrigan (1974) have pointed to what they consider to be the key issue
in prediction situations--viz., finding the right predictor variables.
This implies that the function form and the weighting problems are
likely to be subsidiary. We feel that this is equally applicable to
multiattribute utility procedures. Therefore, did we leave out im-
portant attributes in evaluating performance? One might argue that we
had too few attributes relating to defense--or that we didn't have
measures of "leadership," "clutch playing," etc. We agree with this
criticism, yet its importance in casting doubt on the model leads us
to consider point (6) above.
Consider that there were many more attributes that could have
been included in the model. However, many of these might be highly
correlated with those we have already chosen. If these were included,
we would be guilty of overcounting an attribute's influence in terms
of its effect on U_i. The reason for this is that the amount of vari-
ance in U_i accounted for by u(x_j) is a function of both the vari-
ance of u(x_j) and its covariance with other attributes. This means
that as more attributes are included in the model, the probability of
including highly correlated ones increases, making it difficult to
determine the "effective weight" (Ghiselli, 1964) of any attribute.
Therefore, although there is a danger of having too few attributes in
any model, there is also the danger of having too many. Whether the
eight attributes we have used are sufficient remains problematic.
However, we quote Gardiner and Edwards (1975, p. 17), "As a rule of
thumb, eight dimensions are plenty and 15 are too many." This seems
reasonable to us, especially if the attributes are at least concep-
tually independent.
While there are problems in considering either the all-star
judgment or the model as reflecting "true" performance, there are
several reasons for considering an analytic approach to be superior to
global judgment:
(1) Given the evidence of the superiority of mechanical as
opposed to clinical combining of information, multiattribute utility
procedures should be preferred.
(2) An analytic approach makes the evaluation process explicit.
This includes the defining of attributes, the determination of impor-
tance weights, and a consideration of the utility assumptions.
(3) The evaluation is done systematically. The steps in the
procedure are specified so that the introduction of random error in
judgment (fatigue, boredom, etc.) is minimized. Furthermore, because
the procedure is systematic, it is more easily communicated to others.
Therefore, teaching someone to use multiattribute utility procedures
should be easier than teaching others to make global judgments (assum-
ing that the expert knows--or is able to express--his/her strategies).
(4) Because the procedure is explicit, it shows where experts
may disagree. This may be very useful information for resolving con-
flicts that are primarily due to cognitive differences (cf. Hammond &
Adelman, 1976).
(5) The use of the additive combining rule implies that both
dominance and transitivity will hold for the U_i values. This is an
important advantage since there is no guarantee that global judgments
will obey these principles (cf. Tversky, 1969).

(6) One very common objection to analytical procedures as
opposed to using judgment is that the former "cost" too much, espe-
cially in terms of time (Grayson, 1973). Our simple procedure tempers
this criticism somewhat since one could perform our analysis without
the use of experts at all (except to pick the attributes). In situa-
tions where experts must measure the attributes because no objective
measurements are available, one cannot avoid the extra cost in time.
However, such data provide a great deal of additional information that
may well justify the added cost (Einhorn, 1974).

Methodological Considerations
Although the advantages of using an analytic procedure rather
than clinical judgment are numerous, several further issues deserve
discussion. We briefly discuss the following: (1) The effects of
the object/alternative set; (2) Subjectively measuring x ij ; (3) In-
corporating "price" in the analysis; (4) When equal weights should not
be used.
(1) Consider that you wish to buy a new car. Do you think of
every possible make, size, and style as being within the relevant set
of alternatives? The answer is most likely, "no." Clearly, one's
resources and goals will delimit the set of all alternatives into a
relevant subset. While it is not clear how this initial delimiting
is done, the consequences of dealing with a selected subset of
alternatives are important. In order to illustrate, let's
assume that you have decided to buy a luxury car. Furthermore, assume
that "comfort" is an important attribute to you. How might the
utility function for comfort look? If one considered the function over
the full range (i.e., over all cars) it might be decidedly nonlinear;
e.g., at low to moderate levels of comfort (subcompacts and compacts),
the function is quite flat, rising quickly in the middle range (mid-
size cars), becoming steeper at the high end (full size and luxury
cars), and then leveling off (extravagant cars). However, the fact
that you are only considering luxury cars has two important implica-
tions: (1) linear utility functions are likely to be excellent ap-
proximations since the relevant range of the attribute is smaller than
the plausible range (unless "plausible" is defined vis-a-vis the set
of alternatives considered); (2) the slope of the utility function and
the standard deviation of the attribute will differ depending on the
set of alternatives being considered. For example, the standard
deviation of comfort for luxury cars is likely to be different than
for compact cars (the slope will also be different). If one recalls
the definition of relative importance of an attribute [equation (11)]
as being a function of both the slope of the utility function and the
attribute standard deviation, it becomes clear that relative importance
must be defined (and elicited) for a particular class or subset of
alternatives. In our study this was done by asking for importance
weights for each of the three positions. Furthermore, it is interest-
ing to note that the same person may have a different set of weights
depending on the set of alternatives that is being considered. Let's
say that you've just been fired and must consider subcompact cars
rather than luxury ones. Since the amount of comfort available in
these cars is low and your utility function is quite flat in this
range, this attribute may now be of little importance to you in choosing a car.
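The range-dependence of importance weights can be made concrete with a small numerical sketch. Everything below is hypothetical (the S-shaped utility curve, the comfort scores, and the secant approximation of the slope); it only assumes, in the spirit of equation (11), that relative importance is the product of the utility-function slope and the attribute standard deviation over the subset considered.

```python
import math
import statistics

def utility(comfort):
    """Hypothetical S-shaped utility for 'comfort' over the full range of
    cars (0 = least comfortable, 10 = most): flat at the low end, steep in
    the middle, leveling off at the top."""
    return 1.0 / (1.0 + math.exp(-(comfort - 5.0)))

def relative_importance(comforts):
    """Equation (11) in spirit: slope of the utility function over the
    subset's range (secant approximation) times the attribute SD."""
    lo, hi = min(comforts), max(comforts)
    slope = (utility(hi) - utility(lo)) / (hi - lo)
    return slope * statistics.pstdev(comforts)

subcompacts = [1.0, 1.5, 2.0, 2.5]    # flat region of the curve
midsize = [4.0, 4.5, 5.0, 5.5, 6.0]   # steep region of the curve

# The same attribute carries far more weight for the mid-size subset.
print(relative_importance(subcompacts) < relative_importance(midsize))  # True
```

The same person would thus report different weights for "comfort" depending on whether subcompacts or mid-size cars are under consideration.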
(2) In many situations, attributes cannot be objectively
measured. Therefore, experts will be needed to serve as "measuring
instruments." One way to get such measurements is to set up rating
scales and have each expert rate "how much" each alternative contains
of each attribute. Although good agreement between experts in terms
of measuring attributes has been found (Einhorn, 1974), one should be
aware of the various biases that exist when rating methods are used
(Guilford, 1954). Raters can be trained to be aware of these biases
and/or statistical methods can be used to eliminate them (Guilford,

1954). In either case, the average rating (over experts) can then be
used as the value of the alternative on the attribute. Formally, let

x_ijp = rating of the ith alternative on the jth attribute by the pth expert (p = 1, 2, ..., r).

Then the rating of the ith alternative on the jth attribute is

x̄_ij = Σ_{p=1}^{r} x_ijp / r.

These values now provide the input for the analysis. If the rating
scales being used are comparable (e.g., 7-point scales with anchors
like "poor," "average," "excellent"), there is no need to standardize
the attributes. Therefore, the basic computing formula is (13) without
the standardization terms.
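The averaging step can be sketched in a few lines; the ratings below are invented for illustration, with r = 3 experts on comparable 7-point scales.

```python
def average_ratings(ratings):
    """ratings[i][j] holds the r experts' ratings of alternative i on
    attribute j; returns the averaged matrix x_bar[i][j] = sum_p x_ijp / r."""
    return [[sum(cell) / len(cell) for cell in row] for row in ratings]

# Hypothetical data: 2 alternatives, 2 attributes, r = 3 experts.
ratings = [
    [[6, 7, 5], [4, 4, 5]],   # alternative 1
    [[3, 2, 4], [6, 7, 7]],   # alternative 2
]
x_bar = average_ratings(ratings)
print(x_bar[0][0])  # 6.0
```

These averages then serve directly as the x_ij input for the analysis.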
(3) In many decisions, price is an important attribute that
must be considered in determining total utility. In order to include
price in multiattribute models, it can be considered as another attri-
bute (although it must be rescaled in SMAUP). One of the difficulties
with this approach is that people find it hard to assign a weight to
price since it usually affects other attributes as well (i.e., one can
get more of an attribute if one is willing to pay more). Several
suggestions have been made to alleviate this problem; e.g., calculate
benefit to cost ratios or benefit minus cost indices. However, it can
be shown that both the benefit/cost and benefic: - cost indices do not
yield a rank order of alternatives that is invariant over values of K
and C. Since these constants are not available (indeed, methods for
determining them have not been developed), one may have a rank order of
V_i/$_i or V_i - $_i that is quite different from assuming that K = 1
and C = 0. Therefore, although not entirely satisfactory, including
price as an attribute remains the best procedure.
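The non-invariance claim is easy to demonstrate numerically. The alternatives and the admissible rescaling V' = KV + C below are hypothetical; the point is only that the benefit/cost ranking can flip when C changes, so a rank order computed under K = 1 and C = 0 need not survive rescaling.

```python
def rank(alts, score):
    """Names of the alternatives ordered best-first under the given index."""
    return [name for name, vp in sorted(alts.items(),
                                        key=lambda kv: -score(*kv[1]))]

# Hypothetical alternatives: (benefit V_i, price $_i).
alts = {"A": (4.0, 1.0), "B": (9.0, 2.0)}

order_c0 = rank(alts, lambda v, p: v / p)            # V/$ with K = 1, C = 0
order_c10 = rank(alts, lambda v, p: (v + 10.0) / p)  # same index after C = 10
print(order_c0, order_c10)  # ['B', 'A'] ['A', 'B']
```

Since nothing fixes K and C, either ranking could be "the" benefit/cost ranking, which is exactly the difficulty noted above.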
(4) It has been shown that the correlation between linear com-
posites formed from equal and differential weighting is highest when
the number of attributes is large and the intercorrelation between them

is positive (Einhorn & Hogarth, 1975; Einhorn, 1976). However, as

Edwards (1976) points out, " . . . positive correlations among predic-
tors can only improve the equal-weights approximation. But negative
correlations among predictors can make it worse. And in multi-
attribute utility applications, negative correlations among predictors
. are guaranteed. The reason why is simple. If you are consider-
ing two jobs, and Job A pays better, is more interesting, and requires
less drudgery than Job B, you don't have much of a decision problem.
In technical language, Job A dominates Job B. Only if the predictor
variables are negatively correlated is there a decision problem" (pp.
23-24). There are two responses to this, one statistical and the other
psychological. The statistical argument is based on the fact that for
k attributes, the maximum negative correlation between them (assuming
that the correlation is the same) is -l/(k - 1) (Cohen, 1968).
Therefore, in many cases the negative correlations will not be too
large. However, if k is small (k < 5), equal weighting should not
be used if the attributes are highly negatively correlated (this can
be determined prior to any analysis). The second argument centers on
the basic assumption in the above comment, viz., that the relevant set
will contain no dominated alternatives. This relates back to our
earlier point concerning the delimiting of the set of all alternatives.
We do not believe that people use an elimination-of-dominated-
alternatives strategy, since the cognitive work involved in scanning,
searching, and holding nondominated alternatives in memory would be
extreme. This is not to say that a computer program couldn't be de-
veloped to do this. However, from a normative point of view, the
additive combining rule guarantees that dominated alternatives cannot
be ranked above nondominated ones. Therefore, why go to the trouble?
However, Edwards' point is clearly important if one is considering few
alternatives with few attributes (as in his example).
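Cohen's bound can be checked directly. For k standardized attributes sharing a common pairwise correlation ρ, the variance of their equal-weight sum is k + k(k-1)ρ, which is nonnegative exactly when ρ ≥ -1/(k-1); the numbers below are illustrative only.

```python
def composite_variance(k, rho):
    """Variance of the equal-weight sum of k standardized attributes that
    all share the same pairwise correlation rho: k + k*(k-1)*rho."""
    return k + k * (k - 1) * rho

k = 5
bound = -1.0 / (k - 1)                      # Cohen's (1968) bound: -0.25
print(composite_variance(k, bound))         # 0.0: the degenerate limiting case
print(composite_variance(k, bound - 0.10))  # < 0: no such correlation matrix exists
```

With k = 5 the common negative correlation can be at most -0.25 in magnitude, which is why equal weighting is usually safe unless k is small and the attributes are strongly negatively correlated.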

Although we feel that SMAUP is a very useful procedure for
evaluation, we would like to stress that each particular situation be
examined with respect to whether the assumptions made here are reason-
able. If they are not (e.g., if conditional monotonicity is not
likely), then the simple method that we have proposed should not be
used. Of course, an interesting question for further research con-
cerns the sensitivity of results to violations of our assumptions.
However, until more is known about this, a prudent course of action is
to examine the reasonableness of the assumptions in each application.

While we have discussed a number of technical questions concerning
SMAUP, the most important problem in evaluating any multiattribute
procedure remains, "How does one evaluate the evaluation
procedure?" It is surprising to us that there are so few studies of
the type done by Yntema and Torgerson (1961), i.e., where true worth

is generated by the experimenters and clinical judgment is compared

with simple mechanical models. More work of this type is called for
since it is likely to answer the crucial questions concerning the
relative effectiveness of multiattribute procedures as compared to
clinical judgment.


References

M. L. Blum and J. C. Naylor, Industrial Psychology, Harper and Row,

New York, 1968.
J. Cohen, Multiple regression as a general data-analytic system,
Psych. Bull., 70(1968), pp. 426-443.
R. M. Dawes and B. Corrigan, Linear models in decision making, Psych.
Bull., 81(1974), pp. 97-106.
R. T. Eckenrode, Weighting multiple criteria, Mgt. Sci., 12(1965),
pp. 180-192.
W. Edwards, Social utilities, The Eng. Econ., Summer Symposium Series, 1971.
_______, Comment on Equal weighting in multiattribute models: A
rationale, an example, and some extensions, by Hillel J.
Einhorn. In M. Schiff and G. Sorter (eds.), Proceedings of the
conference on topical research in accounting, New York Univer-
sity Press, New York, 1976.
W. Edwards, L. D. Phillips, W. L. Hays, and B. C. Goodman, Prob-
abilistic information processing systems: Design and evalua-
tion, IEEE Trans. on Sys. Sci. and Cyber., SSC-4, 1968,
pp. 248-265.
H. J. Einhorn, Expert measurement and mechanical combination, Org.
Beh. and Human Perf., 7(1972), pp. 86-106.
_______, Expert judgment: Some necessary conditions and an example,
J. Appl. Psych., 59(1974), pp. 562-571.

_______ , Equal weighting in multiattribute models: A rationale, an

example, and some extensions. In M. Schiff and G. Sorter
(eds.), Proceedings of the conference on topical research in
accounting, New York University Press, New York, 1976.
H. J. Einhorn and R. M. Hogarth, Unit weighting schemes for decision
making, Org. Beh. and Human Perf., 13(1975), pp. 171-192.

P. C. Gardiner and W. Edwards, Public values: Multiattribute-utility

measurement for social decision making. In M. F. Kaplan and
S. Schwartz (eds.), Human judgment and decision processes,
Academic Press, New York, 1975.
E. E. Ghiselli, Theory of psychological measurement, McGraw-Hill,
New York, 1964.
C. J. Grayson, Management science and business practice, Harv. Bus.
Rev., (1973), pp. 41-48.
J. P. Guilford, Psychometric methods, McGraw-Hill, New York, 1954.

K. R. Hammond and L. Adelman, Science, values, and human judgment,

Sci., 194(1976), pp. 389-396.
G. P. Huber, Multiattribute utility models: A review of field and
field-like studies, Mgt. Sci., 20(1974), pp. 1393-1402.
R. L. Keeney and H. Raiffa, Decisions with multiple objectives, Wiley,
New York, 1976.
P. E. Meehl, Clinical versus statistical prediction, University of
Minnesota Press, Minneapolis, 1954.
M. F. O'Connor, The application of multiattribute scaling procedures
to the development of indices of water quality, Report 7339,
Center for Mathematical Studies in Business and Economics,
University of Chicago, Chicago, 1973.
J. W. Payne, Task complexity and contingent processing in decision
making: An information search and protocol analysis, Org. Beh.
and Human Perf., 16(1976), pp. 366-387.
J. Sawyer, Measurement and prediction: Clinical and statistical,
Psych. Bull., 66(1966), pp. 178-200.
R. L. Thorndike, Personnel selection, Wiley, New York, 1949.
A. Tversky, Intransitivity of preferences, Psych. Rev., 76(1969),
pp. 31-48.
_______ , Elimination by aspects: A theory of choice, Psych. Rev.,
79(1972), pp. 281-299.

A. Tversky and D. Kahneman, Judgment under uncertainty: Heuristics

and biases, Sci., 185(1974), pp. 1124-1131.
H. Wainer, Estimating coefficients in linear models: It don't make
no nevermind, Psych. Bull., 83(1976), pp. 213-217.
D. B. Yntema and W. S. Torgerson, Man-computer cooperation in deci-
sions requiring common sense, IRE Trans. of the Prof. Group on
Human Factors in Electronics, HFE-2(1), (1961), pp. 20-26.

GÜNTER FANDEL, Fernuniversität Hagen / West Germany

This paper presents a mathematical programming algorithm for solving decision
problems under multiple objectives and its application to the practical problem of
resource allocation among the university activities of teaching and research. The
solution of such a problem, which is formally identical with the vector maximum
problem, is generated by an interactive discussion process between the decision
maker and a computer as an anonymous partner. In this process the decision maker
is requested to state, under partial information about the set of feasible solutions,
for at least one component of any given efficient output (goal) vector
that he will not accept losses with regard to the corresponding actual numerical value.
The method converges, as will be demonstrated by a numerical example. Though the
existence of a utility function is assumed neither explicitly nor implicitly, the
weights of the output components in the optimum, determined simultaneously by the
process, can be interpreted as a linear approximation to the utility function of
the decision maker. Statements on the rate of convergence of the process are made
by means of a second numerical example.



Denote by
ℕ the set of integers,
ℝ the set of real numbers,
X the set of considered activities,
z : X → ℝ^n the system of n objective functions, n ∈ ℕ with n ≥ 2,
Y = z(X) = Im z (image of z) the set of possible output vectors,
eff(Y) = {y ∈ Y | v ≥ y ∧ v ∈ Y ⇒ v = y} the efficient border of Y, Y ⊂ ℝ^n.
The vector maximum problem can now be defined as
(1) determine eff(Y).
This demand is often indicated in the literature by "max" {z(x) | x ∈ X} (e.g.
DINKELBACH [1971]). Variations on the efficient border of Y are characterized by
the property that increasing one component is only possible by decreasing other
components (efficiency). This property establishes the connection with the decision
problem under multiple criteria.
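For a finite set of candidate output vectors, the efficient border defined above can be computed by a direct dominance filter; the two-goal vectors below are made up for illustration.

```python
def dominates(v, y):
    """v dominates y (maximization): v >= y componentwise and v != y."""
    return all(a >= b for a, b in zip(v, y)) and v != y

def eff(Y):
    """eff(Y) = {y in Y | v >= y and v in Y imply v = y}, as defined above."""
    return [y for y in Y if not any(dominates(v, y) for v in Y)]

# Hypothetical two-goal output vectors (e.g. teaching and research levels):
Y = [(4, 0), (0, 2), (2, 1), (1, 1), (3, 0)]
print(eff(Y))  # [(4, 0), (0, 2), (2, 1)]
```

Here (1, 1) and (3, 0) are dominated and drop out; on the remaining points one goal can only be raised at the expense of the other.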
In the economic context the necessity of choosing one decision alternative forces
a modification of the problem to

(1') choose one yE eff(Y).

This formulation leaves open by which principle the decision maker is guided
in his choice (the question of the solution principle). Once the solution principle is
chosen, the solution has to be computed (the question of the solution algorithm). Both
questions are often combined in the literature (SAUERMANN and SELTEN [1962];
GEOFFRION [1965 and 1970]; MARGLIN [1966]; BENAYOUN et al. [1971]; ROY [1971],
p. 250 ff.; FANDEL [1972]; BELENSON and KAPUR [1973]; WILHELM [1975], pp. 58-80).
The solution algorithms in the literature generally proceed from the following
assumptions on the range Y of z:
(i) n(Y) is convex and closed,
(ii) there are elements ȳ and y̲ of ℝ^n such that n(Y) ⊂ n({ȳ}) and
y̲ ≤ y for all y ∈ eff(Y),
where the operator n is defined by n(Y) := {v ∈ ℝ^n | v ≤ y for a y ∈ Y} for all Y ⊂ ℝ^n.
Condition (ii) expresses that the efficient border of Y shall be bounded. In the
following these assumptions are always made.
In the next section a solution principle will first be developed, based on
information about the components of the goal system which can easily be obtained
from the decision maker. Then, in the second section, the algorithm for determining
the optimal solution will be treated.


The approach to be described here, first developed in FANDEL [1972], starts
from the fact that the decision maker solving a goal conflict is always confronted
with the problem that he can only increase the level of any goal component if he
accepts losses in other components. So for solving the goal conflict it is required
that the decision maker has, for any given goal vector, a certain idea in
which component he will not accept further losses under the goal conflict. This
concept can formally be expressed in the following way. The existence of a mapping
δ : ℝ^n → {1,...,n}
is supposed, which describes the individual decision behavior of the decision maker.
The mapping is to be interpreted by the statement that for the goal vector v ∈ ℝ^n
the decision maker does not want further reductions of the goal component δ(v).
This interpretation results in the formulation of the solution principle:
(2) S_δ(X,z) = {x ∈ X | δ(v) = i ⇔ v_i ≤ z_i(x), v ∈ Y}.
That means that for all nonoptimal activities the optimum shows a higher level
precisely in that component in which the decision maker will not accept further
losses with regard to these activities.


The algorithm for finding an optimal solution of the vector maximum problem
can be understood as a structured dialogue between the decision maker and a computer
as an anonymous partner. Starting from the technical data, e.g. the set Y of feasible
goal combinations, and the information δ obtained from the decision maker, the
computer reduces the set of decision alternatives step by step in accordance with
certain given rules, until the optimal solution is achieved or adequately
approximated. This proceeds by the systematic choice of output vectors y ∈ eff(Y),
for which the decision maker has to provide the information δ, and by deriving
restriction vectors w from this.
In addition to the computation of the optimum, the algorithm yields at the beginning
of every iteration step information about the shape of the goal set, which is
generally not completely known by the decision maker. On the basis of the goal
conflict the decision maker, maximizing one goal component, is told
- what amounts of all other goal components he will possibly lose;
- what amounts of all other goal components he will get at least.
This information is suitable for revealing locally the decision structure described
by δ with the assistance of the decision maker.
The rules of the iteratively used algorithm (FANDEL [1972], p. 57-87; FANDEL [1977],
p. 151):
(R1) y*k_k = max {v_k | v ∈ Y ∧ v ≥ w} for all k, k = 1,...,n,
(R2) yM = 1/n Σ_{k=1}^n y*k,
(R3) y'k_k = max {v_k | v ∈ Y ∧ v ≥ yM} for all k,
(R4) t y*k = c for all k, t = (t_1,...,t_n), Σ_{k=1}^n t_k = 1 and c ∈ ℝ,
(R5) t ỹ = max {t v | v ∈ Y ∧ v ≥ yM},
(R6) w_i = ỹ_i ⇔ δ(ỹ) = i; initial value w = 0,
(R7) the optimal solution y_opt := ỹ if the iteration is broken off,
are illustrated for one step in figure 1. Here δ(ỹ) = 1 is assumed; that is, the
decision maker will not accept further losses of the first goal component in the
sequel of the decision process.
The algorithm, the rule sequence of which is portrayed by the flow chart in figure 2,
possesses the following properties (FANDEL [1972], p. 80 ff.):
- it generates a series {w^s}, s ∈ ℕ, of restriction vectors, which is monotonically
increasing and bounded and therefore converges to a point ȳ ∈ Y,
- ȳ is an efficient point: ȳ ∈ eff(Y), and for all x ∈ S_δ(X,z) it holds that z(x) = ȳ,
- by every iteration step of the process the decision maker gets knowledge about
2n+1 output vectors of eff(Y).

figure 1

If eff(Y) is differentiable, then after finding the optimal solution ȳ the hyperplane
computed by rule R4 in the last iteration step can be interpreted as a locally linear
approximation to a utility function of the decision maker. The vector t then indicates
the weights of the goal components determined by the decision maker.
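The rules above can be sketched for a toy two-goal problem. Everything below is a hypothetical stand-in: Y is the triangle {v ≥ 0 : v1 + 2 v2 ≤ 4}, the componentwise maximizations of R1 are solved in closed form for this particular Y, delta() plays the decision maker (always protecting goal 1), and R5 simplifies because both componentwise maxima already lie on the efficient facet, so ỹ = yM.

```python
def maxima(w):
    """R1 for Y = {v >= 0 : v1 + 2*v2 <= 4}: componentwise maxima over
    the points of Y dominating the current restriction vector w."""
    y1 = (4.0 - 2.0 * w[1], w[1])        # max v1 subject to v in Y, v >= w
    y2 = (w[0], (4.0 - w[0]) / 2.0)      # max v2 subject to v in Y, v >= w
    return y1, y2

def delta(v):
    """Decision maker: never accept further losses in goal component 1."""
    return 0

w = (0.0, 0.0)
for _ in range(40):
    y1, y2 = maxima(w)                                   # R1
    yM = ((y1[0] + y2[0]) / 2, (y1[1] + y2[1]) / 2)      # R2: central point
    d = (y1[0] - y2[0], y1[1] - y2[1])
    t1 = d[1] / (d[1] - d[0])                            # R4: hyperplane t.y = c
    t = (t1, 1.0 - t1)                                   #     through y1 and y2
    # R5: both maxima lie on the facet v1 + 2*v2 = 4, so the unique point
    # of Y with v >= yM maximizing t.v is yM itself.
    y_tilde = yM
    i = delta(y_tilde)                                   # R6: the DM intervenes
    w = tuple(y_tilde[j] if j == i else w[j] for j in range(2))

print(t)  # stays (1/3, 2/3), the normalized normal of the binding facet
print(w)  # the restriction vector climbs toward the extreme point (4, 0)
```

The monotone growth of w toward an efficient point, and the constant weight vector t on the single binding facet, mirror the convergence properties and the utility interpretation stated above.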




The attempt to transfer economic calculation to public activities in
university planning, for the purpose of achieving a rationally administered resource
allocation, has up to now brought out many theoretical and empirical teaching- and
research-oriented approaches (ALBACH, PIEPER and SCHULER [1971]; BLAUG [1967];
BOWLES [1967]; SCHULTZ [1960]; Der Bundesminister für Bildung und Wissenschaft
[1971]; CETRON, MARTINO and ROEPCKE [1967]; JANTSCH [1967]; GEOFFRION, DYER and
FEINBERG [1972]). In view of the classical tasks of a university - teaching and
research - one should be surprised at the parallel development of those teaching-
and research-oriented economic models of efficiency which do not respect the
necessity of integrating the goals of teaching and research into one concept, with
the exception of ALBACH, PIEPER and SCHULER [1971]. But practical operationality
requires a general approach, and therefore the vector maximum problem, as a
theoretical formulation of decision problems with multiple objectives, is examined
here in its ability to integrate teaching and research into an undivided concept.
For the simultaneous consideration of teaching and research activities within the
output oriented process of argumentation presented in the last section denote by


compu t e t wlOth y*k

yes I
~-----. ~oPt:=Y

figure 2
flow chart of the algorithm

x =(Xl""'~'~+l, .•• ,~,XN+l) , L<N, the vector of total outputs of an
university investment project, whereby any of these output components shall
be characterized by its input structure (ALBACH, PIEPER and SCHULER [1971],
p. 50 ff. and p. 79 ff.),
= (xl, .•. ,~)T the vector of teaching outputs,
= (xL+1""'xN) the vector of research outputs,
the output component administration,
c = (c 1 , •.. ,cM)T the vector of available resources for the investment
project and
D = (dik ), i=l, •.. ,M and k=l, .•. ,L,L+l, ... ,N,N+l, the matrix of production
coefficients, where a linear technology in the university is assumed; it
B F N+l B F N+l ..
holds D= (D ,D,D ), where D , D and D are the partlal matnces
associated with the teaching, research and administration outputs.
L, M, N > 0 and integers.
The question facing the decision maker - "what amounts of the teaching and research
outputs are to be realized in planning a university under simultaneous consideration
of teaching and research" (the output component administration remains of inferior
importance) - can now be formally described by the vector maximum problem
(with z_k(x) = x_k, k = 1,...,N+1):
(3) "max" x = (x_1,...,x_L, x_{L+1},...,x_N, x_{N+1})^T
subject to
X = {x | Dx ≤ c, x ≥ 0}.
This means that the optimal compromise solution of the problem must be chosen from
the set of efficient combinations of total outputs. Applying the presented method,
the determination of the unique optimal solution of this vector maximum problem
by the public decision maker is described in the following numerical example.


The basis for the following computations is the example of a university project
in ALBACH, PIEPER and SCHULER [1971, p. 41 ff.]. In it a faculty of economics is
considered, which will have 19 activities x_k (6 teaching activities, 12 research
activities and one activity administration) - called main cost processes too - and
19 capacity restrictions c_i. Related to the problem formulation one has L = 6,
N = 18 and M = 19.
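For a linear technology Dx ≤ c, the single-activity maxima that rule R1 reports in the first step (with w = 0) are easy to sketch: with all other activities at zero, activity k is limited by the tightest resource, min over i of c_i/d_ik. The two-resource, three-activity numbers below are invented and far smaller than the paper's 19 × 19 matrix.

```python
def single_activity_maximum(D, c, k):
    """Largest feasible x_k when all other activities are zero under the
    linear technology D x <= c, x >= 0: min over i with d_ik > 0 of c_i/d_ik."""
    return min(c[i] / D[i][k] for i in range(len(c)) if D[i][k] > 0)

# Hypothetical data: rows = resources (say, professor salaries and room hours),
# columns = activities.
D = [[0.3, 0.5, 0.0],
     [0.01, 0.0, 1.0]]
c = [30.0, 2.0]

maxima = [single_activity_maximum(D, c, k) for k in range(3)]
print(maxima)  # roughly [100.0, 60.0, 2.0]
```

In the full example the corresponding R1 vectors are not single-activity points but efficient vectors of total output, yet the same capacity quotients determine which resource binds.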
The following two tables show the several activities and capacities and their
units of measure.

table 1
activities x_k
k = notation unit of measure

I teaching activities
1 student in the 1st course of study
2 student in the 2nd course of study
3 student in the 3rd course of study
4 student in the 4th course of study
5 candidates for a doctor's degree
6 practitioner continuing studies
II research activities
7 project professor without assistant subject 1
8 project professor without assistant subject 2
9 project professor without assistant subject 3
10 project professor without assistant subject 4
11 project professor with assistant subject 1
12 project professor with assistant subject 2
13 project professor with assistant subject 3
14 project professor with assistant subject 4
15 project assistant with professor subject 1
16 project assistant with professor subject 2
17 project assistant with professor subject 3
18 project assistant with professor subject 4
19 III faculty administration

comments to table 1:
subject 1 = theoretical economics
subject 2 = business administration
subject 3 = statistics
subject 4 = econometrics / operations research

table 2
capacities c_i

i = notation unit of measure

1 room professor in ordinary number of rooms

2 room assistant number of rooms
3 room bureau employee number of rooms
4 conference room number of rooms
5 library work table number of tables
6 lecture-room type 1 (lecture) average time of use in
week hours
7 lecture-room type 2 (seminar/exercise) average time of use in
week hours
8 lecture-room type 3 (group working) average time of use in
week hours
9 salary for professor number of salaries/year
10 salary for assistant number of salaries/year
11 salary for bureau employee number of salaries/year
12 resources type 1 (bureau equipment) standard mixture to the
value of 2000 DM
13 resources type 2 (telephone) call units up to 1500 DM
14 resources type 3 (travelling expenses) expenses of 450 DM
15 resources type 4 (literature) expenses of 6000 DM
16 computer time minutes/week
17 residence guest-professor 4-rooms residence
18 general administration work year performance of an
average employee
19 library administration work 10% of the year performance
of a librarian

The demand for the several resources by the teaching, research and administration
outputs is given by the matrix D of technical university production coefficients
written down in rows on the following page.
The vector c of capacities will be given by:
The problem of determining an optimal distribution of the resources among the
several activities by the output-oriented interactive process then runs as follows:
(4) "max" x = (x_1,...,x_19)^T
subject to
Dx ≤ c
x ≥ 0


0.003652 0.003652 0.018447 0.026529 0.010501 0.013151 0.547945 0.547945 0.547945 0.547945
0.547945 0.547945 0.547945 0.547945 0.082192 0.082192 0.082192 0.082192 1.499999
0.039342 0.047632 0.009868 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.078947 0.078947 0.078947 0.078947 1.052630 1.052630 1.052630 1.052630 0.131579
0.003652 0.003652 0.018447 0.026529 0.010501 0.013151 0.547945 0.547945 0.547945 0.547945
0.547945 0.547945 0.547945 0.547945 0.082192 0.082192 0.082192 0.082192 2.000000
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.000000
0.300000 0.300000 0.300000 0.400000 0.500000 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.004000 0.004000 0.003000 0.001000 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.014000 0.012000 0.014000 0.014000 0.0 0.016000 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.043750 0.062500 0.018750 0.018750 0.012500 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.003652 0.003652 0.018447 0.026529 0.010501 0.013151 0.547945 0.547945 0.547945 0.547945
0.547945 0.547945 0.547945 0.547945 0.082192 0.082192 0.082192 0.082192 0.500000
0.039342 0.047632 0.009868 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.078947 0.078947 0.078947 0.078947 1.052630 1.052630 1.052630 1.052630 0.131579
0.003652 0.003652 0.018447 0.026529 0.010501 0.013151 0.547945 0.547945 0.547945 0.547945
0.547945 0.547945 0.547945 0.547945 0.082192 0.082192 0.082192 0.082192 2.000000
0.018260 0.018260 0.092233 0.132644 0.052507 0.065753 3.739724 3.739724 3.739724 3.739724
3.539723 3.539723 3.539723 3.539723 1.510958 1.510958 1.510958 1.510958 10.499990
0.003652 0.003652 0.018447 0.026529 0.010501 0.013151 0.547945 0.547945 0.547945 0.547945
0.547945 0.547945 0.547945 0.547945 0.082192 0.082192 0.082192 0.082192 2.000000
0.003652 0.003652 0.018447 0.026529 0.010501 0.013151 0.547945 0.547945 0.547945 0.547945
0.547945 0.547945 0.547945 0.547945 0.082192 0.082192 0.082192 0.082192 1.000000
0.006652 0.006652 0.021447 0.030529 0.015501 0.013151 0.547945 0.547945 0.547945 0.547945
0.547945 0.547945 0.547945 0.547945 0.082192 0.082192 0.082192 0.082192 0.0
0.0 2.399999 0.0 7.500000 5.000000 0.0 0.0 0.0 0.0 0.0
1.200000 1.200000 1.200000 1.200000 1.599999 1.599999 1.599999 1.599999 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.500000
0.003652 0.003652 0.018447 0.026529 0.010501 0.013151 0.547945 0.547945 0.547945 0.547945
0.547945 0.547945 0.547945 0.547945 0.082192 0.082192 0.082192 0.082192 2.000000
0.004500 0.004500 0.004500 0.006000 0.007500 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0

If the decision maker now agrees to the argumentation process described in
section 2.3, he gets
1) an increasingly better view of the set of feasible efficient output combinations
with every iteration step s of the procedure,
2) a systematic reduction of this set up to the desired optimal point, and
3) the weights of the output components, which characterize the several offered
solutions and the optimal point itself.
The numerical picture of the decision process is written down on the following
pages. For ε the value 10 was set; with 19 activities given, this choice of ε
corresponds to an average tolerance of nearly 0.5 units per component; distributing
the tolerances according to the order of magnitude of the components, this ε amounts
to a maximum variation of 3%.


On the basis of the computations the following statements are valid for
the previously discussed situation:
1) rule R1 shows at any moment 19 efficient vectors of total output, where the k-th
vector indicates the maximum amount achievable in the output component k = 1,...,19.
These vectors are altered step by step following the intervention of the decision
maker according to rule R6. For brevity the rules of the algorithm are
completely written down here only for the first step. So, for example, the first
vector in rule R1, step 1, means that under the available resources a maximum
of 342 students in the 1st course of study can be accepted in the faculty of
economics under consideration. The remaining capacities will present the
opportunity to produce simultaneously 75 practitioners, 27.5 research projects of
the type professor without assistant and 9 research projects of the type assistant
with professor in the subject theoretical economics. Production of 0.5 research
projects means that the resources are sufficient to serve a project of the length
of two time units permanently during the considered time unit. The other vectors
are to be understood analogously.
2) the central point xM is situated on the hyperplane generated by D9 (matrix D,
row 9), as pointed out by comparison of required capacities with the available
capacities c; therefore all the maximum vectors of R1 are situated on restriction
D9 too. That means that all efficient output vectors fulfill D9, i.e. this
restriction represents the complete solution of the problem. The resource salary
for professor is the only bottleneck factor of the considered faculty; increasing
it by one unit, the other output components will increase too by nearly 5%, if the
output of the faculty administration is held constant.
3) because of statement 2) the output weighting follows from rule R1 and remains
constant for all steps. For control of this statement, R4 is again written down
for the 2nd step. The constant output weighting can be explained by the fact,

R 1 342.856934 0.0 0.0 0.0 0.0 75.000229 27.409454
0.0 0.0 0.0 0.0 0.0 0.0 0.0
9.035767 0.0 0.0 0.0 0.0
0.0 240.000000 0.0 0.0 0.0 195.000061 24.921814
0.0 0.0 0.0 0.0 0.0 0.0 0.0
10.989916 0.0 0.0 0.0 0.0
0.0 0.0 428.571289 0.0 9.523899 0.0 15.564435
0.0 0.0 0.0 0.0 0.0 0.0 0.0
17.832321 0.0 0.0 0.0 0.0
284.444092 0.0 0.0 120.000000 0.0 21.111481 23.725159
0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 1.000000
144.444504 0.0 0.0 0.0 180.000000 248.611099 21.558395
0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 1.000000
0.0 0.0 0.0 0.0 173.047989 375.000000 16.362167
0.0 0.0 0.0 0.0 0.0 0.0 0.0
21.725021 0.0 0.0 0.0 1.000000
0.0 0.0 0.0 0.0 0.0 0.0 32.850006
0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0
32.850006 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 32.850006 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 32.850006 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 32.850006 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 32.850006 0.0 0.0
0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 32.850006 0.0
0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 32.850006
0.0 0.0 0.0 0.0 173.007965 374.999756 17.256699
0.0 0.0 0.0 0.0 0.0 0.0 0.0
21.850021 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 173.007965 374.999756 17.256699
0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 21.850021 0.0 0.0 0.0
0.0 0.0 0.0 0.0 173.007965 374.999756 17.256699
0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 21.850021 0.0 0.0
0.0 0.0 0.0 0.0 173.007965 374.999756 17.256699
0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 21.850021 0.0
152.981247 0.0 0.0 0.0 174.877655 241.141296 19.377853
0.0 0.0 0.0 0.0 0.0 0.0 0.0
16.007339 0.0 0.0 0.0 1.000000

R 2 48.669815 12.631578 22.556381 6.315789 64.709488 139.782242 13.199793

1.728948 1.728948 1.728948 1.728948 1.728948 1.728948 1.728948
5.128441 1.150001 1.150001 1.150001 0.210526



18.210449 12.342628 18.315720 0.210526 60.038361 0.319190 3.473680

4.268998 17.999924 12.342628 18.315720 125.039368 18.315720 18.105194
18.495056 423.255127 0.105263 18.315720 0.900576


0.00069060 0.00069058 0.00348829 0.00501660 0.00198568 0.00248687 0.10361534

0.10361534 0.10361534 0.10361534 0.10361534 0.10361534 0.10361534 0.10361534
0.01554221 0.01554222 0.01554222 0.01554222 0.09454840


0.003652 0.003652 0.018447 0.026529 0.010501 0.013151 0.547945

0.547945 0.547945 0.547945 0.547945 0.547945 0.547945 0.547945
0.082192 0.082192 0.082192 0.082192 0.500000

R 5 48.669815 12.631578 22.556381 6.315789 64.709488 139.782242 13.199793

1.728948 1.728948 1.728948 1.728948 1.728948 1.728948 1.728948
5.128441 1.150001 1.150001 0.150001 0.210526

R 6 15.000000 12.631578 9.000000 6.315789 3.000000 3.000000 0.250000

0.250000 0.250000 0.250000 0.400000 0.400000 0.400000 0.400000
0.250000 0.250000 0.250000 0.250000 0.100000

EPSILON = 10.000000 DELTA = 746.185059

R 4 0.00069060 0.00069058 0.00348829 0.00501660 0.00198568 0.00248687 0.10361534
0.10361534 0.10361534 0.10361534 0.10361534 0.10361534 0.10361534 0.10361534
0.01554221 0.01554222 0.01554222 0.01554222 0.09454840

R 5 57.866943 23.772110 29.210510 11.956445 60.672638 129.042282 11.852543

1.780851 1.780851 1.780851 1.930847 1.930848 1.930848 1.930848
4.844055 1.276360 1.276360 1.276360 0.289472

R 6 30.000000 23.772110 20.000000 11.956445 6.000000 6.000000 0.500000

0.500000 0.500000 0.500000 0.900000 0.900000 0.900000 0.900000
0.500000 0.500000 0.500000 0.500000 0.250000

EPSILON = 10.000000 DELTA = 664.993652

R 5 125.840164 100.226303 80.171539 64.119278 25.234024 25.687561 2.005774
2.005774 2.005774 2.005774 3.498771 3.498771 3.498771 3.498771
2.538501 2.538501 2.538501 2.538501 0.984623
R 6 125.000000 100.000000 80.000000 64.000000 25.000000 25.000000 2.000000
2.000000 2.000000 2.000000 3.498771 3.498771 3.498771 3.498771
2.500000 2.500000 2.500000 2.500000 0.984623
R 8 EPSILON = 10.000000 DELTA = 10.719031

R 5 125.652435 100.226303 80.134201 64.093323 25.233734 25.479813 2.004518
2.004518 2.004518 2.004518 3.503284 3.503284 3.503284 3.503284
2.530122 2.530122 2.530122 2.530122 0.985432
R 6 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
R 8 EPSILON = 10.000000 DELTA = 9.988041



that restriction D9 describes the complete solution. As a necessary condition
for the optimality of a vector of D9, the quotients of the weights must be the
same as those of the coefficients, which is verified by comparing the
vectors in R4. So the rounded-up integer optimal solution
x = (126; 100; 80; 64; 25; 25; 2; 2; 2; 2; 3.5; 3.5; 3.5; 3.5; 2.5; 2.5; 2.5; 2.5; 1)^T,
sufficiently approximated in step 48, is characterized by the same weights. For
example, according to the productivity contribution of the bottleneck factor
salary for professors, a research project of the type professor with assistant
in business administration should have 20 times the value of a student in the
4th course of study to justify the realized allocation of resources among the
4) the output weighting will hardly change with the actual bottleneck
factor among those possibly in question for describing the set of efficient
vectors, as a comparison of rows 1, 3, 11, 13, 14, 15 and 18 of matrix D with
row 9 points out.
5) the optimal solution is well enough approximated in 48 steps for the given
E = 10; that amounts to 2.5 steps per component, given 19 output components. The
net computing time for the 48 steps came to 15 minutes (CPU time), i.e. about
20 sec./cycle. The value DELTA in R8 indicates the actual maximum deviation of
the vectors in R1 from w^s, s = 1, ..., 48.
Statements on the speed of convergence of the algorithm were obtained by
comparing the discussed numerical example with a reduced one, which results
from the total model considered up to now under the aspect of teaching priority
and free teaching capacities. In this case the total model turns into a partial,
research-oriented vector maximum problem with 12 objective functions,
12 activities, and 19 restrictions. That the total and the partial model each
have as many objective functions as activities follows from the assumption z(x) = x.
For E = 1 in the partial model, which amounts to an average tolerance of nearly
0.08 units and, in order of magnitude, to an error of less than 5% per
component given 12 activities, the desired optimal solution was sufficiently
approximated after 11 steps. This corresponds to one step per component. The net
computing time for the 11 steps came to 2 minutes 48 seconds, that is nearly
15 sec./cycle.
Starting from these results one can state that, by rule 2 of the algorithm, the
speed of convergence increases over-proportionally, with regard to both the
iteration steps and the total computing time, when the discussed systems are
reduced. In comparison, the computing times per cycle do not decrease in the
same way, because in rules 1 and 3 of the algorithm as many programs must
always be solved as there are objective functions in the system. Table 3
summarizes these statements on the speed of convergence of the algorithm for
the total and partial models.

table 3

                               total model    partial model
activities                          19              12
capacities                          19              19
E =                                 10               1
tolerance (units/component)        0.5            0.08
variation (in percent)               3               5
iteration steps                     48              11
steps/component                    2.5               1
CPU-time (in minutes)               15             2.8
CPU-time/step (in seconds)          20              15
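The per-component and per-step rows of table 3 follow directly from the raw counts quoted above; a quick arithmetic check (Python, values transcribed from the text):

```python
# Derived figures of table 3, recomputed from the raw counts: iteration
# steps, output components, and net CPU time for each run.
runs = {
    "total model":   {"components": 19, "steps": 48, "cpu_seconds": 15 * 60},
    "partial model": {"components": 12, "steps": 11, "cpu_seconds": 2 * 60 + 48},
}
for name, r in runs.items():
    steps_per_component = r["steps"] / r["components"]
    seconds_per_step = r["cpu_seconds"] / r["steps"]
    print(f"{name}: {steps_per_component:.2f} steps/component, "
          f"{seconds_per_step:.1f} s/step")
```

The printed values round to the 2.5 / 20 and 1 / 15 figures reported in the table.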

The formulation of the problem of efficient planning in a university integrating
teaching and research as a vector maximum problem, and the presented procedure
for solving this problem, have pointed out that the decision maker receives
within the solution algorithm simultaneously the optimal output combination and
the relative weights characterizing this solution. Hence one has a direct
evaluation of the teaching, research and administration outputs by prices, which
follow from adjusting the activity levels to be realized to the available
resources. Beyond that, these prices are suitable for solving the problem of
efficient planning in a university with respect to teaching and research areas
by a decentralized decision process, as presented in GEOFFRION [1970].
Moreover, the presented algorithm provides an argument that the apparently
contrary approaches of demand and profitability models in university planning
can be combined into one approach characterized by the algorithm. While the
output weighting in rules R4 and R5 corresponds to the profitability approach,
the demand approach enters the calculus through rule R6. So, for each optimal
compromise solution x ∈ eff(X) desired by the public decision maker, these
instruments of rational university planning are only two aspects of one and the
same problem-solving method.

[ 1] Albach, H., Pieper, H. and Schüler, W.: Hochschulplanung, Bonn 1971.
[ 2] Belenson, S.M. and Kapur, K.C.: An Algorithm for Solving Multicriterion Linear
     Programming Problems with Examples, Operational Research Quarterly, Vol. 24,
     No. 1, 1973, pp. 65-77.
[ 3] Benayoun, R., de Montgolfier, J., Tergny, J. and Laritchev, O.: Linear
     Programming with Multiple Objective Functions: Step Method (STEM),
     Mathematical Programming, Vol. 1, No. 3, 1971, pp. 366-375.
[ 4] Blaug, M.: A Cost-Benefit Approach to Educational Planning in Developing
     Countries, World-Banking-Report No. EC-157, December 20, 1967.
[ 5] Bowles, S.: The Efficient Allocation of Resources in Education, in: Quarterly
     Journal of Economics, 1967.
[ 6] Cetron, M.J., Martino, J. and Roepcke, L.: The Selection of R&D Program
     Content - Survey of Quantitative Methods, in: IEEE Transactions on Engineering
     Management, Vol. 14, No. 1, 1967.
[ 7] Der Bundesminister für Bildung und Wissenschaft: Methoden der Prioritäts-
     bestimmung I, II und III, Schriftenreihe Forschungsplanung, Hefte 3, 4 und 5,
     Bonn 1971.
[ 8] Dinkelbach, W.: Über einen Lösungsansatz zum Vektormaximumproblem. In:
     Unternehmensforschung heute. Hrsg.: M. Beckmann. Berlin-Heidelberg-New York
     1971, pp. 1-13.
[ 9] Fandel, G.: Optimale Entscheidung bei mehrfacher Zielsetzung, Berlin-Heidelberg-
     New York 1972.
[10] Fandel, G.: A Multiple-Objective Programming Algorithm for the Distribution of
     Resources among Teaching and Research, in: Albach, H. and Bergendahl, G. (eds.):
     Production Theory and its Application, Berlin-Heidelberg-New York 1977,
     pp. 146-175.
[11] Geoffrion, A.M.: A Parametric Programming Solution to the Vector Maximum Problem,
     with Applications to Decisions under Uncertainty. Stanford/California 1965.
[12] Geoffrion, A.M.: Resource Allocation in Decentralized Non-Market Organizations
     with Multiple Objectives. Paper presented at the 2nd world congress of the
     Econometric Society. Cambridge (England) September 1970.
[13] Geoffrion, A.M., Dyer, J.S. and Feinberg, A.: An Interactive Approach for
     Multicriterion Optimization, with an Application to the Operation of an Academic
     Department, Management Science, Vol. 19, No. 4, 1972, pp. 357-368.
[14] Jantsch, E.: Technological Forecasting in Perspective, Paris 1967.
[15] Marglin, S.A.: Objectives of Water-Resource-Development: A General Statement,
     in: Maass, A. (ed.): Design of Water-Resource-Systems, Cambridge/Mass. 1966.
[16] Roy, B.: Problems and Methods with Multiple Objective Functions, Mathematical
     Programming, Vol. 1, 1971, pp. 239-266.
[17] Sauermann, H. and Selten, R.: Anspruchsanpassungstheorie der Unternehmung, in:
     Zeitschrift für die gesamte Staatswissenschaft 118 (1962), pp. 577-597.
[18] Schultz, T.W.: Capital Formation by Education, in: Journal of Political
     Economy, 1960.
[19] Wilhelm, J.: Objectives and Multi-Objective Decision Making under Uncertainty,
     Berlin-Heidelberg-New York 1975.
[20] Zionts, S. and Wallenius, J.: An Interactive Programming Method for Solving the
     Multiple Criteria Problem, Working Paper 74-10, European Institute for Advanced
     Studies in Management, Brussels 1974.

Peter H. Farquhar
Northwestern University
Evanston, Illinois

This paper examines decision problems where the evaluative cri-
teria are interdependent. After a brief study of interaction effects
in the statistical analysis of multifactor experiments, a classifica-
tion of interactions is developed for multiple criteria decision prob-
lems. The classification includes artificial, ordinal, configural,
and holistic interactions. Some interactions are removable by trans-
forming the response scale or restructuring the factors, while other
interactions are not removable and must be considered explicitly. The
interaction effects in various nonadditive utility models are dis-
cussed, and the method of fractional hypercubes is used to interpret
the structural forms of these models.


This paper examines a number of approaches for evaluating deci-

sion alternatives when the criteria are interdependent. Although the
assumption of additivity is often made without much investigation, in
some instances the decision criteria are so interrelated that the
assumption of additivity is quite unrealistic. Conclusions based on
an additive model are therefore misleading and erroneous in these cases.
One approach in dealing with interdependent criteria is to refor-
mulate the decision problem to reduce the interrelationships among
criteria as much as possible. If the residual interactions among cri-
teria are relatively small, then an additive model may yield a satis-
factory approximation. Another approach is to account explicitly for
interactions in the problem analysis. Current research in multifactor
utility theory focuses on identifying whatever sources of interactions
exist and obtaining exact representations of nonadditive utility func-
tions. In comparing these approaches, one must weigh the increased
complexity of this latter approach against the significance of the

decision problem and the potential adequacy of a simpler approximation.

The controversy surrounding the appropriateness of additive or
linear models versus nonadditive or configural models in decision
making has been going on for many years.1 We cannot settle this con-
troversy here, but we do offer three reasons for considering models
with interdependent criteria. In the first place, there is the strong
intuition expressed by some decision makers:
... most skilled and knowledgeable clinicians and diag-
nosticians, whose professional lives are in large part
concerned with diagnosis, decision-making, and judgment,
universally reject the linearity principle out of hand.
These people report in fairly emphatic terms that judg-
ment involves a sequential consideration of many dimen-
sions (symptoms, signs, or cues), and that the interpre-
tation of a given dimension is conditional upon the values
of other dimensions. (Hoffman, 1968; p. 61).
In attempting to understand the processes underlying human judg-
ment and decision making, a number of studies have uncovered either
violations of particular independence conditions or the presence of
significant interaction effects.2 Even though an additive model may
fit some data well, high predictability alone is not enough to explain
the decision processes in such studies.
One must also recognize that often "linearity is contributed by
the analysis, rather than being an inherent property of the data."3

1A summary of these issues is found in Slovic and Lichtenstein

(1971). In psychology, see Anderson (1972), Dawes and Corrigan (1974),
Einhorn (1970, 1971), Einhorn and Hogarth (1975), Goldberg (1968, 1969,
1971), Hoffman (1960, 1968), Hoffman et al. (1968), and Wainer (1976).
In management science, see Ashton (1976), Farquhar (1974, 1977a),
Green and Carmone (1974), Johnson and Huber (1977), Lisnyk (1977),
MacCrimmon (1973), Moskowitz (1974, 1976), Slovic (1969), Slovic et al.
(1972), von Winterfeldt (1975), and others.

2See Delbeke and Fauville (1974), Einhorn (1970), Farquhar (1974,

1977b), Fischer (1976), Green and Carmone (1974), Green and Devita
(1974, 1975), Hauser and Urban (1976), Hoffman et al. (1968), Sidowski
and Anderson (1967), Slovic (1969), Slovic et al. (1972), Wiggins and
Hoffman (1968), Yntema and Torgerson (1961), among others.

3Green (1968; p. 92).


Many techniques are fairly insensitive to interactions and thus attri-

bute most of the variation in data to additive effects. Such insensi-
tivity has been dramatically demonstrated in a variety of experimental
and practical situations.4
In contrast to techniques that are primarily statistical, multi-
factor utility methods apparently offer a better means of detecting
departures from additivity and uncovering sources of interdependency
among decision criteria, because utility methods allow for preliminary
tests of underlying independence assumptions. In addition to this
advantage, recent studies have indicated the potential applicability
of nonadditive utility models in R&D planning, where multiple, inter-
related criteria are commonly found in project selection and resource
allocation decisions;5 in portfolio management, where complementari-
ties frequently exist between items;6 and in numerous other situa-
tions.7
Simplifying assumptions like additivity can lead to serious consequences:
We must be careful not to assume that interactions
do not exist when they actually do. In studies of various
food additives, these substances are studied individually
and found to be safe, but what about the interactions of
various of these additives when they are used together?
... The variables interacting to produce California smog
interact in complex ways. Similar comments apply to the

4For example, see Anscombe (1973), Fischer (1972, pp. 41-42),

Green (1968, pp. 92-93), and Yntema and Torgerson (1961). Many sta-
tistical texts on multiple regression, analysis of variance, and
experimental design also discuss problems in discriminating between
linear and nonlinear effects.

5See Baker and Freeland (1975).

6See Farquhar (1974), Farquhar and Rao (1976), Green and Devita
(1974, 1975), and Green et al. (1972).

7See the references and examples in Farquhar (1977a), Johnson

and Huber (1977), and Keeney and Raiffa (1976).

pollutants from power plants. The problem of interactions

and relationships among variables is a very serious one and
one that should be studied carefully. (Marcus-Roberts,
1976; pp. 269-270).
Similarly, many other situations could benefit from an explicit con-
sideration of interaction effects.
The remainder of the paper is organized as follows. Section 2
reviews the statistical analysis of multifactor experiments. Sec-
tion 3 draws on the statistical concepts of interaction to define,
classify, and interpret interdependencies in multiple criteria
decision problems. Section 4 explains how these ideas relate to
value independence, utility independence, fractional hypercubes, and
other topics in multifactor utility theory.


The concept of interaction has received careful and extensive

study in the design of multifactor experiments in the field of statis-
tics. We examine several topics in multifactor experimentation that
have particular relevance to our study of interdependent criteria in
utility analysis.8
For the purpose of illustration, consider an experiment on dia-
mond appraisal in which the evaluative criteria are (1) clarity,
(2) color, and (3) size. Diamonds are typically represented by five
levels of clarity (B, A, AA, AAA, and AAAA) which are rigorously
defined with respect to brilliance, transparency, inclusions, impu-
rities, and other traits. Color is often described with as many as
twenty levels between yellow and white. Size is measured by the
weight of a diamond in carats. For convenience, we consider two

8See also Cochran and Cox (1957), Cox (1958), Davies (1967), John
(1971), Keppel (1973), Winer (1971), and others, for further informa-
tion on the design and analysis of multifactor experiments.

levels of each factor as follows:

Factors:           A  clarity       B  color      C  size
Levels:  (low)  0     grade A    0     yellow  0     0.40 carats
        (high)  1     grade AAA  1     white   1     0.70 carats.

Figure 1 displays the effects of these factors on the estimated price

of diamonds in a hypothetical experiment.

(cube diagram with axes A = clarity, B = color, and C = size; the
corners carry the eight appraised prices, e.g. $80 and $200)

Figure 1: A 2³ factorial experiment on diamond appraisal

When n factors are considered, each with two levels, the experi-
ments are called 2ⁿ factorial designs. Such experiments are compara-
tively easy to analyze because there is no distinction between quali-
tative and quantitative factors.
Let Yijk denote the response associated with the combination
(i, j, k), where i, j, k ∈ {0, 1} are levels of factors A, B, and C,
respectively. A dot "·" in lieu of a subscript in Yijk denotes the
quantity obtained by summing responses over the range of the displaced
subscript. Thus Yij· = Yij1 + Yij0. A star "*" in place of a sub-
script denotes the difference between responses at level 1 and level 0
of the corresponding subscript. For example, Yij* = Yij1 - Yij0,
Y**k = Y11k - Y10k - Y01k + Y00k, and Y·*k = Y11k - Y10k + Y01k - Y00k.
A bar "¯" over any quantity denotes the average obtained by dividing
the quantity by the number of its summands. In 2ⁿ factorial designs,
a bar over a quantity involving a total of r dots and stars indicates
division by 2^r: Ȳij· = (1/2)Yij· and Ȳ·*k = (1/4)Y·*k. This notation
generalizes to n factors and provides compact representations of
factorial effects and interactions.
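The dot/star/bar notation can be made concrete with a small sketch (Python here, using the hypothetical diamond responses tabulated in Figure 3 below):

```python
# Sketch of the dot/star/bar notation for a 2x2x2 experiment, using the
# eight hypothetical diamond-appraisal responses from Figure 3.
y = {(0, 0, 0): 80, (1, 0, 0): 200, (0, 1, 0): 85, (1, 1, 0): 400,
     (0, 0, 1): 140, (1, 0, 1): 350, (0, 1, 1): 150, (1, 1, 1): 700}

def collapse(spec):
    """Evaluate a subscript pattern: each position of spec is a fixed
    level (0 or 1), '.' to sum over that subscript, or '*' to take the
    level-1 minus level-0 difference."""
    total = 0
    for levels, resp in y.items():
        weight = 1
        for lev, s in zip(levels, spec):
            if s == '.':
                continue                        # sum over this subscript
            if s == '*':
                weight *= 1 if lev == 1 else -1  # signed difference
            elif lev != s:
                weight = 0                       # fixed level: keep matches
        total += weight * resp
    return total

print(collapse((1, 1, '.')))           # Y_{11.} = Y_110 + Y_111 = 1100
print(collapse(('*', '.', '.')) / 8)   # Ybar_{*..} = 149.375
```

Dividing the `('*', '.', '.')` quantity by 2³ = 8 gives the bar version, which is exactly the estimated main effect of A defined in (1).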
Main effects
One can analyze a multifactor experiment in terms of the separate
effects of each factor and the joint effects of their interactions on
the response variable. For instance, a simple estimate of the effect
of factor A is the change in response resulting from a change in the
level of A while other factors are fixed. When A has two levels, the
estimated main effect of A9 is half the difference between the average
of responses with factor A at level 1 and the average with A at level
0, thus

A = (1/2)(Ȳ1·· - Ȳ0··) = Ȳ*··      (1)

Similarly, the estimated main effects of B and C are defined by

B = (1/2)(Ȳ·1· - Ȳ·0·) = Ȳ·*·   and   C = (1/2)(Ȳ··1 - Ȳ··0) = Ȳ··*      (2)

9Our definitions of main effects and interactions agree with
those for general factorial experiments, though it is conventional in
2ⁿ factorial experiments to define main effects and interactions as
twice the value given here. See Scheffe (1959, p. 121).

When factors behave independently, the main effects are meaning-
ful quantities. However if a factor's effect on the response variable
depends markedly on the level at which another factor is fixed, then
the two factors are said to interact and the main effects alone do
not fully describe their joint effect. If A and B each have two
levels, for example, the estimated AB interaction is defined as half
the difference between the average effect of A with B at level 1 and
that with B at level 0,

AB = (1/2)(Ȳ*1· - Ȳ*0·) = Ȳ**·      (3)
If A and B are independent, then the average effects of A at different

levels of B measure the same quantity. Consequently, their estimated
difference should be close to zero if these factors do not interact.
Our definition of the AB interaction is symmetric with respect to
A and B, since the estimated AB interaction is also half the differ-
ence between the average effect of B with A at level 1 and that with
A at level 0,

AB = (1/2)(Ȳ1*· - Ȳ0*·) = Ȳ**·

The estimates of the AC and BC interactions are defined analogously by

AC = Ȳ*·*   and   BC = Ȳ·**      (4)

In addition to the main effects and two-factor interactions, we

can consider the joint effect of all three factors. The estimated
ABC interaction is defined as half the difference between the AB
interaction at level 1 of C and at level 0 of C,

ABC = (1/2)(Ȳ**1 - Ȳ**0) = Ȳ***      (5)

The ABC interaction is symmetrical with respect to all three factors,

so it can also be obtained as either half the change in AC when B is
changed or half the change in BC when A is changed.
Figure 2 provides a geometric illustration of the definitions for
main effects and interactions in 2³ factorial designs. Figure 3 gives
an algebraic description of the estimated effects as comparisons among
the responses of the 2³ combinations. For example, the AB interaction
estimate is computed by adding or subtracting the responses as indi-
cated by the signs in the AB row and then dividing the total by the
number of responses,

AB = (Y000 - Y100 - Y010 + Y110 + Y001 - Y101 - Y011 + Y111)/8      (6)

The table of signs in Figure 3 can easily be extended to an arbitrary

number of factors, because the effects and combinations appear in
standard order (see Davies (1967; chapters 7 and 8)).
An informal study of the computations in Figure 3 for the diamond
appraisal experiment reveals that the separate factors A, B, and C,
and the AB and AC interactions apparently have the most influence on
responses. The BC and ABC interactions seem negligible. One observes
that clarity is more important than either color or size alone,
because it discriminates between gems and other grades of diamonds.
When a diamond has excellent clarity, then color matters. If clarity
is fair, the diamond is usually cut into smaller stones for industrial
and other uses, so color is then unimportant compared to size. Fur-
ther interpretations can also be given.
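The whole table of signs can be reproduced mechanically; a short Python sketch of the estimates computed from the eight hypothetical responses of Figure 3 (the sign attached to each response is the product of (-1)**(level + 1) over the participating factors, with divisor 8):

```python
# Factorial-effect estimates for the 2^3 diamond-appraisal experiment.
y = {(0, 0, 0): 80, (1, 0, 0): 200, (0, 1, 0): 85, (1, 1, 0): 400,
     (0, 0, 1): 140, (1, 0, 1): 350, (0, 1, 1): 150, (1, 1, 1): 700}

def effect(participates):
    """participates flags which of A, B, C enter the effect, e.g.
    (1, 0, 0) -> A, (1, 1, 0) -> AB, (0, 0, 0) -> grand mean."""
    total = 0
    for levels, resp in y.items():
        sign = 1
        for lev, p in zip(levels, participates):
            if p:
                sign *= (-1) ** (lev + 1)   # + at level 1, - at level 0
        total += sign * resp
    return total / 8                        # divisor = 8 responses

for name, part in [("Mean", (0, 0, 0)), ("A", (1, 0, 0)), ("B", (0, 1, 0)),
                   ("AB", (1, 1, 0)), ("C", (0, 0, 1)), ("AC", (1, 0, 1)),
                   ("BC", (0, 1, 1)), ("ABC", (1, 1, 1))]:
    print(f"{name:4s} {effect(part):8.3f}")
```

The loop prints the "estimated" column of Figure 3, e.g. 263.125 for the mean and 149.375 for A.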
General factorial designs
In multifactor experiments with two or more levels per factor,
the definition of factorial effects depends on the types of factors

(figure panels: main effects; two-factor interactions; three-factor
interaction)

Figure 2: An illustration of main effects and interactions in 2³
factorial designs


            000  100  010  110  001  101  011  111    estimated
Mean         +    +    +    +    +    +    +    +     $263.125
A            -    +    -    +    -    +    -    +      149.375
B            -    -    +    +    -    -    +    +       70.625
AB           +    -    -    +    +    -    -    +       66.875
C            -    -    -    -    +    +    +    +       71.875
AC           +    -    +    -    -    +    -    +       40.625
BC           +    +    -    -    -    -    +    +       19.375
ABC          -    +    +    -    +    -    -    +       18.125
responses  $80   200   85  400  140  350  150  700   (divisor = 8)

Figure 3: Table of signs for calculating factorial effects

present, the interpretations of levels, and the nature of the experi-

ment. One common situation involves three qualitative factors with I,
J, and K levels, respectively, and no special distinction among the
various levels. The effects are derived from the following identity,

μijk = μ··· + (μi·· - μ···) + (μ·j· - μ···) + (μ··k - μ···)
       + (μij· - μi·· - μ·j· + μ···) + (μi·k - μi·· - μ··k + μ···)
       + (μ·jk - μ·j· - μ··k + μ···)
       + (μijk - μij· - μi·k - μ·jk + μi·· + μ·j· + μ··k - μ···),      (7a)

for i ∈ {0, 1, ..., I-1}, j ∈ {0, 1, ..., J-1}, and
k ∈ {0, 1, ..., K-1}, where μijk is the expected value of the
response Yijk. We define the factorial effects below by equating them
with the corresponding terms in (7a),

μijk = μ + Ai + Bj + Ck + ABij + ACik + BCjk + ABCijk      (7b)

Further discussion of this definition is found under statistical

analysis of variance and fixed effect models (see Scheffe (1959)).
The above definitions of factorial effects are analogous to the
estimates made in 2³ factorial experiments, where the responses Yijk
in Figure 3 are now replaced by their expected values μijk. The
levels of the factors determine the signs of the effects as follows,

Ai = (-1)^(i+1) A,   Bj = (-1)^(j+1) B,   Ck = (-1)^(k+1) C,

ABij = (-1)^(i+j) AB,   ACik = (-1)^(i+k) AC,   BCjk = (-1)^(j+k) BC,

and   ABCijk = (-1)^(i+j+k+1) ABC,      (8)

for i, j, k ∈ {0, 1}. In 2ⁿ factorial designs, we need (plus or

minus) one constant for each factorial effect. In more general
factorial designs, however, the effects in (7a, b) may differ for each
subset of levels, so the interpretation of interactions may become
more complicated.
Plotting methods
The plots in Figure 4 illustrate one method of displaying two
factors to reveal interactions. Each plot contains a set of response
curves over one factor, where each curve is conditioned on a level
from a second factor. Additivity or zero interaction is represented
by a set of parallel curves. Figure 4(i) shows the appraised price
for five levels of clarity (factor A), conditioned on the two levels
of color (factor B), while size (factor C) is held constant in all
observations. The strong AB interaction is illustrated by the
increasing difference between the two curves as clarity improves.
Figure 4(ii) plots the appraised price against size, conditioned on
the two levels of color. The weak BC interaction is due primarily to
the "floor effect" of size near 0.25 carats. If size is limited to

0.75 carats or more, the two curves in Figure 4(ii) are parallel and
the effects of color and size are additive.

(i) AB interaction (size = 0.70 carats): appraised price plotted
against clarity, one curve per level of color, white above yellow.
(ii) BC interaction (clarity = AA): appraised price plotted against
size from 0 to 1.00 carats, one curve per level of color.

Figure 4: Some plots of two-factor interactions
Keppel (1973, pp. 174-185, 256-262) provides further information
on investigating main effects and interactions with plots of the
experimental data. He considers eight different plots of two-factor
experiments to illustrate all examples of the presence or absence of
the main effects and interaction. He also points out some difficul-
ties in detecting three-factor interactions in plots where two-factor
interactions are present. Winer (1971, pp. 351-359) offers a number
of ways of overcoming these difficulties with either two- or three-
dimensional displays. Cox (1958, pp. 103-108), on the other hand,
discusses the advantages of displaying the residuals (i.e., removing
the main effects from the data) to detect interactions among different
factors.
Other topics
When an experiment involves a quantitative factor with more than
two levels present, it is desirable to modify the response model in
(7) to account for the functional relationship between the factor

levels and the response variable. The analysis of response surfaces,

trend components, and other topics in mu1tifactor experimentation is
discussed by Davies (1967; chapters 8 and 11), Myers (1971), and
This section concludes with a brief description of fractional
factorial designs, or fractional replicates. The purpose of frac-
tional replication is to obtain estimates of the main effects and
certain interactions with fewer observations than required by a
complete factorial. The idea is to confound higher-order interactions,
which are of little importance and need not be estimated, with them-
selves and perhaps with other effects that will be estimated. The
effects confounded with a given effect are called its aliases.

For example, one fractional replicate of a 2³ design uses only

four combinations, {100, 010, 001, 111}. Assuming all interactions
are zero, the following estimates of main effects are made,

A = (Y100 - Y010 - Y001 + Y111)/4,  B = (-Y100 + Y010 - Y001 + Y111)/4,
C = (-Y100 - Y010 + Y001 + Y111)/4      (9)

If one examines only the 100, 010, 001, and 111 columns in Figure 3,
the estimates for main effects in (9) are confounded (or confused)
with estimates for two-factor interactions: A = BC, B = AC, and
C = AB. Thus, A and BC are aliases in that we cannot estimate one

without the other--the expected value of the first comparison in (9)
is A + BC. Later on, we consider the relationship between independ-
ence conditions in utility theory and fractional replication in fac-
torial experiments.
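The aliasing can be checked numerically; a sketch under the same hypothetical responses, comparing the half-replicate estimate of A with the full-design estimate of A + BC (the function names here are illustrative):

```python
# Aliasing in the half replicate {100, 010, 001, 111} of the 2^3
# diamond experiment (defining relation I = ABC); Figure 3 responses.
y = {(0, 0, 0): 80, (1, 0, 0): 200, (0, 1, 0): 85, (1, 1, 0): 400,
     (0, 0, 1): 140, (1, 0, 1): 350, (0, 1, 1): 150, (1, 1, 1): 700}

def signed_total(points, participates):
    total = 0
    for levels in points:
        sign = 1
        for lev, p in zip(levels, participates):
            if p:
                sign *= (-1) ** (lev + 1)
        total += sign * y[levels]
    return total

FULL = list(y)                                    # all eight runs
HALF = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1)]  # the fraction

def full_effect(participates):                    # divisor 8
    return signed_total(FULL, participates) / 8

def frac_effect(participates):                    # divisor 4
    return signed_total(HALF, participates) / 4

A_hat = frac_effect((1, 0, 0))
print(A_hat, full_effect((1, 0, 0)) + full_effect((0, 1, 1)))  # both 168.75
```

The fractional estimate of A equals the full-design A plus BC exactly, and the same holds for the B/AC and C/AB alias pairs.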


This section develops a tentative classification of interaction

effects in multiple criteria decision problems. Four categories are
presented: (1) artificial, (2) ordinal, (3) configural, and
(4) holistic interactions.
We denote the collection of outcomes in a decision problem by the
outcome space X. Each outcome is judged on n distinct criteria, which
are associated with specific goals and objectives of the decision
maker. For i ∈ N = {1, 2, ..., n}, the set of possible performance
levels on the i-th criterion is given by Xi. We refer to
X1, X2, ..., Xn as factors to be consistent with statistical usage.
We assume that X = X1 × X2 × ... × Xn, so each outcome x ∈ X is
expressible as x = (x1, x2, ..., xn) where xi ∈ Xi for i ∈ N.

Risky decision problems are characterized by uncertainty about
the future outcome of a selected decision alternative. We assume then
that each alternative is represented by a simple probability distri-
bution over X, that is, a distribution which assigns probability one
to a finite set of outcomes. Thus we can express a decision
alternative by the lottery p = α1x^1 + α2x^2 + ... + αmx^m, where
outcome x^j has probability αj for j = 1, 2, ..., m and Σj αj = 1.
Let P denote the
decision space consisting of all simple probability distributions over
the outcome space X.
The decision maker presumably has a preference relation > over P
which is a strict weak order, that is, > is an asymmetric and nega-
tively transitive relation on P. Suppose I and Ī partition N, so X
can be loosely written as XI × XĪ. Then let PI denote the set of all
simple probability distributions over XI. For fixed xĪ ∈ XĪ and
pI ∈ PI, (pI, xĪ) denotes that alternative in P which has marginal
probability pI on XI and assigns probability one to xĪ on XĪ. The
conditional preference order >y induced on PI by the preference order
> on P and a fixed element y in XĪ is defined by

pI >y qI if and only if (pI, y) > (qI, y),      (10)

where pI, qI ∈ PI.

Conditional orders are useful in discussing different types of
interactions and in understanding independence assumptions for various
multifactor utility models.
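As an illustration (not from the paper), suppose > happens to be represented by expected utility for a nonadditive two-criterion utility u; then the conditional order defined in (10) can genuinely depend on y. The utility function and lotteries below are hypothetical, chosen so that the order reverses:

```python
# Minimal sketch of the conditional preference order (10), assuming
# > is represented by expected utility.  u, p_I, and q_I are all
# illustrative choices, not taken from the text.
def u(x):
    a, b = x
    return a + b - 0.6 * a * b     # interaction term makes u nonadditive

def expected_u(lottery):           # lottery: dict outcome -> probability
    return sum(pr * u(x) for x, pr in lottery.items())

def prefers_given(p_I, q_I, y):
    """p_I >_y q_I  iff  (p_I, y) > (q_I, y): fix the complement at y."""
    lift = lambda lot: {(x, y): pr for x, pr in lot.items()}
    return expected_u(lift(p_I)) > expected_u(lift(q_I))

p_I = {1: 0.5, 0: 0.5}             # 50/50 lottery on the first criterion
q_I = {0.6: 1.0}                   # sure outcome 0.6 on the first criterion
print(prefers_given(p_I, q_I, y=0))   # False: q_I preferred when y = 0
print(prefers_given(p_I, q_I, y=2))   # True: the order reverses at y = 2
```

With an additive u the conditional order would be the same for every y; the reversal here is exactly the kind of interdependency the classification below addresses.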
Category 1: Artificial interactions
Some interactions lack substantive meaning in the decision pro-
cess because they are induced by the particular analytic procedure
used. These artificial interactions are nuisance effects which are
typically ignored or removed. However, not all artificial inter-
actions are recognized as such in practice.

For example, the analysis of the diamond appraisal experiment in

Section 2 presumed the linear response model in (7). The estimated
effects in Figure 3 indicated the AB and AC interactions were possibly
significant, while the BC and ABC interactions were not. On the other
hand if the underlying response model is assumed to be multiplicative,
an appropriate response measure is log(price) rather than price. This
model provides a much better fit to the data and gives zero estimates
for the AC, BC, and ABC interactions. Thus the AC interaction esti-
mate can be removed by appropriate rescaling of the response measure,
though the AB interaction estimate is significant in either case.
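A sketch of this rescaling on the hypothetical Figure 3 responses: recomputing the interaction estimates after replacing each price by log(price) drives the AC, BC, and ABC estimates essentially to zero, while AB survives:

```python
import math

# Artificial-interaction illustration: the AC estimate is sizable on
# the raw price scale but essentially vanishes on the log(price)
# scale, as a multiplicative response model predicts.
price = {(0, 0, 0): 80, (1, 0, 0): 200, (0, 1, 0): 85, (1, 1, 0): 400,
         (0, 0, 1): 140, (1, 0, 1): 350, (0, 1, 1): 150, (1, 1, 1): 700}

def effect(data, participates):
    total = 0.0
    for levels, resp in data.items():
        sign = 1
        for lev, p in zip(levels, participates):
            if p:
                sign *= (-1) ** (lev + 1)
        total += sign * resp
    return total / 8

log_price = {lev: math.log(v) for lev, v in price.items()}
for label, part in [("AC", (1, 0, 1)), ("BC", (0, 1, 1)),
                    ("ABC", (1, 1, 1)), ("AB", (1, 1, 0))]:
    print(f"{label:4s} price {effect(price, part):8.3f}   "
          f"log(price) {effect(log_price, part):8.4f}")
```

The AB estimate stays clearly nonzero on both scales, matching the statement that only the AC (and BC, ABC) estimates are removable by this transformation.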
Anderson (1969, 1970, 1972) deals with similar issues in func-
tional measurement, which is concerned with the dual problem of estab-
lishing quantitative laws of behavior and valid measurement scales.
Functional measurement employs factorial designs to detect possible
interactions and then applies monotone transformations to the response
scale to determine if any interactions in the behavioral model can be
removed.
The study of transformations to remove interaction effects is a
familiar one in statistics and psychology. Ramsay (1977, p. 86)
suggests three classes of transformation problems: (1) transformation
of the response variable with all the factors fixed, (2) transforma-
tion of one or more of the factors with the response variable fixed,
and (3) transformation of the response variable and one or more of the
factors.
Transformations of the response variable alone have received
foremost attention. Tukey (1949), Cox (1958, pp. 105-106), and Winer
(1971, pp. 449-452) describe the extent to which interactions depend
on the choice of a response scale. Scheffe (1959, pp. 94-98) examines
the problem of transforming the response scale to eliminate a known
interaction in two-factor experiments. In most cases, however, the
functional forms of the interactions are unknown, so the selection of
transformations is based on an examination of interaction plots, an
analysis of residuals, and other aids.10 Bogartz and Wackwitz (1971)
present a general procedure for obtaining a polynomial transformation
of the response scale which minimizes selected sources of interaction
in a factorial experiment. Although their procedure does not guar-
antee the transformation is monotonic, it is nonetheless a powerful
tool for annihilating interactions. As Bogartz and Wackwitz (1971,
p. 441) say, "A wave of the wand and evidence against additivity dis-
appears without even a poof."
On the other hand, several authors consider monotonic rescaling
of the response variable to attain additivity. Ramsay (1977) dis-
cusses several monotonic weighted power transformations for each of
the three classes above. His approach is based on a quadratic pro-
gramming algorithm to optimize the fit to an additive model. Kruskal
(1965) describes a monotonic analysis of variance procedure (MONANOVA)
that rescales ordinal responses to minimize departures from additi-
vity. This method is iterative in nature and does not produce an
explicit form for the transformation. Green (1973), Green and Carmone
(1974), and others have applied MONANOVA to a variety of decision
problems in marketing. Further background on obtaining additive
representations is found in Krantz et al. (1971, chapters 6, 9) and
Green and Wind (1973, chapter 4).
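The flavor of such rescaling methods can be conveyed with a crude sketch. This is not Kruskal's iterative MONANOVA procedure; it simply grid-searches a monotone Box-Cox family for the transform that minimizes the interaction contrast of a made-up multiplicative 2 x 2 table.

```python
import math

# Made-up multiplicative table: additive on the log scale by construction.
table = {(a, b): math.exp(1.0 + 0.6 * a + 0.4 * b)
         for a in (0, 1) for b in (0, 1)}

def transform(y, lam):
    # Box-Cox family: strictly increasing in y for every lam.
    return math.log(y) if lam == 0 else (y ** lam - 1.0) / lam

def interaction(lam):
    f = lambda a, b: transform(table[(a, b)], lam)
    return abs(f(1, 1) - f(1, 0) - f(0, 1) + f(0, 0))

candidates = [l / 10.0 for l in range(-10, 11)]
best = min(candidates, key=interaction)   # log transform (lam = 0) wins here
```

Unlike the polynomial approach of Bogartz and Wackwitz, every member of this family is monotone, so the rescaling cannot manufacture additivity by destroying the response ordering.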
Another class involves transformations on one or more factors
while the response variable is fixed. If the functional form of an

interaction is not too complicated and is known from either past

experience or theoretical considerations, it is often desirable to
reformulate the factors and remove the interaction. One of the new
factors can explicitly represent the known interaction and thus

lOSee Anscombe (1961, 1973), Anscombe and Tukey (1963), Box and
Cox (1964), Cox (1958), Davies (1967), and others.

provide a simpler response model. Keeney and Raiffa (1976, pp. 256-
257) give the following example,
Let Y and Z designate measures of the crime rate in
the two sections of a city, respectively. ... The relative
ordering of lotteries for criminal activity in one section
may depend very much for political reasons on the level of
crime in the other section. However, suppose we define
S = (Y + Z)/2 and T = |Y - Z|. Then S may be interpreted
as some kind of an average crime index for the city and T
is an indicator of the balance of criminal activity between
the two sections. Although there may be no simplifying
preference assumptions in Y x Z space, such properties may
exist in S x T space.
The YZ interaction is then regarded as artificial because it is
unnecessary. The new factors S and T provide a clear understanding
of how one might utilize the criminal activity data. This simple
structure is ignored in the original formulation with Y and Z factors.
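The point of the example can be checked numerically. Below is a minimal sketch with a made-up disutility (not from the source) that penalizes the average crime level S and the imbalance T: the function exhibits a nonzero interaction contrast in Y x Z but is exactly additive in S x T.

```python
def u_yz(y, z):
    # Hypothetical disutility: average crime level plus an imbalance penalty.
    s = (y + z) / 2.0   # average crime index S
    t = abs(y - z)      # balance indicator T
    return -s - t

# Two-factor interaction contrast in the original Y x Z formulation:
yz_contrast = u_yz(1, 1) - u_yz(1, 0) - u_yz(0, 1) + u_yz(0, 0)

# The same preferences re-expressed over the new factors S and T are additive,
# so the corresponding contrast vanishes:
def u_st(s, t):
    return -s - t

st_contrast = u_st(1, 1) - u_st(1, 0) - u_st(0, 1) + u_st(0, 0)
```

The nonzero `yz_contrast` comes entirely from the |Y - Z| term, which is not separable in Y and Z; in the (S, T) coordinates that term becomes a main effect.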
If the functional form of the interaction is not known in ad-
vance, then one has little basis for reformulating the factors. One
can attempt various transformations, but unless the new factors are
easy to interpret or the resulting model is additive, there does not
seem to be much practical advantage in this effort.
Bogartz and Wackwitz (1970) list three common sources of artificial
interactions: (1) floor and ceiling effects, (2) rating biases,
and (3) intervening variables. Figure 4(ii) illustrates an inter-
action between color and size which is attributable to the floor
effect of size on the response. The use of logarithmic and exponen-
tial transformations on the response scale often reduces floor and
ceiling effects. Rating biases occur, for example, when a subject
exhibits an aversion for the ends of a rating scale or anchors his
ratings on a particular part of the scale. Such biases can be diffi-
cult to remove with transformations. When an intervening variable
between the factors and the response is overlooked, the interactions
among factors are considered artificial because a more parsimonious
description of the decision process exists. There is no reason to use
interdependent factors like "left shoes" and "right shoes" if a model

based on "pairs of shoes" can eliminate this interaction.

Category 2: Ordinal interactions
A common method of displaying interactions in two- or three-
factor experiments is to plot a set of response curves for one factor
conditioned on different levels of the remaining factors. If the
response curves are parallel, then no interaction is present. If the
response curves are not parallel but the ranking of responses asso-
ciated with levels of one factor is the same for all conditional
levels of the other factors, the interaction is said to be ordinal.
A more precise definition follows from the notion of preference
independence.

Definition 1: X_I is preference independent of X_Ī, denoted
X_I(PI), if and only if, >_y = >_z on X_I for all y, z ∈ X_Ī.

For example, X_i(PI) implies that the preference ranking for levels in
X_i is unaffected by changes in the levels of the remaining factors.
With minor structural assumptions, one can show that X_I(PI) for all
I ⊂ N is necessary and sufficient for an additive representation of
preferences over sure outcomes.11 For n ≥ 3, we have

Definition 2: The factors X_1, ..., X_n have an ordinal inter-
action if and only if, X_i(PI) for all i ∈ N and not X_I(PI) for some
I ⊂ N.12

11Adams and Fagot (1959), Debreu (1960), Fishburn (1966, 1970),
Gorman (1968), Keeney and Raiffa (1976), Krantz et al. (1971), Luce
and Tukey (1964), and many others discuss riskless, additive models.
When n = 2, further assumptions are needed to guarantee the exist-
ence of an additive representation. See Krantz et al. (1971,
section 6.2).

12If X_I(PI) for all I ⊂ N and n ≥ 3, then the interaction can be
removed by a monotonic rescaling of the response variable. In most
cases, the interaction is therefore regarded as artificial, though
there are exceptions (e.g., see Anderson (1970, p. 164)).

The individual preference independence assumption, X_i(PI) for all
i ∈ N, implies that the qualitative interpretation of main effects is
unaffected by interactions.13 In Figure 1, for example, higher levels
are preferred to lower levels for each factor, regardless of the other
factor levels--improved clarity, better color, and larger size always
mean a higher price. Even though color x size is not preference inde-
pendent of clarity and hence an ordinal interaction exists, it makes
no qualitative difference on the main effects.
If in addition to the assumption X_i(PI) for all i ∈ N the levels
of each factor are monotonically increasing in preference, as in the
example above, then the factors are conditionally monotone. Several
researchers have shown that additive models give excellent approxima-
tions in practice when the factors are conditionally monotone.14
The interpretation of ordinal interactions in utility theory is
somewhat more complicated, but maintains some important similarities
with the present discussion.
Category 3: Configural interactions
If the ranking of responses associated with the levels of one
factor is not the same for all levels of the other factors, the inter-
action is not ordinal. Such interactions are called configural,

Definition 3: The factors X_1, ..., X_n have a configural inter-
action if X_i is not preference independent of X_ī for at least one
i ∈ N.

13See Cox (1958), Lindquist (1953), Lubin (1961), and Scheffe (1959).

14See Dawes and Corrigan (1974), Slovic and Lichtenstein (1971),
Yntema and Torgerson (1961), and the references cited therein.

Figure 5 illustrates the difference between ordinal and config-

ural interactions in a plot of preference responses for various
entree-vegetable combinations. In comparing the response curves for
steak and chicken, we note that the interaction between this subset
of entrees and vegetables is ordinal since for x E (steak, chicken},

(x, potato) > (x, salad) > (x, corn) > (x, peas), (11)

and for y E (potato, salad, corn, peas},

(steak, y) > (chicken, y). (12)

Even though these two responses curves are not parallel in Figure 5,
the ordinal nature of the interaction allows us to answer questions
about the main effect of each factor. For example, which has a
greater effect on the response, potato or salad? When configural
interactions are present, such questions about main effects may have
no answer.

[Figure 5: Preference responses for entree-vegetable combinations;
response curves for steak, chicken, and lobster plotted over the
vegetables potato, salad, corn, and peas.]
If we now compare lobster with the other entrees, we note that
the conditional preference order over vegetables is not the same as
(11), because (lobster, salad) > (lobster, potato) in Figure 5. This
crossover effect produces a configural interaction between entrees and
vegetables. Unlike ordinal interactions, configural interactions can-
not be removed by monotonically rescaling the responses, though it is
possible in some cases to transform or redefine the factors to remove
configural interactions.
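That no monotone rescaling can remove a crossover is easy to verify numerically. The sketch below uses made-up ratings (the figure's exact values are not tabulated in the text) with the lobster-style reversal, and checks that several strictly increasing transforms all preserve it.

```python
import math

# Hypothetical preference ratings exhibiting a crossover (configural) pattern:
r = {('steak', 'potato'): 9.0, ('steak', 'salad'): 7.0,
     ('lobster', 'potato'): 6.0, ('lobster', 'salad'): 8.0}

def crossover(f):
    # True if the potato-vs-salad ordering reverses between entrees
    # after rescaling every response by the monotone transform f.
    d_steak = f(r[('steak', 'potato')]) - f(r[('steak', 'salad')])
    d_lobster = f(r[('lobster', 'potato')]) - f(r[('lobster', 'salad')])
    return d_steak * d_lobster < 0

# Identity, square root, logarithm, and cube are all strictly increasing,
# so each preserves the within-entree rankings and hence the crossover.
results = [crossover(f)
           for f in (lambda x: x, math.sqrt, math.log, lambda x: x ** 3)]
```

An additive model would force the sign of the potato-minus-salad difference to be the same for every entree, which no increasing transform can arrange here.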
Configural interactions are not uncommon in practice. Instances
of configural interactions appear in the evaluation of poultry stock
(Farquhar, 1977b), preferences for food item combinations (Green and
Devita, 1974, 1975), computer performance of time-sharing systems
(Grochow, 1972), diagnoses of ulcer malignancies (Hoffman, Slovic, and
Rorer, 1968), attractiveness of stocks (Slovic, 1969), learning speed
of linguistic associations (Wallace and Underwood, 1964), and hetero-
sexual somatic preferences (Wiggins and Wiggins, 1969), to name a few.
Farquhar and Rao (1976) examine configural interactions resulting from
balance or complementarity in evaluating subsets of multiattributed
items. Keeney and Raiffa (1976) cite several cases where one or more
factors are not preference independent, so configural interactions are
present. Additional examples appear in Farquhar (1974) and Lisnyk
(1977).
The presence of configural interactions sometimes indicates there
is a more fundamental explanation of response than provided by the
original factors themselves.
For example, the factors pressure and temperature
may affect the yield of a chemical reaction only through
that combination of pressure and temperature that deter-
mines the frequency of molecular collisions. In fact,
the physical sciences contain many examples of systems

in which some observation on the system depends only on

a simple combination of factor values. (Cox, 1958;
pp. 122-123).
One ordinarily searches for the reasons behind a configural inter-
action to obtain a simpler formulation of the problem. Lubin (1961)
makes it quite clear, however, that one cannot deal with configural
interactions by ignoring them.
One can generalize the notion of preference independence to allow
for complete reversals of conditional preference and for complete
indifference.

Definition 4: X_I is generalized preference independent of X_Ī,
denoted X_I(GPI), if and only if, there exists a nonempty >_0 on X_I
such that for all y ∈ X_Ī, >_y ∈ {>_0, >_0*, 0}.

We note that >* is the dual order of > defined by p >* q iff q > p for
p, q ∈ P. Also, > = 0 indicates complete indifference, p ~ q for all
p, q ∈ P. Generalized preference independence is defined by restrict-
ing conditional preference orders on P_I to X_Ī. Examples of preference
reversals are given in Fishburn (1974), Fishburn and Keeney (1974,
1975), and Krantz et al. (1971, pp. 329-339); the latter refers to
generalized preference independence as sign dependence.
An extension of sign dependence is illustrated in an example of
preferences for entrees and table wines. Suppose the levels of factor
A are various table wines available at a particular restaurant. Fur-
ther, suppose the levels of factor B are the entrees available, which
we assume are partitioned into the following classes: B1 (red meats),
B2 (white meats), B3 (poultry), and B4 (seafood). The conditional
preferences for wines in A are presumably the same for entrees within
a class Bj, but vary across classes for j = 1, ..., 4. Thus wines are
class dependent on entrees. If there are only two classes and the
[...] are currently investigating class dependent structures in utility
theory. Somewhat related work appears in vector preference models in
multidimensional scaling.15
In Section 4, we examine interaction concepts in utility analysis
and discuss several places where configural interactions appear.
Category 4: Holistic interactions
The multiple criteria approach to problem solving is not very
productive if nearly every set of factors exhibits configural inter-
actions with a high degree of interdependence. In this case, the
interaction among factors is holistic since there is no practical
advantage in maintaining a factorial structure--the outcomes are
essentially indecomposable.
For example, in analyzing the properties of certain chemical
compounds, one might choose a natural set of factors, the elements.
Unfortunately one cannot explain the properties of chemical compounds
very well from the separate properties of their constituent elements.
The interactions that exist between the elements in each compound so
dominate other effects that there is no simple structure in a facto-
rial description. Other problems also exhibit inextricably dependent
Strauch (1974) provides further insights into the limitations of
multiple: criteria problem solving when holistic interactions are


In addition to assuming that the preference relation > on P is a
strict weak order, we assume that the von Neumann-Morgenstern (1947)
axioms are satisfied. This guarantees the existence of a real-valued
function u on X such that for all p, q ∈ P,

p > q if and only if Σ_x p(x)u(x) > Σ_x q(x)u(x), (13)

where p(x) denotes the probability of outcome x occurring under deci-
sion p. The function u is called a utility function for the prefer-
ence order > on P.

15See Green and Carmone (1970, chapter 4) for additional refer-
ences, and Green and Devita (1974, 1975) for a specific application.
It can be shown that if u and v are two utility functions satis-
fying (13), then there exist real numbers α and β with β > 0 such that
v(x) = α + βu(x) for all x ∈ X. Thus a utility function is unique up
to positive linear transformations.
The direct assessment of a multifactor utility function is
usually a difficult task, so researchers have developed procedures
which split the assessment into several manageable subtasks. The
decomposition approach in multi factor utility theory employs indepen-
dence conditions, which specify various properties for preference
orders on P. If these properties are verified empirically, the theory
shows how the utility function is determined by several functions
involving fewer factors. Typically this collection of functions is
much easier to assess than the original n-factor utility function
itself, so the decomposition approach greatly simplifies the assess-
Additive independence
There are several ways of deriving an additive model. The
following is motivated in part by our earlier discussion of two-factor
interactions in Zn factorial designs,

Definition 5: The factors X_1, ..., X_n are additive independent if
and only if, for all distinct i, j ∈ N, all x_i^0, x_i^1 ∈ X_i, all
x_j^0, x_j^1 ∈ X_j, and all y ∈ X_Ī with I = {i, j},

u(x_i^0, x_j^0, y) - u(x_i^1, x_j^0, y) - u(x_i^0, x_j^1, y) + u(x_i^1, x_j^1, y) = 0. (14)

The factors X_1, ..., X_n are interdependent if they are not additive
independent.

When X = X_1 x ... x X_n, additive independence is necessary and
sufficient for an additive utility decomposition,16

u(x_1, ..., x_n) = Σ_{i∈N} c_i u_i(x_i). (15)


For convenience, we assume that there exist x^0 = (x_1^0, ..., x_n^0)
and x^1 = (x_1^1, ..., x_n^1) in X such that u(x^0) = 0 and u(x^1) = 1.
Also, let u_i(x_i^0) = 0 and u_i(x_i^1) = 1, where u_i(x_i) for i ∈ N
is a utility function for the preference order >_i on P_i conditioned
on the element (x_1^0, ..., x_{i-1}^0, x_{i+1}^0, ..., x_n^0) in X_ī.
The constants c_i in (15) are chosen to scale the conditional utility
functions consistently.

Additivity is not surprising in (15) if we consider utility as
the response in a general factorial experiment. The condition in (14)
implies not only that the X_iX_j interaction is zero, but also that all
interactions involving X_i and X_j are zero (since the simple X_iX_j
interaction at level y is identically zero for all y ∈ X_Ī). Thus
when (14) holds for all i, j ∈ N, all interactions vanish and (15)
results.
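The claim that the contrast in (14) vanishes everywhere under an additive utility is easy to check directly. Below is a minimal sketch with made-up constants c_i over three two-level factors (the numbers are illustrative, not from the text).

```python
# An additive utility over three two-level factors; the c_i are made-up
# constants and the conditional utilities satisfy u_i(0) = 0, u_i(1) = 1.
c = [0.5, 0.3, 0.2]

def u(x):
    return sum(ci * xi for ci, xi in zip(c, x))

def pair_contrast(i, j, y_level):
    # The condition in (14): the simple X_i x X_j contrast with the
    # remaining factor held fixed at y_level.
    def outcome(ai, aj):
        x = [y_level] * 3
        x[i], x[j] = ai, aj
        return u(x)
    return outcome(0, 0) - outcome(1, 0) - outcome(0, 1) + outcome(1, 1)

contrasts = [pair_contrast(i, j, y)
             for i in range(3) for j in range(3) if i != j
             for y in (0, 1)]
```

Every pairwise contrast is zero at every conditioning level, which is exactly what additive independence requires.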
There are other ways of producing zero interactions in a utility
model. For example, assume that for I ⊂ N and all x_I^0, x_I^1 ∈ X_I,
x_Ī^0, x_Ī^1 ∈ X_Ī, the following condition holds,

u(x_I^0, x_Ī^0) - u(x_I^1, x_Ī^0) - u(x_I^0, x_Ī^1) + u(x_I^1, x_Ī^1) = 0. (16)

This condition directly eliminates interactions between factors in I
and factors in Ī.

16See Debreu (1959), Engelbrecht (1977), Farquhar (1974, 1975),
Fishburn (1965a, 1973, 1974, 1977a,b), Pollak (1967), and Richard
Value independence is another way of producing additivity.

Definition 6: The factors X_1, ..., X_n are value independent,
denoted X_1, ..., X_n(VI), if and only if, for all p, q ∈ P, p ~ q when-
ever p_1 ~ q_1, ..., p_n ~ q_n.

Although Definitions 5 and 6 are equivalent, the former is easier to
test empirically because it employs even-chance lotteries over out-
comes that differ on only two factors at a time.
Interdependent additivity
Fishburn (1967a, 1970) presents an interdependent additive util-
ity representation that can account for configural interactions. Sup-
pose S_1, ..., S_m are proper subsets of N and S_1 ∪ ... ∪ S_m = N.
Recalling that the marginal distribution of p ∈ P on X_{S_i} is denoted
by p_{S_i}, one can generalize Definition 6 as follows.

Definition 7: The composite factors X_{S_1}, ..., X_{S_m} are value
independent, denoted X_{S_1}, ..., X_{S_m}(VI), if and only if, for all
p, q ∈ P, p ~ q whenever p_{S_1} ~ q_{S_1}, ..., p_{S_m} ~ q_{S_m}.

For a fixed outcome x^0 = (x_1^0, ..., x_n^0) and any outcome x =
(x_1, ..., x_n) ∈ X, let x[S] denote the outcome with level x_i if i ∈ S
and level x_i^0 if i ∉ S, for all i ∈ N. Then,

Theorem 1 (Fishburn, 1967a): If X_{S_1}, ..., X_{S_m}(VI), then the
utility function u on X has an interdependent additive decomposition,

u(x) = Σ_{i=1}^m u(x[S_i]), (17a)

which can be rewritten as

u(x) = Σ_{i=1}^m v_{S_i}(x_{S_i}), (17b)

where the functions v_{S_i} on X_{S_i} in (17b) are derived directly from the
conditional utilities in (17a).

The idea behind interdependent additivity is that if one can find
a collection of (possibly overlapping) subsets of factors that are
value independent, then one has identified the factors which interact
configurally. This allows one to reformulate the factors, in a sense,
to achieve an additive representation. Additional work remains on
implementing these results in utility analysis.

In another paper, Fishburn (1972) describes the degree of inter-
dependence of a preference order > on X, denoted D(>, X), as the
largest number of factors in an interaction needed for an interdepen-
dent additive representation of > on X. If D(>, X) = 1, then the
utility function is additive. On the other hand, D(>, X) = n corre-
sponds to complete interdependence among X_1, ..., X_n, as in the prepa-
ration of a pharmaceutical drug, for example, where n ingredients must
be combined in precise proportions. The degree of interdependence is
also related to the concept of resolution in fractional replication
(e.g., see John (1971) and Raghavarao (1971)).
Utility independence
Preference independence is defined for conditional orders on sure
outcomes (see Definitions 1 and 4). Its analog in risky decision
problems is utility independence.

Definition 8: X_I is utility independent of X_Ī, denoted X_I(UI), if
and only if, >_y = >_z on P_I for all y, z ∈ X_Ī.

Since X_I(UI) implies that all conditional preference orders on P_I
are identical, all conditional utility functions on X_I must preserve
the same order and consequently are related by positive linear
transformations. Thus for an arbitrary fixed element y^0 ∈ X_Ī,

u(x_I | y) = α(y) + β(y) u(x_I | y^0), (18)

for all x_I ∈ X_I and all y ∈ X_Ī, where α and β are real-valued func-
tions on X_Ī with β > 0, and u(x_I | y) is a utility function on X_I for
the conditional preference order >_y on P_I.
One can extend utility independence in the same manner as prefer-
ence independence to include complete reversals of preference and com-
plete indifference.

Definition 9: X_I is generalized utility independent of X_Ī,
denoted X_I(GUI), if and only if, there exists a nonempty >_0 on P_I such
that for all y ∈ X_Ī, >_y ∈ {>_0, >_0*, 0}.

The result in (18) is used to prove

Theorem 2 (Keeney, 1972): If X_i(UI) for all i ∈ N, then the
utility function u on X has a multilinear (or quasi-additive) decom-
position,

u(x_1, ..., x_n) = Σ_{I⊆N} c_I [∏_{i∈I} u_i(x_i)], (19)

for all (x_1, ..., x_n) ∈ X, where the c_I are scaling constants
defined by

c_I = Σ {(-1)^{|I| + Σ_{i∈I} a_i} u(x_1^{a_1}, ..., x_n^{a_n}) : a_i ∈ {0, 1} for i ∈ I, a_i = 0 for i ∈ Ī}, (20)

for all I ⊆ N, where |I| is the size of the set I.

For example, the multilinear utility decomposition with n = 3
factors is

u(x_1, x_2, x_3) = c_1u_1(x_1) + c_2u_2(x_2) + c_3u_3(x_3)
+ c_12u_1(x_1)u_2(x_2) + c_13u_1(x_1)u_3(x_3) + c_23u_2(x_2)u_3(x_3)
+ c_123u_1(x_1)u_2(x_2)u_3(x_3). (21)

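A sketch of evaluating the three-factor multilinear form, with made-up scaling constants (chosen here to sum to 1, so that u(x^1) = 1) and identity conditional utilities satisfying u_i(0) = 0 and u_i(1) = 1. The numbers are illustrative only.

```python
# Made-up scaling constants for the multilinear decomposition, n = 3.
c = {(1,): 0.4, (2,): 0.3, (3,): 0.1,
     (1, 2): 0.1, (1, 3): 0.05, (2, 3): 0.03, (1, 2, 3): 0.02}

def u_i(x):
    # Conditional utility, scaled so u_i(0) = 0 and u_i(1) = 1.
    return x

def u(x1, x2, x3):
    # Sum over nonempty subsets I of c_I times the product of u_i, i in I.
    xs = {1: x1, 2: x2, 3: x3}
    total = 0.0
    for subset, ci in c.items():
        term = ci
        for i in subset:
            term *= u_i(xs[i])
        total += term
    return total
```

At the base outcome every product vanishes, so u(0, 0, 0) = 0; at the unit outcome every product is 1, so u(1, 1, 1) is the sum of all the scaling constants.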
When the factors are jointly utility independent instead of indi-
vidually utility independent, one obtains a stronger decomposition,

Theorem 3 (Pollak, 1967; Keeney, 1974): If X_I(UI) for all I ⊂ N,
then the utility function u on X has either an additive decomposition,

u(x_1, ..., x_n) = Σ_{i∈N} c_i u_i(x_i), (22)

or a multiplicative decomposition,

u(x_1, ..., x_n) = Σ_{I⊆N} k^{|I|-1} [∏_{i∈I} c_i u_i(x_i)], (23)

where k is obtained from the equation 1 + k = ∏_{i∈N}(1 + k c_i).

One observes that individual utility independence, X_i(UI) for
i ∈ N, does not imply joint utility independence, X_I(UI) for I ⊂ N.
One reason is that in general c_I ≠ ∏_{i∈I} c_i. The assumption of joint util-
ity independence is equivalent to various sets of conditions that are
much simpler to test in practice.17 One also observes that as the
number of factors increases, it becomes difficult to assess the 2^n - 1
scaling constants in the multilinear decomposition, but the additive-
multiplicative decomposition requires only n scaling constants.
Given the joint utility independence assumption of Theorem 3,
there are several ways of determining whether or not the decomposition
is additive. Keeney (1974) shows that the additive decomposition
holds if and only if c_1 + ... + c_n = 1 (or equivalently, when k = 0).
Another method of determining additivity is to find some x_i^0, x_i^1 ∈ X_i,
x_j^0, x_j^1 ∈ X_j, and y ∈ X_Ī with I = {i, j}, such that

<(x_i^1, x_j^1, y), (x_i^0, x_j^0, y)> ~ <(x_i^1, x_j^0, y), (x_i^0, x_j^1, y)>, (24)

where <p, q> denotes an even-chance lottery between p and q. When the
factors are jointly utility independent, one indifference judgment of
the type in (24) is sufficient for additivity. On the other hand,
several indifference judgments of this type can establish additivity
without a test for joint utility independence (see Definition 5).

17For example, see Fishburn and Keeney (1974), Gorman (1968),
Keeney (1974), Keeney and Raiffa (1976), Meyer (1970), Pollak (1967),
and others.
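The equation 1 + k = ∏(1 + kc_i) generally must be solved numerically for k. Below is a bisection sketch with made-up constants c_i summing to 0.8; since the sum is less than 1, the nonzero root is positive.

```python
# Solve 1 + k = prod_i(1 + k*c_i) for the multiplicative decomposition
# constant k. The c_i are made-up; sum(c) = 0.8 < 1 forces k > 0.
c = [0.4, 0.3, 0.1]

def residual(k):
    prod = 1.0
    for ci in c:
        prod *= 1.0 + k * ci
    return prod - (1.0 + k)

def solve_k(lo=1e-9, hi=100.0):
    # residual < 0 just above k = 0 (since sum(c) < 1) and > 0 for large k,
    # so bisection brackets the nonzero root.
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if residual(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

k_hat = solve_k()
```

If instead the c_i summed to 1 exactly, the text's additivity criterion applies and k = 0, collapsing the multiplicative form to the additive one.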
As one progresses from an additive model to various nonadditive
models, it is helpful to separate factorial effects and risk effects
whenever possible. The multiplicative decomposition in (23) has
interdependencies which are induced by risk, because a multiplicative
utility is equivalent to an additive utility in the absence of risk.
In other words, there exists an additive utility which gives the same
ranking of outcomes in X as a multiplicative utility. One also
observes no factorial interactions in a multiplicative decomposition,
because X_I(UI) implies X_I(PI) for all I ⊂ N.

On the other hand, the multilinear decomposition in (19) exhibits
interdependencies depending on both factorial and risk effects. The
risk effects apparently give the same structure as (23). An unusual
aspect of the multilinear utility decomposition is that factorial
interactions appear only in the scaling constants. Later on, we con-
sider the relationship between fractional replication and various
utility models. The multilinear decomposition in (21), for example,
corresponds to the fractional replicate {110, 101, 011, 111}, which
allows estimation of main effects assuming all interactions are zero.
Furthermore, the alias structure of this fractional replicate explains
how main effects are confounded with interaction components in the
multilinear decomposition.
Since X_i(UI) implies X_i(PI) for i ∈ N, the factorial interactions
in the multilinear decomposition are ordinal. Thus the qualitative
interpretation of the main effects u_i(x_i) is unaffected by such
interactions. This observation has been overlooked in many applica-
tions.
Fishburn and Keeney (1975) show that one can obtain necessary and
sufficient conditions for the decompositions in Theorems 2 and 3 by
replacing utility independence with generalized utility independence.
In the multilinear model, the factorial interactions are ordinal when
X_i(UI) for i ∈ N, but they are configural when X_i(GUI) for i ∈ N. The
configural relationship, however, allows only complete reversals of
conditional preference orders or complete indifference, so configural
interactions are quite restricted.
Scaling constants
The interpretation of the scaling constants in utility decomposi-
tions is almost the same as the interaction effects in 2^n factorial
designs. If we modify the star notation slightly, the scaling con-
stants in (20) can be expressed as

c_I = u(x_1^{a_1}, ..., x_n^{a_n}), where a_i = * if i ∈ I and a_i = 0 if i ∉ I, for I ⊆ N.

Thus the scaling constant c_I is analogous to the interaction effect
among X_i for i ∈ I, in a 2^n factorial experiment. Figure 2 therefore
depicts the scaling constants for n = 3 factors, and Figure 3 (with no
divisor) shows how to calculate the scaling constants from the "corner
utilities." Farquhar (1974, chapter 7), Fischer (1976, pp. 140-141),
and Keeney (1972, pp. 282-285) further discuss the meaning of scaling
constants in utility decompositions.
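The "no divisor" calculation of scaling constants from corner utilities amounts to an alternating-sign contrast (a Moebius inversion over subsets). The sketch below uses made-up constants, builds the 2^3 corner utilities they imply, and recovers the constants from the corners.

```python
from itertools import combinations

# Made-up multilinear scaling constants, indexed by subsets of {0, 1, 2}.
c_true = {(): 0.0, (0,): 0.4, (1,): 0.3, (2,): 0.1,
          (0, 1): 0.1, (0, 2): 0.05, (1, 2): 0.03, (0, 1, 2): 0.02}

def corner(a):
    # Corner utility at binary outcome a: with u_i(0) = 0 and u_i(1) = 1,
    # only the c_I with I inside the support of a contribute.
    support = {i for i, ai in enumerate(a) if ai == 1}
    return sum(v for I, v in c_true.items() if set(I) <= support)

def recover(I):
    # Alternating-sign contrast of the corner utilities over subsets of I.
    total = 0.0
    for r in range(len(I) + 1):
        for J in combinations(I, r):
            a = tuple(1 if i in J else 0 for i in range(3))
            total += (-1) ** (len(I) - len(J)) * corner(a)
    return total

recovered = {I: recover(I) for I in c_true}
```

Each constant comes back exactly, mirroring the Figure 3 calculation with the averaging divisor of the factorial analysis omitted.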
We also observe that the factors in I are independent of the fac-
tors in Ī (see equation (14)) if and only if, all scaling constants of
the form c_J, where J intersects both I and Ī, are zero. This is the
only way to eliminate interactions between factors in I and factors in
Ī in the multilinear decomposition, for example.

Partial decompositions
There are important differences between the concepts of inter-
action in utility theory and those in factorial experiments. The
introduction of risk in a decision problem yields interdependencies
not found in factorial experiments. Also, some interactions in util-
ity theory are directional in nature. For example,

In studying the health hazards posed by environmental
pollutants, two factors are widely used: X_1, the minimum
acute toxicity of the effluent, and X_2, the ambient back-
ground concentration. Over certain ranges of each factor,
it is reasonable to assume that toxicity is utility inde-
pendent of concentration. However, concentration is never
utility independent of toxicity. Thus X_1(UI)X_2, but not
X_2(UI)X_1.

Keeney (1971) and Fishburn (1974) show that if X_1 is utility
independent of X_2 but not conversely, then the following partial
decomposition results. For obvious reasons we refer to u_i(x_i) as the
"main effect" of factor X_i. The partial decomposition in (23) also
exhibits an "interaction component" for X_2,
f_2(x_2) = [u(x_1^1, x_2) - u(x_1^0, x_2)]/c_2, which measures
the effect of changes in X_1 on the factor X_2. Since X_1 is utility
independent of X_2, the interaction component for X_1 is
f_1(x_1) = [u(x_1, x_2^1) - u(x_1, x_2^0)]/c_1 = 0. The directional nature of the inter-
action between X_1 and X_2 is thus reflected in (23) by the absence of
an X_1 interaction component and the presence of an X_2 interaction com-
ponent.

An interesting feature of the multilinear decomposition is that the
interaction components are all zero, yet interdependencies among fac-
tors are found in the scaling constants.

Partial decompositions, based on incomplete sets of utility inde-
pendence conditions, contain interaction terms which can represent
more general configural relationships. Grochow (1972), Keeney (1971),
Keeney and Raiffa (1976), and Nahas (1977) discuss several partial
decompositions. Nahas (1977) presents a scheme for classifying all
preference structures on n factors according to the collection of
utility independence conditions which are satisfied.
Main effects and interactions in utility theory
Formal definitions of main effect and interaction functions in
multi factor utility theory are presented next.
Let x^0 = (x_1^0, ..., x_n^0) be a base outcome and let x^1 =
(x_1^1, ..., x_n^1) be a unit outcome satisfying the scaling conditions
specified after (15). For arbitrary I = {i_1, ..., i_r} ⊂ N and y ∈ X_Ī,
we define the star operator by

u(x_{i_1}^*, ..., x_{i_r}^*, y) = Σ {(-1)^{r+Σ_j a_j} u(x_{i_1}^{a_1}, ..., x_{i_r}^{a_r}, y) : a_j ∈ {0, 1}, 1 ≤ j ≤ r}. (24)

If we suppress factors set at base level, the scaling constants can be
expressed as c_{i_1 ... i_r} = u(x_{i_1}^*, ..., x_{i_r}^*). We interpret (24) as a
specific measure of the X_{i_1} ... X_{i_r} interaction conditioned on level y
of the remaining factors.
A more general measure of the X_{i_1} ... X_{i_r} interaction conditioned
on y ∈ X_Ī is given by the shift operator,

u(x̃_{i_1}, ..., x̃_{i_r}, y) = Σ {(-1)^{r+Σ_j ā_j} u(x_{i_1}^{a_1}, ..., x_{i_r}^{a_r}, y) : a_i ∈ {0, blank}, 1 ≤ i ≤ r}, (25)

where ā_i = 0 iff a_i = 0 and ā_i = 1 iff a_i = blank, where "blank"
denotes the absence of any superscript. For example,

u(x̃_1, x̃_2, y) = u(x_1, x_2, y) - u(x_1^0, x_2, y) - u(x_1, x_2^0, y) + u(x_1^0, x_2^0, y).

Note that u(x̃_{i_1}, ..., x̃_{i_r}, y) = 0 whenever x_i = x_i^0 for i ∈ I.
1 r
For n = 3 factors, the main effects are defined as u_1(x_1) =
u(x̃_1)/c_1, u_2(x_2) = u(x̃_2)/c_2, and u_3(x_3) = u(x̃_3)/c_3, where
factors held at base level are suppressed.

These main effects represent a standardized increment in utility in
moving from the base level x_i^0 to some level x_i while the other fac-
tors are fixed at base level. This definition of main effects is
comparable to 2^n factorial designs with two exceptions: the shift
operator replaces the star operator, and base levels replace averages
over the remaining factors. The interpretation of main effects is
essentially the same in both cases.
The interactions are defined in a similar manner using the shift
operator.

A two-factor utility interaction is interpreted as a difference
between simple main effects in the same manner as two-factor inter-
actions in 2^n factorial designs. Similar interpretations hold for
higher-order interactions. These definitions for n = 3 factors are
readily generalized to an arbitrary number of factors.
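The star and shift operators can be sketched directly for n = 3. The utility below is made up, with a genuine X_1X_2 interaction term; the shift operator vanishes when the shifted factors sit at base level and agrees with the star operator when they sit at unit level.

```python
from itertools import product

base = (0.0, 0.0, 0.0)   # base outcome x^0; unit outcome is (1, 1, 1)

def u(x):
    # Made-up utility: additive main effects plus an X1*X2 interaction.
    return 0.5 * x[0] + 0.3 * x[1] + 0.2 * x[2] + 0.25 * x[0] * x[1]

def star(u, I, unit, y):
    # (24)-style operator: alternating sum over base/unit levels of the
    # factors indexed by I, with the remaining factors fixed at y.
    total, r = 0.0, len(I)
    for bits in product((0, 1), repeat=r):
        x = list(y)
        for pos, i in enumerate(I):
            x[i] = unit[i] if bits[pos] else base[i]
        total += (-1) ** (r + sum(bits)) * u(tuple(x))
    return total

def shift(u, I, x, y):
    # (25)-style operator: the factors in I move from base level to the
    # arbitrary levels x_i; vanishes whenever x_i = base_i for i in I.
    total, r = 0.0, len(I)
    for bits in product((0, 1), repeat=r):
        z = list(y)
        for pos, i in enumerate(I):
            z[i] = x[i] if bits[pos] else base[i]
        total += (-1) ** (r + sum(bits)) * u(tuple(z))
    return total
```

The X_1X_2 star contrast recovers the 0.25 interaction coefficient, while the X_1X_3 contrast is zero since no such term is present.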
Fractional hypercubes

A hypercube in n dimensions is the collection of 2^n binary n-
vectors, denoted {0, 1}^n. In other words, a hypercube is a complete
2^n factorial design. A fractional hypercube, like a fractional repli-
cate, is a subset of {0, 1}^n satisfying certain properties,

Definition 10: A fractional hypercube H_i for i ∈ N is a subset
of {0, 1}^n such that no two of its elements differ only in coordinate i.

Thus, a fractional hypercube is a collection of vertices in an
n-dimensional hypercube such that no two vertices lie on any edge
parallel to a specified dimension i. A primal fraction F_i is a frac-
tion containing the apex (1, ..., 1). The dual fraction F_i' of a
primal fraction F_i is defined by interchanging the levels in coordi-
nate i: (a_1, ..., a_i, ..., a_n) ∈ F_i implies (a_1, ..., 1 - a_i, ..., a_n) ∈ F_i'.

Figure 6 illustrates some fractional hypercubes for i = 1 in
three dimensions. A primal fraction is represented by a set of black
circled vertices and its dual fraction by the corresponding set of
white circled vertices in each cube.
Farquhar (1974, 1975, 1976, 1977a) explains how to use these
fractional hypercubes to obtain multiple-element conditional prefer-
ence orders and their associated generator functions. These are used
to derive various independence conditions and corresponding utility
decompositions. Since these topics are adequately covered elsewhere,
we do not discuss them here.
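The defining property described above (no two vertices on an edge parallel to dimension i) is easy to check mechanically. The vertex sets in the sketch below are made up for illustration and are not the specific fractions of Figure 6.

```python
def is_fractional_hypercube(H, i):
    # Two distinct binary vertices differ only in coordinate i exactly when
    # their projections (coordinate i dropped) coincide, so the subset is a
    # fractional hypercube for dimension i iff all projections are distinct.
    projections = [v[:i] + v[i + 1:] for v in H]
    return len(set(projections)) == len(H)

good = {(1, 1, 1), (0, 0, 1), (0, 1, 0)}   # contains the apex (1, 1, 1)
bad = {(1, 1, 1), (0, 1, 1)}               # these two differ only in x1

ok_good = is_fractional_hypercube(good, 0)
ok_bad = is_fractional_hypercube(bad, 0)
```

The second set fails because its two vertices span an edge parallel to the X_1 dimension.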
The methodology based on fractional hypercubes offers (1) a pro-
cedure for illustrating and testing various independence conditions in
utility theory, (2) a mechanism for generating additive and nonaddi-
tive utility decompositions, and (3) a means of interpreting the fac-
torial interactions found in independence conditions and utility
decompositions. Therefore the fractional hypercube approach provides
a unified method of analyzing additive and nonadditive relations in
multifactor utility theory.

[Figure 6: Primal and dual fractions for X_1 in three dimensions;
panels: (a) Apex, (b) Diagonal, (c) Quasi-pyramid, (d) Semicube.]

Alias structures in utility decompositions

We now examine the patterns of factorial interactions in utility decompositions generated by fractional hypercube methods. Consider the following identity, which is analogous to (7b):

u(x1,x2,x3) = c1u1(x1) + c2u2(x2) + c3u3(x3) + c12u12(x1,x2) + c13u13(x1,x3) + c23u23(x2,x3) + c123u123(x1,x2,x3)   (28)

The purpose of independence conditions in utility theory is to determine if the interactions in (28) can be simplified or deleted.


If X1, X2, and X3 are value independent, the interactions in (28) vanish, leaving an additive model. If X12, X13, X23 are value independent, for example, the interdependent additive form in (17a) yields (28) with c123u123(x1,x2,x3) deleted. Other sets of value independent composite factors produce similar results--selected interaction terms vanish in (28).
If an interaction term cannot be eliminated, perhaps it can be broken down into simpler components. The interaction among X_{i1}, ..., X_{ir} is separable if there are single-factor functions f_{ij} on X_{ij} such that

u_{i1···ir}(x_{i1}, ..., x_{ir}) = f_{i1}(x_{i1}) f_{i2}(x_{i2}) ··· f_{ir}(x_{ir})   (29)

Separable and nonseparable interaction terms are illustrated by the following fractional hypercube decompositions:

u(x1,x2,x3) = c1u1(x1) + c2u2(x2) + c3u3(x3) + c12u1(x1)u2(x2) + c13u1(x1)u3(x3) + c23u2(x2)u3(x3) + c123u1(x1)u2(x2)u3(x3)   (30)

u(x1,x2,x3) = c1u1(x1) + c2u2(x2) + c3u3(x3) + c12f1(x1)f2(x2) + c13f1(x1)f3(x3) + c23f2(x2)f3(x3) + c123f1(x1)f2(x2)f3(x3)   (31)

u(x1,x2,x3) = c1u1(x1) + c2u2(x2) + c3u3(x3) + c12u12(x1,x2) + c13u13(x1,x3) + c23u23(x2,x3) + c123u1(x1)u2(x2)u3(x3)   (32)

u(x1,x2,x3) = c1u1(x1) + c2u2(x2) + c3u3(x3) + c12u12(x1,x2) + c13u13(x1,x3) + c23u23(x2,x3) + c123f1(x1)f2(x2)f3(x3)   (33)

The apex decomposition in (30) is precisely the same as the

multilinear decomposition in (19). Each interaction term in (30) is
separable into a product of the main effects ui(xi). This phenomenon is called aliasing, because the main effects and interaction components are confounded. As a result one can generate the following relationship between the main effect u1(x1), for example, and particular interaction functions involving X1 and the other factors:

u1(x1) = u(x̲1, x2^0, x3^0)/c1 = u(x̲1, x2*, x3^0)/c12 = u(x̲1, x2^0, x3*)/c13 = u(x̲1, x2*, x3*)/c123   (34)

Thus in the apex or multilinear decomposition, the main effect of X1 is confounded with X1X2, X1X3, and X1X2X3 interactions. One assumes implicitly in using the multilinear form in (30) that these interactions are of little importance and need not be estimated separately. Although this assumption simplifies the assessment task tremendously, it also limits the interactions incorporated in the decomposition, as indicated by (34).


The diagonal or bilateral decomposition in (31) has separable

interactions, but main effects are not confounded with interaction
components. Instead, the function fi(xi) appears as the component in all interactions involving Xi. Suppose x̄i^0 and x̄i^1 denote all factors except i at base level and at unit level, respectively. In deriving (31), one finds that

u(x̲i, x̄i^1) - u(x̲i, x̄i^0) = fi(xi)   (35)

so fi(xi) represents the interaction between Xi from xi^0 to xi and X̄i from x̄i^0 to x̄i^1. Hence fi(xi) is a meaningful single-factor function to use as an interaction component.
The alias structure of the diagonal decomposition in (31) is
revealing. The main effects ui(xi) are not confounded, but the interaction components are confounded with each other. For example,

f1(x1) = u(x̲1, x2*, x3^0)/c12 = u(x̲1, x2*, x3*)/c123   (36)

The interaction component f1(x1) is therefore confounded with X1X2, X1X3, and X1X2X3 interactions.
The quasi-pyramid decomposition in (32) contains main effects, nonseparable two-factor interactions, and separable higher-order interactions expressed as products of main effects. Thus with the exception of the two-factor interactions, the quasi-pyramid and multilinear decompositions are the same. The alias structure for u1(x1) in (32) is given by

u1(x1) = u(x̲1, x2^0, x3^0)/c1 = u(x̲1, x2*, x3*)/c123   (37)

The main effect of X1 is confounded with the X1X2X3 interaction, but not with any of the two-factor interactions.
Finally, the semicube decomposition in (33) has nonseparable interactions like (28), except for the n-factor interaction, which is separable. The component of X1 in the three-factor interaction in (33) is given by

f1(x1) = u(x̲1, x2*, x3*)/c123   (38)

which represents a particular three-factor interaction. There is no aliasing in the semicube decomposition. Although this decomposition is the most general available, it is obviously the most complicated to assess.
Aliasing in fractional hypercube decompositions is closely linked
to aliasing in fractional replication. One can obtain the alias
structures for the decompositions in (30) - (33), for example, from a
comparison of the table of signs for calculating factorial effects in
Figure 3 and the primal fractions in Figure 6. Thus a fractional
hypercube immediately characterizes the pattern of factorial interactions found in the corresponding utility decomposition.
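The link to fractional replication can be checked mechanically: restrict the ±1 contrast column of each factorial effect to the vertices of a fraction and see which columns coincide, up to sign. The sketch below is our own construction under that reading (it is not code from the paper); it recovers the classic alias pattern of a 2^(3-1) half replicate:

```python
from itertools import product, combinations

def contrast(effect, vertex):
    """±1 entry of the contrast column for an effect (a set of
    0-based factor indices) at a 0/1 vertex."""
    sign = 1
    for k in effect:
        sign *= 1 if vertex[k] == 1 else -1
    return sign

def alias_pairs(fraction, n):
    """Pairs of effects whose contrast columns agree, up to sign,
    on the vertices of the fraction."""
    effects = [frozenset(c) for r in range(1, n + 1)
               for c in combinations(range(n), r)]
    cols = {e: tuple(contrast(e, v) for v in sorted(fraction))
            for e in effects}
    pairs = []
    for e, f in combinations(effects, 2):
        if cols[e] == cols[f] or cols[e] == tuple(-s for s in cols[f]):
            pairs.append((sorted(e), sorted(f)))
    return pairs

# Half replicate: the vertices with an odd coordinate sum.
half = [v for v in product((0, 1), repeat=3) if sum(v) % 2 == 1]
for e, f in alias_pairs(half, 3):
    print(e, "aliased with", f)   # each main effect with a 2-factor interaction
```

On this fraction each main effect is aliased with the complementary two-factor interaction, mirroring the confounding patterns read off from Figures 3 and 6 in the text.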
Other topics in utility theory
There are other topics in utility theory where interaction con-
cepts are found. Richard (1975), for example, presents a concept of
multivariate risk aversion which is based on interactions. Enge1-
brecht (1977) generalizes this work to arbitrary utility functions in

Definition 11: A decision maker is multivariate risk averse, neutral, or seeking, as

u(xi^1, xj^1, y) - u(xi^1, xj^0, y) - u(xi^0, xj^1, y) + u(xi^0, xj^0, y)   (39)

is ≤ 0, = 0, or ≥ 0, respectively, for all distinct i, j ∈ N and xk^0 ≠ xk^1 with uk(xk^0) ≤ uk(xk^1) for k = i, j.

Multivariate risk neutrality is identical to additive independence in Definition 5. One observes in (39) that Aij = u(x̲i, x̲j, y) is a two-factor interaction function which also yields information about risk effects. Fishburn (1977a,b) uses a strict version of multivariate risk aversion, called conservatism, to develop approximation results for multifactor utility functions. One might also study multivariate risk behavior with higher-order interactions.
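Definition 11 is easy to check numerically for a candidate utility function: an additive form makes the double difference in (39) vanish (risk neutrality), while adding a negative interaction term makes it negative (risk aversion). The component functions and coefficients below are our own illustration, not from the text:

```python
def second_difference(u, xi0, xi1, xj0, xj1, y):
    """The double difference in (39) for factors i and j, holding y fixed."""
    return (u(xi1, xj1, y) - u(xi1, xj0, y)
            - u(xi0, xj1, y) + u(xi0, xj0, y))

# Additive utility: multivariate risk neutral, difference is 0.
u_add = lambda x1, x2, y: 0.25 * x1 + 0.5 * x2 + 0.25 * y

# Same utility with a negative interaction term added:
# multivariate risk averse, difference is negative.
u_mult = lambda x1, x2, y: 0.25 * x1 + 0.5 * x2 + 0.25 * y - 0.5 * x1 * x2

print(second_difference(u_add, 0, 1, 0, 1, 0.5))   # 0.0
print(second_difference(u_mult, 0, 1, 0, 1, 0.5))  # -0.5
```

The sign of the double difference is exactly the sign of the interaction coefficient here, which is the sense in which (39) ties risk attitude to two-factor interactions.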
In the study of time-dependent preferences, configural inter-
actions are commonly found between adjacent periods in a time stream.
Bell (1974, 1977a), Fishburn (1965b), Meyer (1970), and others have
developed various partial decompositions using value, utility, and
conditional utility independence assumptions. Two-factor interactions
are of principal interest in these decompositions.
Kirkwood (1976) examines nonadditive utility decompositions in
which the functional form of the interdependency between factors is
presumably known up to a scalar parameter. This parametric dependence
assumption allows one to model preferences with somewhat complicated,
separable interactions. Bell (1977b) applies a linear form of para-
metric dependence, called interpolation independence, to decomposing
two-factor utility functions. This decomposition yields a main effect
and two interaction components for each of X1 and X2.
There are other sources of interdependence in multiple criteria
decision problems, such as correlations among outcomes, decisions made
in sequence, and conflicts with multiple decision makers, but we do not consider them here. We have also neglected utility functions on X ⊂ X1 × ··· × Xn, though Fishburn (1967c,d, 1971, 1976) has obtained
some interesting results.


Despite recent interest in theoretical aspects of nonadditive

utility models and the many practical applications of this research,
little attention has been given to the definition, classification, and
interpretation of interaction effects in multifactor utility theory.
This paper develops the linkage between statistical concepts of inter-
action found in factorial experiments and various parts of multifactor
utility decompositions.
Interactions are classified as either artificial, ordinal, con-
figural, or holistic, depending on their substantive complexity. This
tentative framework explains many forms of interdependencies in util-
ity theory. In particular, one can readily interpret the main effects
and interactions in the additive, multiplicative, multilinear, and
other fractional hypercube utility decompositions. The aliasing
structure of a utility model provides information on the balance
between assessment effort and model generality by characterizing the
interaction components in a decomposition. Also, transformations of
factors and responses often yield simpler utility models if the
pattern of interactions is revealed in the original problem. Thus the
explicit treatment of interaction effects in utility analysis can have
several benefits.
Additional research is obviously required to further develop and
apply these concepts in multifactor utility theory. This paper pro-
vides a detailed summary of recent work on interdependent criteria and
establishes several directions for future research.


References

Adams, E. W. & R. F. Fagot (1959). A model of riskless choice. Behavioral Science, 4, 1-10.
Anderson, N. H. (1969). Comment on 'An analysis-of-variance model for the assessment of configural cue utilization in clinical judgment.' Psychological Bulletin, 72, 63-65.
Anderson, N. H. (1970). Functional measurement and psychophysical judgment. Psychological Review, 77, 153-170.
Anderson, N. H. (1972). Looking for configurality in clinical judgment. Psychological Bulletin, 78, 93-102.
Anscombe, F. J. (1961). Examination of residuals. Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, 1-36.
Anscombe, F. J. (1973). Graphs in statistical analysis. The American Statistician, 27, 17-21.
Anscombe, F. J. & J. W. Tukey (1963). The examination and analysis of residuals. Technometrics, 5, 141-160.
Ashton, R. H. (1976). The robustness of linear models for decision-making. Omega, 4, 609-615.
Baker, N. R. & J. R. Freeland (1975). Recent advances in R&D benefit measurement and project selection methods. Management Science, 21,
Bell, D. E. (1974). Evaluating time streams of income. Omega, 2, 691-
Bell, D. E. (1977a). A utility function for time streams having interperiod dependencies. Operations Research, 25, 448-458.
Bell, D. E. (1977b). Conditional utility functions. Unpublished manu-
script, University Engineering Department and Churchill College,
Cambridge, England (to appear in Operations Research).
Blalock, H. M., Jr. (1965). Theory building and the statistical con-
cept of interaction. American Sociological Review, 30, 374-380.
Bogartz, R. S. & J. H. Wackwitz (1970). Transforming response measures
to remove interactions or other sources of variance. Psychonomic
Science, 19, 87-89.
Bogartz, R. S. & J. H. Wackwitz (1971). Polynomial response scaling
and functional measurement. Journal of Mathematical Psychology, ,§.,
Box, G. E. P. & D. R. Cox (1964). An analysis of transformations. Journal of the Royal Statistical Society, Series B, 26, 211-243.
Cochran, W. G. & G. M. Cox (1957). Experimental Designs, 2nd edition.
Wiley, New York.
Cox, D. R. (1958). Planning of Experiments. Wiley, New York.

Davies, O. L., ed. (1967). Design and Analysis of Industrial Experiments, 2nd edition. Hafner, New York.

Dawes, R. M. & B. Corrigan (1974). Linear models in decision making.

Psychological Bulletin, 81, 95-106.
Debreu, G. (1959). Cardinal utility for even-chance mixtures of pairs of sure prospects. Review of Economic Studies, 26, 174-177.
Debreu, G. (1960). Topological methods in cardinal utility theory. In K. J. Arrow, S. Karlin, and P. Suppes (eds.), Mathematical Methods in the Social Sciences, 1959. Stanford University Press, Stanford, California, 16-26.
Delbeke, L. & J. Fauville (1974). An empirical test for Fishburn's additivity axiom. Acta Psychologica, 38, 1-20.
Einhorn, H. J. (1970). The use of nonlinear, noncompensatory models in decision making. Psychological Bulletin, 73, 221-230.
Einhorn, H. J. (1971). Use of nonlinear, noncompensatory models as a function of task and amount of information. Organizational Behavior and Human Performance, 6, 1-27.
Einhorn, H. J. & R. M. Hogarth (1975). Unit weighting schemes for
decision making. Organizational Behavior and Human Performance,
13, 171-192.
Engelbrecht, R. (1977). A note on multivariate risk and separable
utility functions. Management Science, 23, 1143-1144.
Farquhar, P. H. (1974). Fractional hypercube decompositions of multi-
attribute utility functions. Technical Report 222, Department of
Operations Research, Cornell University, Ithaca, New York.
Farquhar, P. H. (1975). A fractional hypercube decomposition theorem for multiattribute utility functions. Operations Research, 23,
Farquhar, P. H. (1976). Pyramid and semicube decompositions of multiattribute utility functions. Operations Research, 24, 256-271.
Farquhar, P. H. (1977a). A survey of multiattribute utility theory and applications. In M. K. Starr and M. Zeleny (eds.), Multiple Criteria Decision Making, North-Holland/TIMS Studies in the Management Sciences, 6, 59-89.
Farquhar, P. H. (1977b). How to buy the best bird: a utility analysis
of stock selection for poultry farm managers. Presented at ORSA/
TIMS Meeting, November 1977, Atlanta, Georgia.
Farquhar, P. H. & V. R. Rao (1976). A balance model for evaluating
subsets of mu1tiattributed items. Management Science, 22, 528-539.
Fischer, G. W. (1972). Multidimensional value assessment for decision
making. Technical report 037230-2-T, Engineering Psychology Labo-
ratory, University of Michigan, Ann Arbor, Michigan.
Fischer, G. W. (1976). Multidimensional utility models for risky and
riskless choice. Organizational Behavior and Human Performance,
17, 127-146.

Fischer, G. W. (1977). Convergent validation of decomposed multiattribute utility assessment procedures for risky and riskless decisions. Organizational Behavior and Human Performance, 18, 295-315.
Fishburn, P. C. (1965a). Independence in utility theory with whole
product sets. Operations Research, 13, 28-45.
Fishburn, P. C. (1965b). Markovian dependence in utility theory with
whole product sets. Operations Research, 13, 238-257.
Fishburn, P. C. (1966). A note on recent developments in additive
utility theories for multiple-factor situations. Operations
Research, 14, 1143-1148.
Fishburn, P. C. (1967a). Interdependence and additivity in multivariate, unidimensional expected utility theory. International Economic Review, 8, 335-342.
Fishburn, P. C. (1967b). Methods for estimating additive utilities.
Management Science, 13, 435-453.
Fishburn, P. C. (1967c). Additive utilities with incomplete product
sets: applications to priorities and assignments. Operations
Research, 15, 537-542.
Fishburn, P. C. (1967d). Conjoint measurement in utility theory with incomplete product sets. Journal of Mathematical Psychology, 4,
Fishburn, P. C. (1968). Utility theory. Management Science, 14, 335-
Fishburn, P. C. (1970). Utility Theory for Decision Making. Wiley, N.Y.
Fishburn, P. C. (1971). Additive representations of real-valued functions on subsets of product sets. Journal of Mathematical Psychology, 8, 382-388.
Fishburn, P. C. (1972). Interdependent preferences on finite sets. Journal of Mathematical Psychology, 9, 225-236.
Fishburn, P. C. (1973). Bernoullian utilities for multiple-factor
situations. In J. L. Cochrane and M. Zeleny (eds.), Multiple Cri-
teria Decision Making, University of South Carolina Press, Colum-
bia, South Carolina, 47-61.
Fishburn, P. C. (1974). von Neumann-Morgenstern utility functions on
two attributes. Operations Research, 22, 35-45.
Fishburn, P. C. (1976). Utility independence on subsets of product
sets. Operations Research, 24, 245-255.
Fishburn, P. C. (1977a). Approximations of two-attribute utility functions. Mathematics of Operations Research, 2, 30-44.
Fishburn, P. C. (1977b). Approximations of multiattribute utility
functions. Unpublished manuscript, Pennsylvania State University,
University Park, Pennsylvania.
Fishburn, P. C. & R. L. Keeney (1974). Seven independence conditions
and continuous multiattribute utility functions. Journal of
Mathematical Psychology, 11, 294-327.

Fishburn, P. C. & R. L. Keeney (1975). Generalized utility independ-

ence and some implications. Operations Research, 23, 928-940.
Goldberg, L. R. (1968). Simple models or simple processes? Some research on clinical judgments. American Psychologist, 23, 483-
Goldberg, L. R. (1969). The search for configural relationships in personality assessment: the diagnosis of psychosis vs. neurosis from the MMPI. Multivariate Behavioral Research, 4, 523-536.
Goldberg, L. R. (1971). Five models of clinical judgment: an empirical comparison between linear and nonlinear representations of the human inference process. Organizational Behavior and Human Performance, 6, 458-479.
Gorman, W. M. (1968). The structure of utility functions. Review of
Economic Studies, 35, 367-390.
Green, B. F., Jr. (1968). Descriptions and explanations: a comment on papers by Hoffman and Edwards. In Benjamin Kleinmuntz (ed.), Formal Representations of Human Judgment, Wiley, New York, 91-98.
Green, P. E. (1973). On the analysis of interactions in marketing
research data. Journal of Marketing Research, 10, 410-420.
Green, P. E. & F. J. Carmone (1970). Multidimensional Scaling and Related Techniques in Marketing Analysis. Allyn & Bacon, Boston, Massachusetts.
Green, P. E. & F. J. Carmone (1974). Evaluation of multiattribute
alternatives: additive versus configural utility measurement.
Decision Sciences, 1, 164-181.
Green, P. E. & M. T. Devita (1974). A complementarity model of consumer utility for item collections. Journal of Consumer Research, 1, 56-67.
Green, P. E. & M. T. Devita (1975). An interaction model of consumer utility. Journal of Marketing Research, 12, 146-153.
Green, P. E. & Y. Wind (1973). Multiattribute Decisions in Marketing:
A Measurement Approach. Dryden Press, Hinsdale, Illinois.
Green, P. E., Y. Wind & A. K. Jain (1972). Preference measurement of item collections. Journal of Marketing Research, 9, 371-377.
Grochow, J. M. (1972). A utility theoretic approach to evaluation of
a time-sharing system. In W. Freiberger (ed.), Statistical Com-
puter Performance Evaluation, Academic Press, New York, 25-50.
Hauser, J. R. & G. L. Urban (1976). Direct assessment of consumer
utility functions: von Neumann-Morgenstern utility theory applied
to marketing. Working paper 76-17, Graduate School of Management,
Northwestern University, Evanston, Illinois.
Hoffman, P. J. (1960). The paramorphic representation of clinical
judgment. Psychological Bulletin, 57, 116-131.
Hoffman, P. J. (1968). Cue-consistency and configurality in human judg-
ment. In B. Kleinmuntz (ed.), Formal Representations of Human
Judgment, Wiley, New York, 53-90.

Hoffman, P. J., P. Slovic, & L. G. Rorer (1968). An analysis-of-

variance model for the assessment of configural cue utilization
in clinical judgment. Psychological Bulletin, .69, 338-349.
John, P. W. M. (1971). Statistical Design and Analysis of Experiments.
Macmillan, New York.
Johnson, E. M. & G. P. Huber (1977). The technology of utility assessment. IEEE Transactions on Systems, Man, and Cybernetics, SMC-7 (Special Issue on Decision Making).
Keeney, R. L. (1969). Multidimensional utility functions: theory,
assessment, and application. Technical Report 43, Operations
Research Center, Massachusetts Institute of Technology, Cambridge, Massachusetts.
Keeney, R. L. (1971). Utility independence and preferences for multi-
attributed consequences. Operations Research, 19, 875-893.
Keeney, R. L. (1972). Utility functions for multiattributed conse-
quences. Management Science, 18, 276-287.
Keeney, R. L. (1974). Multiplicative utility functions. Operations
Research, 22, 22-34.
Keeney, R. L. & H. Raiffa (1976). Decisions with Multiple Objectives:
Preferences and Value Tradeoffs. Wiley, New York.
Keppel, G. (1973). Design and Analysis: A Researcher's Handbook. Prentice-Hall, Englewood Cliffs, New Jersey.
Kirkwood, C. W. (1976). Parametrically dependent preferences for multi-
attributed consequences. Operations Research, 24, 92-103.
Krantz, D. H., R. D. Luce, P. Suppes, & A. Tversky (1971). Foundations
of Measurement, volume 1. Academic Press, New York.
Lee, M. C. (1961). Interactions, configurations, and nonadditive models. Educational and Psychological Measurement, 21, 797-805.
Lindquist, E. F. (1953). Design and Analysis of Experiments in Psy-
chology and Education. Houghton-Mifflin, Boston, Massachusetts.
Lisnyk, J. A. (1977). Procedures for utility determination in the case
of interactive attributes. Office of Maritime Technology, Maritime
Administration, U. S. Department of Commerce, Washington, D.C.
Luce, R. D. & J. W. Tukey (1964). Simultaneous conjoint measurement:
a new type of fundamental measurement. Journal of Mathematical
Psychology, 1, 1-27.
MacCrimmon, K. R. (1973). An overview of multiple objective decision
making. In J. L. Cochrane and M. Zeleny (eds.), Multiple Criteria
Decision Making, University of South Carolina Press, Columbia,
South Carolina, 18-44.
Marcus-Roberts, H. (1976). On simplifying assumptions in energy models.
In F. S. Roberts (ed.), Energy: Mathematics and Models, Society
for Industrial and Applied Mathematics, Philadelphia, Penn-
sylvania, 268-272.


Moskowitz, H. (1974). Regression models of behavior for managerial decision making. Omega, 2, 677-690.
Moskowitz, H. (1976). Robustness of linear models for decision making: some comments. Omega, 4, 743-746.
Myers, R. H. (1971). Response Surface Methodology. Allyn & Bacon,
Boston, Massachusetts.
Nahas, K. H. (1977). Partial decompositions of utility surfaces. Un-
published manuscript, Stanford University, Stanford, California.
Pollak, R. A. (1967). Additive von Neumann-Morgenstern utility func-
tions. Econometrica, 35, 485-494.
Raghavarao, D. (1971). Constructions and Combinatorial Problems in
Design of Experiments. Wiley, New York.
Ramsay, J. O. (1977). Monotonic weighted power transformations to
additivity. Psychometrika, 42, 83-109.
Richard, S. F. (1975). Multivariate risk aversion, utility independ-
ence, and separable utility functions. Management Science, 22,
Scheffe, Henry (1959). The Analysis of Variance. Wiley, New York.
Sidowski, J. B. & N. H. Anderson (1967). Judgments of city-occupation
combinations. Psychonomic Science, 2, 279-280.
Slovic, P., D. Fleissner & W. S. Bauman (1972). Analyzing the use of
information in investment decision making: a methodological pro-
posal. Journal of Business, 45, 283-301.
Slovic, P. & S. Lichtenstein (1971). Comparison of Bayesian and regression approaches to the study of information processing in judgment. Organizational Behavior and Human Performance, 6, 649-744.
Strauch, R. E. (1974). A critical assessment of quantitative method-
ology as a policy analysis tool. P-5282, The Rand Corporation,
Santa Monica, California.
Tukey, J. W. (1949). A degree of freedom for nonadditivity. Biometrics,
1, 232-242.
von Neumann, J. & O. Morgenstern (1947). Theory of Games and Economic
Behavior, 2nd edition. Wiley, New York.
von Winterfeldt, D. (1975). An overview, integration, and evaluation
of utility theory for decision analysis. Report 75-9, Social
Science Research Institute, University of Southern California,
Los Angeles, California.

von Winterfeldt, D. & G. W. Fischer (1975). Multi-attribute utility theory: models and assessment procedures. In D. Wendt & C. Vlek (eds.), Utility, Probability and Human Decision Making. Reidel, Dordrecht, Holland, 47-85.
Wainer, H. (1976). Estimating coefficients in linear models: it don't make no nevermind. Psychological Bulletin, 83, 213-217.
Wallace, W. P. & B. J. Underwood (1964). Implicit responses and the
role of intralist similarity in verbal learning by normal and
retarded subjects. Journal of Educational Psychology, 22, 362-
Wiggins, N. & P. J. Hoffman (1968). Three models of clinical judgment. Journal of Abnormal Psychology, 73, 70-77.
Wiggins, N. & J. S. Wiggins (1969). A typological analysis of male preferences for female body types. Multivariate Behavioral Research, 4, 89-102.
Winer, B. J. (1971). Statistical Principles in Experimental Design,
2nd edition. McGraw-Hill, New York.
Yntema, D. B. & W. S. Torgerson (1961). Man-computer cooperation in decisions requiring common sense. IRE Transactions on Human Factors in Electronics, HFE-2, 20-26. Reprinted in W. Edwards and A. Tversky (eds.), Decision Making: Selected Readings. Penguin Books, Baltimore, Maryland (1967), 300-314.

Peter C. Fishburn

College of Business Administration, The Pennsylvania State University

The past decade has seen a dramatic increase in research on all
main areas of multiple criteria decision making, including

formal models of multicriterion choice,

multicriterion evaluation theories, and
multicriterion assessment methodologies.

Formal models of multicriterion choice include algorithms, procedures

and selection paradigms that are designed to choose good or best deci-
sion alternatives from feasible sets. Multicriterion evaluation
theories focus on assumptions about values or preferences and on struc-
tured representations of values or preferences that follow from the
assumptions. Multicriterion assessment methodologies deal with the
elicitation, estimation and scaling of individuals' preferences,
utilities, subjective probabilities, and so forth in multiattribute/
multicriterion situations.
The purpose of this paper will be to review the area of multi-
attribute/multicriterion evaluation theories. To provide a perspective
of how this fits in with the other main areas, consider the interactive
approach to multicriterion optimization described by Geoffrion et al.
(1972). Suppose the feasible set X is a compact, convex subset of a
finite-dimensional Euclidean space and we wish to maximize a concave, increasing real valued utility function u defined on {(f1(x), f2(x), ..., fn(x)): x ∈ X}, where each fi is a concave real valued criterion function. It is assumed that X and the fi are known explicitly but u
has not been assessed. The interactive approach treats this as a
standard nonlinear programming problem with one notable exception. At
each iteration the decision maker provides information about his
preferences in the neighborhood of the current feasible solution which,
when translated into an approximation of the gradient of u at that point,
guides the selection of a new feasible solution with a higher u value.
The choice model in this case is a utility maximizing nonlinear program-
ming procedure with the noted exceptional feature. The evaluation

theory behind the model consists of the several assumptions made about
u and the criterion functions, which themselves are marginal utility
functions, plus the presumption that more utility is better than less.
Assessment methodology enters through the specific procedures used at
each iteration to estimate local tradeoffs between criteria. Related
procedures may have been used prior to this step to assess the several
criterion functions.
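One iteration of such an interactive scheme can be sketched as follows. The criterion functions, the box-shaped feasible set, and the stand-in decision maker who reports fixed tradeoff weights are all our own illustration under the assumptions just stated; they are not from Geoffrion et al. (1972):

```python
import numpy as np

# Stand-in concave criterion functions on the box X = [0, 4]^2:
# f1(x) = -(x1 - 2)^2 and f2(x) = -(x2 - 1)^2.
def grad_f(x):
    """Gradients of the two criterion functions at x (one per row)."""
    return np.array([[-2.0 * (x[0] - 2.0), 0.0],
                     [0.0, -2.0 * (x[1] - 1.0)]])

def dm_weights(x):
    """Stand-in for the decision maker: locally reported tradeoff
    weights approximating the gradient of u over criterion values."""
    return np.array([0.7, 0.3])

lo, hi = np.zeros(2), np.full(2, 4.0)
x = np.zeros(2)
for k in range(200):
    g = dm_weights(x) @ grad_f(x)        # approximate gradient of u(f(x))
    vertex = np.where(g > 0, hi, lo)     # maximize the linearization over the box
    x += (2.0 / (k + 2)) * (vertex - x)  # step part way toward the maximizer
print(np.round(x, 1))                    # approaches the best point (2, 1)
```

Each pass mirrors the description in the text: elicit local preference information, turn it into a gradient approximation, and use a standard nonlinear programming step (here a Frank-Wolfe-style step with diminishing step sizes) to pick the next feasible solution.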
The general formulation used for the survey is presented in the
next section. This is followed by a discussion of classificatory
attributes of the theories. Later sections are organized around a
three-valued attribute whose values correspond to the familiar
categories of decision under certainty (section 4), decision under risk
(section 5), and decision under uncertainty (section 6). Throughout
these sections I shall refer to various choice models when it seems
helpful to do so. However, to emphasize the distinction between
evaluation theories and assessment methodologies, the latter will
receive almost no mention in the main body of the survey. This
deliberate omission is partly rectified in the final section which
presents a brief review of assessment literature.
Several important topics are not covered in the paper, including
aspects of social evaluation and choice [Arrow (1963), Arrow and
Scitovsky (1969), Sen (1970), Fishburn (1973a)] and applications of
multicriterion decision theories [Johnsen (1968), Zeleny (1976a),
Keeney and Raiffa (1976)]. Additional discussion of issues in multiobjective problem formulation is provided by Chipman (1966), Johnsen
(1968), Wilkie and Pessemier (1973), Plott et al. (1975) and Keeney and
Raiffa (1976).

Many writers, including MacCrimmon (1973), differentiate among
attributes, criteria, objectives and goals. Although I shall not adhere
to precise distinctions among these terms, it is useful to note some
differences in their usage.
Attributes are often thought of as differentiating aspects,
properties or characteristics of alternatives or consequences. Criteria
generally denote evaluative measures, dimensions or scales against
which alternatives may be gauged in a value or worth sense. Objectives
are sometimes viewed in the same way, but may also denote specific
desired levels of attainment (to climb Mt. Everest) or vague ideals (to
live the good life). Goals usually indicate either of the latter
notions. Although some writers make a careful distinction between

goals (e.g. potentially attainable levels) and objectives (e.g. un-

attainable ideals), their common usages are more or less interchange-
able. The goals in goal programming (Charnes and Cooper, 1961, 1975,
1977; Charnes et al., 1975; Lee, 1972; Kornbluth, 1973) are specific
levels of criteria variables or functions.
Attribute Mappings
Throughout our discussion we shall let X denote a set of decision
alternatives or potential consequences of decisions. In the multi-
attribute context we suppose that there are n ≥ 2 attributes that can
be used to differentiate among the objects in X. We shall assume for
the moment that n is finite and number the attributes from 1 to n.
For each i from 1 to n there is a set Xi whose elements are potential specific "values" or "levels" of attribute i. Elements in Xi might be numbers, vectors of numbers, qualitative descriptors of various kinds, and so forth. For each i there is an attribute mapping fi: X → Xi that assigns to each object in X a specific level of the ith attribute. These mappings should be understood as descriptive or identification functions that may or may not have direct evaluative content. The fi functions map each x ∈ X into an n-tuple (f1(x), f2(x), ..., fn(x)) which describes x in terms of its "values" on the n attributes.
To illustrate the point that attribute mappings are not necessarily
criterion functions suppose that X is a set of simple probability
measures on the real line. If fi(x) is the ith central moment of x,
then the fi are well defined attribute mappings. If, as in the mean-
variance approach in portfolio theory (Markowitz, 1952, 1959; Tobin,
1965; Sharpe, 1964; Lintner, 1965), it is assumed that preference
increases in mean and decreases in variance, then the first two fi can
be viewed as criterion functions.
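The moment example is easy to make concrete. In the sketch below, simple probability measures are finite lists of (value, probability) pairs, the attribute mappings fi are central moments, and only the first two are given criterion (mean-variance) status; the helper names are ours:

```python
def central_moment(measure, i):
    """ith central moment of a simple probability measure given as a
    list of (value, probability) pairs (i = 1 returns the mean)."""
    mean = sum(p * v for v, p in measure)
    if i == 1:
        return mean
    return sum(p * (v - mean) ** i for v, p in measure)

# Two simple measures (gambles) with equal means but different spread:
x = [(0.0, 0.5), (10.0, 0.5)]
y = [(4.0, 0.5), (6.0, 0.5)]

# Attribute mappings f1..f4: descriptive, not yet evaluative.
fx = [central_moment(x, i) for i in (1, 2, 3, 4)]
fy = [central_moment(y, i) for i in (1, 2, 3, 4)]

# Mean-variance criterion: preference increases in f1 and decreases
# in f2, so y dominates x here (same mean, smaller variance).
assert fx[0] == fy[0] and fy[1] < fx[1]
print(fx[:2], fy[:2])  # [5.0, 25.0] [5.0, 1.0]
```

The third and fourth moments remain pure attribute mappings in this setup: they describe the measures without entering the evaluative rule.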
Another example arises in traditional consumption theory
(Houthakker, 1961) where X is a set of commodity bundles--vectors of
quantities of goods and services in a finite-dimensional Euclidean
space. Here X is already in a multiattribute form. If it is assumed
that utility increases in each dimension then the xi components have
obvious direct evaluative content. Lancaster (1966, 1971, 1975) argues
that it is more appropriate to first map each x E X into a vector of
characteristics of consumption activity (f (x), ... ,f (x» and then to
1 n
talk about an individual's utility function on the characteristic
When each x is mapped into an n-tuple (f1 (x), ... ,fn(x» in
X x ... xX , it is common to identify x with this n-tuple or to replace x
1 n

by the surrogate of its attribute values. In many cases fi(x) is abbreviated as xi and we speak about elements in X as n-tuples (x1, ..., xn) in the product set X1 × ··· × Xn, or write X ⊆ X1 × ··· × Xn. In most actual situations X is a proper subset of the product set: elements in X1 × ··· × Xn that are not in X represent combinations of attribute values that are unrealizable or infeasible. Nevertheless, many of the axiomatic preference theories for multiattribute situations assume that X = X1 × ··· × Xn, or at least that an individual can make meaningful comparisons between all pairs of n-tuples in the product set.
There is of course a value assumption embedded in the multiattribute mapping x → (f1(x), ..., fn(x)), or x → (x1, ..., xn) for short, and the subsequent practice of working with preferences or utilities on X1 × ··· × Xn. This assumption says that two elements in X that map into the same n-tuple have equal values, or are indifferent. Since aspects of the holistic nature of x can be lost when it is decomposed into attributes, this assumption should not be taken lightly.
Criterion Functions
As suggested previously, a criterion function usually indicates a
real valued function on X that directly reflects the worth or value of
the elements in X according to some criterion or objective. These
functions are also referred to as objective functions, goal functions,
scoring functions, ranking functions and utility functions, and they
often represent subjective values on a more or less arbitrary scale.
However, values of criterion functions may have objective content such
as net profits, test scores, times until completion, payback periods,
expected values and market shares.
In a situation with m criteria (j = 1,...,m) and corresponding
criterion functions gj: X → Re, each x in X is mapped into an m-tuple
(g1(x),...,gm(x)) of criterion values, scores or utilities. It is
often assumed that preference monotonically increases in each gj.
Some developments based on criterion functions do not explicitly
assume a multiattribute structure for X. A good example of this is the
outranking relations choice methods described by Roy (1971, 1974) and
Bernard and Besson (1971) where the alternatives in X are mapped into
score vectors (g1(x),...,gm(x)) which are then compared in various ways
to develop outranking or dominance relations. The outranking relations,
which need not be transitive, are then used to identify "good" subsets
of alternatives.
On the other hand, many multicriterion choice models assume that X
has a multiattribute structure. This leads to a composite multi-
attribute-multicriterion mapping x → (g1(f1(x),...,fn(x)),...,
gm(f1(x),...,fn(x))). Frequently X is taken to be a subset of a finite-
dimensional Euclidean space with (f1(x),...,fn(x)) replaced by x itself.
This is done, for example, in goal programming (see earlier references),
in various approaches to interactive programming (Saska, 1968;
Benayoun and Tergny, 1969; Benayoun et al., 1971; Geoffrion, 1970;
Geoffrion et al., 1972; Boyd, 1970; Dyer, 1972, 1973; Zionts and
Wallenius, 1976), in vector maximization's search for undominated
alternatives (DaCunha and Polak, 1967; Geoffrion, 1968; Philip, 1972;
Benson and Morin, 1977), and in multiobjective linear programming
(Zeleny, 1974; Yu and Zeleny, 1975, 1976), domination structure
analysis (Yu, 1974; Bergstresser et al., 1976) and Zeleny's (1976b)
parametric goal programming approach. Additional discussions of several
of these topics are provided by Roy (1971) and Hirsch (1976).

This section identifies attributes that differentiate multi-
attribute/multicriterion evaluation theories and indicates the classi-
fication of theories that will be followed in ensuing sections.
A Basic Trichotomy
The main attribute that I shall use to classify evaluation
theories is the extent to which risk or uncertainty explicitly enters
the theory. This will be treated as a three-valued attribute whose
values are similar to the categories of decision making under certainty,
under risk, and under uncertainty (Luce and Raiffa, 1957).
The first of the three main categories includes evaluation
theories that do not explicitly use probabilities or uncertain events
in the evaluations. This category includes a large number of theories
of preference and utility (Luce and Suppes, 1965; Fishburn, 1970a;
Chipman et al., 1971). An excellent example in economic theory is
provided by Debreu (1959a).
The second main category encompasses theories that explicitly
involve probability or risk. This includes a number of multiattribute
theories (Fishburn, 1970a, 1977a; Farquhar, 1976b; Keeney and Raiffa,
1976) that are based on the von Neumann-Morgenstern (1947) expected
utility theory. It also includes a variety of other models for
comparing gambles or risky alternatives (Rapoport and Wallsten, 1972;
Payne, 1973; Slovic et al., 1977; Libby and Fishburn, 1977; Fishburn
and Vickson, 1977).
The third category involves evaluation theories that explicitly
consider uncertain events or states of the world. The best known
theory for this case is probably the Ramsey-Savage personalistic

expected utility theory (Ramsey, 1931; Savage, 1954). Although this is

not always thought of as a multiattribute theory, it can certainly be
viewed as such with the event set and consequence set comprising the
two principal attributes.
Other Classificatory Attributes
We now consider briefly seven other aspects that can be used to
differentiate among multicriterion evaluation theories.
a. The number and nature of the attribute and/or criterion
functions. Some theories are designed primarily for specific types of
attribute/criterion structures, such as when each Xi = {0,1} or each Xi
is a finite qualitative set or each Xi is a continuum of real numbers.
Most of the theories discussed later apply to finite numbers of
attributes/criteria, but infinite sets of attributes are also used.
An example of the latter arises in the denumerable-period time pref-
erence theory of Koopmans (1960, 1972b) and others (Koopmans et al.,
1964; Diamond, 1965; Burness, 1973, 1976).
b. The structure of the feasible set of alternatives or conse-
quences. This aspect differentiates among structures of feasible sets
of alternatives or consequences. In addition, it makes a distinction
between axiomatic theories that assume that X or {(g1(x),...,gm(x)):
x ∈ X} is a Cartesian product set and those that assume only that X is
some subset of a product set.
c. The basis of evaluation. This refers to the nature of the
value construct(s) on which the theory is based. For example, many
axiomatic theories are based on a holistic binary preference relation
on the set of objects being evaluated. Other theories use a quaternary
preference-intensity comparison relation or employ a family of pref-
erence relations for different criteria. Still other theories are
based on choices, including revealed preference theory (Samuelson, 1938,
1948; Houthakker, 1950, 1961; Richter, 1966; Chipman et al., 1971;
Shafer, 1975) and "stochastic" preference/utility theory (Quandt, 1956;
Luce, 1958, 1959; Luce and Suppes, 1965; Chipman, 1960a; Marschak,
1960; Marley, 1968; Tversky, 1972a, 1972b; Fishburn, 1973b). Most of
the theories discussed in later sections are either based directly on
binary comparisons or can be interpreted in this manner.
d. Ordering assumptions. When binary relations are involved in
the evaluative theory, this aspect distinguishes among these relations
according to properties such as transitivity, asymmetry, reflexivity
and completeness. Two commonly used assumptions for an asymmetric
preference relation > ("is preferred to") are transitivity (x > y and
y > z ⇒ x > z) and negative transitivity (x > z ⇒ either x > y or
y > z). A relation that is asymmetric and transitive will be called a
strict partial order, and one that is asymmetric and negatively
transitive (and hence transitive also) will be called a strict weak
order. When an indifference relation ~ is defined from > by x ~ y if
and only if neither x > y nor y > x, it is an equivalence relation
(reflexive, symmetric, transitive) provided that > is a strict weak
order; in this case the preference-or-indifference relation ≿ (x ≿ y
⇔ x > y or x ~ y) is a weak order (reflexive, complete, transitive).

Some writers, including Aumann (1962, 1964a, 1964b), Kannai (1963),

and Roy (1973) and Hirsch (1976), do not assume that the preference-or-
indifference relation is complete and therefore add an incomparability
relation to the preference and indifference relations.
e. Independence assumptions. Notions of independence among
attributes or criteria in an evaluative sense are very common in multi-
criterion theories. For example, the assumption that global or
holistic preference increases with an increase in any criterion value
is an independence assumption. In expected utility theories the basic
independence axioms refer to evaluative independence between the risk
(probability) attribute and the consequences attribute.
f. Degree of compensatoriness. In the Euclidean space context,
the attributes or criteria are compensatory if local changes that
preserve indifference can be made around any point in the space. Non-
compensatory preferences obtain when compensating tradeoffs among
attributes or criteria are not possible. Various intermediate cases
arise between the fully compensatory and noncompensatory extremes.
This aspect of evaluation theories is often associated with the presence
or absence of continuity or Archimedean axioms.
g. Extent to which the decision maker's subjective judgments are
involved in the evaluation. This attribute is concerned with the
extent to which different decision makers in the same type of situation
using the same evaluative model may have different evaluative
realizations (Libby and Fishburn, 1977). For example, if X is a set of
probability distributions on the real line and if the evaluative model
is a mean-variance dominance model then the resultant dominance
relation will be independent of the decision maker. This is true also
for the vector dominance relation in multicriterion cases if each
criterion function is ordinally equivalent across decision makers.
Goal programming may require more information of the individual in the
form of goals or acceptable levels on each attribute or criterion

along with relative judgments of the seriousness of deviations from the

goals. Most compensatory preference models presume that different
decision makers will have different tradeoff structures.
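The decision-maker-independent character of the mean-variance dominance model can be sketched as follows. This is a minimal illustration; encoding each alternative as a (mean, variance) pair is an assumption for the example.

```python
# Mean-variance dominance: a relation that comes out the same for every
# decision maker. Alternatives are encoded here as (mean, variance) pairs.
def mv_dominates(x, y):
    no_worse = x[0] >= y[0] and x[1] <= y[1]       # mean no lower, variance no higher
    strictly_better = x[0] > y[0] or x[1] < y[1]   # and strictly better on one count
    return no_worse and strictly_better
```

No subjective input enters the function, which is exactly why the resulting dominance relation does not vary with the decision maker.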
The importance of this last aspect for differentiating among
choice models and their corresponding evaluation models cannot be over-
emphasized. For example, a desire to develop choice models that do not
actively involve the decision maker in the evaluative phase, or that
require minimal inputs from him, has motivated many of these models.
The lack of more active involvement of the decision maker is often
defended by arguments that revolve around his inaccessibility or
unidentifiability, his unwillingness or inability to reveal his pref-
erences, and his lack of clarity about his own preferences and the
subsequent problems this implies for assessment procedures.


This section reviews multiattribute/multicriterion evaluation
theories that do not explicitly use probabilities or uncertain events.
We shall assume here that the set of objects to be evaluated is a subset
X of a product set X1 × X2 × ... × Xn. The evaluation might concern
either a global preference/utility function or structure on X with each
Xi an attribute or the range of a criterion function, or it might refer
to one of the criterion functions gj defined on X. I shall let >
denote some form of strict preference relation or "better than"
relation on X, which may or may not be transitive. The only basic
condition imposed on > is asymmetry: x > y and y > x cannot both hold
for any x,y E X.
Much of our discussion will center around independence assumptions
for > on X. We shall say that Xi is independent of the other attrib-
utes/criteria if and only if (a1,...,xi,...,an) > (a1,...,yi,...,an) ⇒
(b1,...,xi,...,bn) > (b1,...,yi,...,bn) for all cases in which the four
n-tuples are in X. Given that Xi is independent, we can unambiguously
define an asymmetric relation >i on Xi from > on X by

xi >i yi iff (a1,...,xi,...,an) > (a1,...,yi,...,an)

for some (a1,...,xi,...,an), (a1,...,yi,...,an) ∈ X.

Note here that if X is so sparse in X1 × ... × Xn that there are never two
n-tuples in X that have the same values of Xi for all but one i and
have different values of Xi for the other i, then all Xi are trivially
independent with >i empty for each i. It is partly for this reason
that many evaluative theories assume that X is either equal to or is a
"large" subset of X1 × ... × Xn.
More generally, we shall say that a subset {Xi: i ∈ I}, or I for
short, is independent of its complement Iᶜ = {1,...,n}\I, if and only
if (xi for i ∈ I, ai for i ∈ Iᶜ) > (yi for i ∈ I, ai for i ∈ Iᶜ) ⇒
(xi for i ∈ I, bi for i ∈ Iᶜ) > (yi for i ∈ I, bi for i ∈ Iᶜ) for all
cases in which the four n-tuples are in X. When I is independent of
Iᶜ, a relation >I on the product of the Xi for i ∈ I can be un-
ambiguously defined in the obvious way from > on X.
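For finite sets, the independence definition above can be checked mechanically. The sketch below is illustrative; it encodes preference as a Boolean function on pairs of n-tuples.

```python
# Mechanical check of the independence definition on a finite feasible set.
# pref(x, y) -> True iff x > y; X is a list of feasible n-tuples.
from itertools import product

def is_independent(i, X, pref):
    n = len(X[0])
    for x, y in product(X, repeat=2):
        # keep only pairs that differ in coordinate i alone
        if x[i] == y[i] or any(x[k] != y[k] for k in range(n) if k != i):
            continue
        for a, b in product(X, repeat=2):
            # same xi, yi values embedded in some other common context
            if a[i] != x[i] or b[i] != y[i]:
                continue
            if any(a[k] != b[k] for k in range(n) if k != i):
                continue
            if pref(x, y) != pref(a, b):
                return False  # preference depends on the context
    return True

# Demo on X = {0,1} x {0,1}: an additive preference passes, a preference
# whose X1 comparison reverses with the level of X2 does not.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]

def additive(x, y):
    return sum(x) > sum(y)

def reversing(x, y):
    if x[1] == 0 and y[1] == 0:
        return x[0] > y[0]
    if x[1] == 1 and y[1] == 1:
        return x[0] < y[0]
    return False
```

The nested loop mirrors the quantification in the definition: every pair of contexts a, b sharing the same xi, yi values must agree on the direction of preference.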
In the rest of this section I shall first say a few words about
interdependent preferences and then look at various independent cases.
The section concludes with some remarks about preference intensity comparisons.
The General Interdependent Case
Apart from general discussions about various types of preference
orders and utility functions for a binary relation> on X (Luce and
Suppes, 1965; Fishburn, 1970a, 1973c; Krantz et al., 1971), relatively
little specific theory has been developed for interdependent pref-
erences/utilities on product sets. I shall note four developments in
this area, all of which assume that > is a strict weak order on X and
that there exists a real valued utility function u on X such that,
for all x,y E X, x > y iff u(x) > u(y).
Debreu (1960) has suggested for the consumption theory context
that independence for some subsets of goods may fail when other subsets
of goods (e.g. clothing goods, foods, etc.) are independent of their
complements. The latter subset independence may then lead to an
additive utility representation over these subsets. In a related
article Gorman (1968) discusses implications of independence for
families of subsets of goods or attributes.
Roskies (1965) and Krantz et al. (1971) present axioms for > on
X1 × ... × Xn which imply that u can be written in a multiplicative form
as u(x1,...,xn) = u1(x1)u2(x2)...un(xn). If every ui has constant
sign (positive everywhere or negative everywhere) then this repre-
sentation is ordinally equivalent to the independent additive
representation. However, if some u i does not have constant sign then
independence does not hold. The simplest independence-type assumption
that must hold for the multiplicative case is a sign dependence axiom
which says that, for each nonempty proper subset I of {l, ... ,n}, any
two nonempty conditional preference orders over the product of the Xi
for i E I with the values of the other Xi held fixed must be equal or
else be the duals (reverses) of one another.

Fishburn (1972) defines the degree of interdependence of > on X

as the highest order of preference interaction among the attributes
that must be used in writing u in an ordinally equivalent additive
form. For example, if n = 3 and u can be written as u12(x1,x2) +
u13(x1,x3) + u23(x2,x3), then the degree of interdependence is no
greater than 2. Degrees of interdependence that exceed 1 are not
necessarily incompatible with the independence of each Xi from the
other attributes.
Finally, consider the case in which X is a compact and convex
subset of a finite-dimensional Euclidean space with n ~ 2 and u is
continuous (Debreu, 1964; Fishburn, 1970a) with a unique maximum at an
ideal point (Coombs, 1964; Davis et al., 1970, 1972; Srinivasan and
Shocker, 1973a) x* E X. Suppose further that u decreases along every
ray away from x*. If u decreases in a fully symmetric fashion away
from x*, as when the isoutility or indifference contours are circles or
spheres with centers at x*, then each Xi is independent of the other
attributes. However, if a nonsymmetric distance function is used to
scale utility then independence will generally not hold.
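The contrast just described can be sketched numerically. Assumptions made for the illustration: the ideal point sits at the origin, symmetric utility is negative Euclidean distance, and a quadratic form with a cross term stands in for a nonsymmetric distance.

```python
import math

# Ideal-point sketch: x* at the origin (an illustrative choice).
IDEAL = (0.0, 0.0)

def u_circle(x):  # circular contours: coordinatewise independence holds
    return -math.dist(x, IDEAL)

def u_quad(x):  # quadratic distance with a cross term: independence fails
    d1, d2 = x[0] - IDEAL[0], x[1] - IDEAL[1]
    return -(d1 * d1 + d2 * d2 + d1 * d2)

# Under u_quad, whether x1 = 1 or x1 = -1 is better depends on the common
# value of x2, so X1 is not independent of X2.
```

With u_quad, (-1, 1) beats (1, 1) while (1, -1) beats (-1, -1): the ranking of X1 levels reverses with the fixed X2 value, which is exactly the failure of independence mentioned in the text.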
Independent Compensatory Transitive Preferences
The traditional theory under this heading assumes that > is a
strict weak order on X = X1 × ... × Xn, that each Xi is independent of the
others, and that compensatory trade-offs exist between attributes.
When X is infinite, it assumes also an order-denseness or continuity
assumption (Debreu, 1954; Fishburn, 1970a; Krantz et al., 1971) so
that there exists a real valued u on X that preserves the order of >.
It then follows that each >i on Xi is a strict weak order with an
order-preserving utility function ui on Xi, and that u can be written
as u(x) = v(u1(x1),...,un(xn)) with v increasing in each ui.
Simple examples show that the neat situation just described can
become very confused when either X is a proper subset of X1 × ... × Xn or
when > is not a strict weak order. However, little research has been
done in this area apart from specific cases noted below.
Despite the fact that there seems to be no widely accepted
rigorous definition of compensatoriness in the independence context, as
a minimum we might say that attributes i and j are compensatory if and
only if there are xi >i yi, x'i >i y'i, xj >j yj, x'j >j y'j and a,b in
the product of the other n-2 attributes such that

(xi,yj,a) ≿ (yi,xj,a) and (y'i,x'j,b) ≿ (x'i,y'j,b),

perhaps with one or both ≿ being >. In well behaved Euclidean space
situations this implies that there are connected indifference curves
or regions in the Xi × Xj subspace with fixed values of the other
variables. Although all the models discussed in the next several pages
usually have at least the minimal sense of compensatoriness noted
above, some of them will also be seen to exhibit noncompensatory features.
The most familiar independent evaluative model is probably the
additive utility model that has ui: Xi → Re for i = 1,...,n with

x > y iff u1(x1) + ... + un(xn) > u1(y1) + ... + un(yn), for all x,y ∈ X.

Acyclicity holds when there are no preference cycles such as
x1 > x2 > ... > xN > x1. Although noncompensatory lexicographic preferences
on finite sets can be represented by additive models (Fishburn,
1970a, p. 49), these models are usually discussed in compensatory
situations. For example, if X = X1 × ... × Xn and ui(Xi) is a nondegenerate
interval of real numbers for each i in the weak order
context, then the additive model must be compensatory.
Debreu (1960) provided the first general axiomatization of additive
utilities. He assumed that X = X1 × ... × Xn, each Xi is a connected and
separable topological space, > is a continuous strict weak order on X,
and every I is independent of its complement. When n ≥ 3 and every
attribute is essential (no >i is empty), Debreu's axioms imply the
additive model. When n = 2, additivity requires a stronger independence
assumption to the effect that (x1,a2) ≿ (y1,b2) and (y1,c2) ≿
(z1,a2) imply (x1,c2) ≿ (z1,b2). Debreu's approach is also discussed
by Gorman (1968), Koopmans (1972a), and Fishburn (1970a) and a
generalization of his method has been applied to ordinal preferences
over uncertain lifetimes by Fishburn (1978). Algebraically-oriented
alternatives to Debreu's topological additive utility theory have
been developed by Luce and Tukey (1964), Luce (1966), Krantz (1964)
and Krantz et al. (1971).
Axiomatizations for additive utilities when X is a finite set with
> a strict weak order, strict partial order, or acyclic, can be found
in Tversky (1964), Scott (1964), Adams (1965), Fishburn (1970a, 1970b)
and Krantz et al. (1971). Several other cases will be mentioned in the
next subsection. The finite-X case requires higher-order independence
axioms that generalize the basic assumption in a manner like the n = 2
axiom in the preceding paragraph. The theories mentioned in that
paragraph imply that the ui are unique up to similar positive affine
transformations aui + bi for i = 1,...,n and a > 0; the uniqueness
properties for finite X are generally weaker than this.
Other contributions to the basic additive model are made by
Jaffray (1974), Narens (1974) and Narens and Luce (1976). Sayeki
(1972) discusses the weighted form Σwiui(xi) in which the weights but
not the u i functions change under revisions of the decision maker's
goal orientation. He includes an axiom that allows wi to change sign
under different goal orientations. This is related to sign dependence
mentioned earlier for the multiplicative form. Additional discussion
of Sayeki's model is in Sayeki and Vesper (1973).
I shall note two cases of the additive model when the Xi are
similar. The first occurs when all Xi are essentially identical
except for the index and has

u(x1,...,xn) = w1 p(x1) + ... + wn p(xn),

so that ui(xi) = wi p(xi) for each i. This form arises naturally in the
time-period context with i denoting different periods. The equal-
weights case (no time preference) arises from the additive model when
(x1,...,xn) is indifferent to (xσ(1),...,xσ(n)) for any permutation σ
on {1,...,n} (Debreu, 1959b; Fishburn, 1970a). Another specialization
is axiomatized by Koopmans (1960) for the denumerable-period setting.
This is the constant discount factor model u(x1,x2,...) = Σ α^(i-1) p(xi)
with 0 < α < 1. Other cases based on preference intensity comparisons
will be mentioned later.
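The constant-discount-factor form can be sketched for a finite horizon. The per-period value function p and the discount factor below are illustrative assumptions, not part of Koopmans' axiomatization.

```python
# Sketch of the constant-discount-factor model over a finite horizon:
# u(x1, x2, ...) = sum over periods of a**(i-1) * p(xi), with 0 < a < 1.
def discounted_utility(xs, p, a):
    return sum(a ** i * p(x) for i, x in enumerate(xs))

# Setting a = 1 recovers the equal-weights (no time preference) case, under
# which any permutation of the periods yields the same utility.
```

The geometric weights encode impatience: the same per-period value counts for less the later it arrives.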
The other special form is the weighted linear model

u(x) = w1 x1 + ... + wn xn,

which assumes Xi ⊆ Re for all i. A specific example is the linear
criterion function model u(g1(x),...,gm(x)) = Σ wj gj(x). With integer
programming in mind, Aumann (1964b) presents axioms in which X is the
set of integer lattice points in the nonnegative orthant of Re^n. He
assumes that ≿ is reflexive and transitive (a preorder or quasi-order)
and defines x > y iff x ≿ y and not (y ≿ x), and x ~ y iff x ≿ y and
y ≿ x. His representation has Σwixi > Σwiyi when x > y, and Σwixi =
Σwiyi when x ~ y. Aumann's key independence axiom is the two-part
linear independence condition

x > y ⇒ x + z > y + z, and x ~ y ⇒ x + z ~ y + z.


The second part of this condition implies the weighted linear model in
the context of Debreu's (1960) additive utility theory when each Xi is
a real interval with the relative usual topology.
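The weighted linear model trivially satisfies Aumann's translation condition, which a small sketch makes concrete. The weights and lattice points below are arbitrary illustrations.

```python
# Sketch: weighted linear utility and Aumann's first linear independence
# condition, x > y implies x + z > y + z. Weights W are illustrative.
W = (3.0, 1.0, 2.0)

def value(x):
    return sum(w * xi for w, xi in zip(W, x))

def prefer(x, y):  # x > y under the weighted linear model
    return value(x) > value(y)

def translate(x, z):  # coordinatewise addition of lattice points
    return tuple(p + q for p, q in zip(x, z))
```

Translation invariance holds because value(x + z) = value(x) + value(z), so adding a common z to both sides leaves the comparison unchanged.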
The weights of the theories in the preceding paragraph are
arbitrary real numbers. Williams and Nassar (1966) present an
axiomatization that implies positive decreasing weights and is inter-
preted in a cash flows context. Their key independence axiom, which is
similar to Aumann's, says that x ≿ y iff x - y ≿ (0,...,0). They
axiomatize a general model in which u(x) = x1 + a2x2 + a2a3x3 + ... +
(a2a3...an)xn with 0 < ai < 1 for each i, and then show that an
Independent Compensatory Nontransitive Preferences
Of the two common forms of intransitivity in preference theory--
nontransitive indifference and nontransitive preference--more attention
has been given to nontransitive indifference. This is partly due to
the facts that nontransitive indifference is well suited to single-
attribute situations, and that nontransitive indifference can be
accommodated in several appealing utility models for special types of
strict partial orders. The best known of these are semiorders and
interval orders (Armstrong, 1948, 1950; Luce 1956, 1973; Scott and
Suppes, 1958; Roberts, 1970, 1971; Fishburn, 1970b, 1970c, 1973c;
Mirkin, 1972). We say that> on X is a semiorder if and only if it
is a strict partial order that satisfies the following two conditions
for all x,y,z,w E X:

x > z and y > w ⇒ x > w or y > z,

x > z and z > w ⇒ x > y or y > w.

If only the first of these conditions is assumed for >, then it is

referred to as an interval order. Semiorders were first defined and
examined by Luce (1956); interval orders were introduced by Fishburn.
It can be shown (Scott and Suppes, 1958; Scott, 1964; Fishburn,
1970c; Mirkin, 1972) that, when X is finite, > on X is a semiorder iff
there exists u: X → Re such that

x > y iff u(x) > u(y) + 1, for all x,y ∈ X;

and > on X is an interval order iff there are u: X → Re and ρ from X
into the positive reals such that

x > y iff u(x) > u(y) + ρ(y), for all x,y ∈ X.

Extensions for infinite X are discussed by Fishburn (1973c). When
X ⊆ X1 × ... × Xn, the preceding representations can be given an additive
utility form by replacing u(x) by Σui(xi). Axioms for these cases
when X is finite are noted by Fishburn (1970b), and Luce (1973)
presents an infinite-X axiomatization of additive semiorders when
X = X1 × X2.
In contrast to examples of nontransitive indifference, all
defensible examples of nontransitive preferences that I am aware of
(May, 1954; Davidson et al., 1955; Weinstein, 1968; Tversky, 1969;
Lichtenstein and Slovic, 1971; Schwartz, 1972) are multiattribute
examples. The typical example suggests that preferences between
different pairs of alternatives can be governed by different attributes,
or by different "weightings" of attributes, in such a way that
successive comparisons lead to cyclic preferences. Under the present
heading I will note two models that allow preference cycles and which
have definite compensatory aspects.
The first of these is an additive difference model proposed by
Morrison (1962) and Tversky (1969). Tversky's version takes X =
X1 × ... × Xn with

x > y iff h1(u1(x1) - u1(y1)) + ... + hn(un(xn) - un(yn)) > 0,

where ui: Xi → Re and hi is an increasing and continuous real valued
function on a real interval for which hi(-t) = -hi(t). Tversky
suggests that this model can represent situations in which the
individual first compares x and y on each attribute and then adds
these n difference comparisons to arrive at a holistic comparison. He
notes that the additive utility model is the special case of the
additive difference model in which each hi is linear and that, when
n ≥ 3, > is transitive in his model if and only if all hi are linear.
Since hi(0) = 0, the model requires each Xi to be independent of the
other attributes. Although Beals et al. (1968) have axiomatized an
additive difference model for similarity judgments, I am not aware of
an axiomatization of the additive difference model for preferences.
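A sketch of the additive difference form with n = 2, attribute values standing in for their own utilities, and an odd, increasing square-root difference function on the second attribute; all numbers are contrived for illustration. With these choices the model produces a preference cycle, the possibility Tversky emphasized.

```python
import math

# Additive difference sketch, n = 2. h2 is odd, increasing and concave for
# positive arguments, which is what permits intransitivity here.
def h1(t):
    return t

def h2(t):
    return math.copysign(1.5 * math.sqrt(abs(t)), t)

def prefer(x, y):  # x > y iff h1(x1 - y1) + h2(x2 - y2) > 0
    return h1(x[0] - y[0]) + h2(x[1] - y[1]) > 0

a, b, c = (0.0, 0.30), (0.5, 0.15), (1.0, 0.0)
# a > b and b > c on the small X2 edges, yet c > a on the large X1 gap.
```

Concavity of h2 makes two small X2 advantages together outweigh what the single combined advantage would, so stepwise comparisons cycle even though each pairwise judgment is made by the same rule.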
A second independent, compensatory and not necessarily transitive
model has been axiomatized by Luce (1977) for the X = X1 × X2 case.
Luce's model allows additive compensatory action between X1 and X2 to
change to lexicographic dominance by X1 as the X1 difference increases.
The lexicographic part of X1 in Luce's model is described by a
semiorder >> on X1 defined by x1 >> y1 iff (x1,x2) > (y1,y2) for all
x2,y2 ∈ X2. The compensatory part is described by the symmetric
complement C of >> on X1, where x1 C y1 iff there are a2,b2,c2,d2 ∈ X2
for which (x1,a2) ≿ (y1,b2) and (y1,c2) ≿ (x1,d2). With δ(x1) =
sup{u1(z1) - u1(x1): z1 C x1} for all x1 ∈ X1, Luce's representation is

(x1,x2) > (y1,y2) iff u1(x1) > u1(y1) + δ(y1) or
[-δ(x1) < u1(x1) - u1(y1) < δ(y1) and u1(x1) + u2(x2) > u1(y1) + u2(y2)].

Thus the basic additive model applies when u1(x1) - u1(y1) ∈ [-δ(x1),
δ(y1)], but otherwise X1 lexicographically dominates X2.

Noncompensatory Preferences
In discussing primarily noncompensatory evaluation theories I
shall assume for expositional simplicity that the set X of potential
things to which the relation > might apply is a product set
X1 × X2 × ... × Xn. Since there seems to be no widely accepted definition
of noncompensatory >, I shall begin with a definition proposed in
Fishburn (1976a).
For each i ∈ {1,...,n} let >i° be defined on Xi by

xi >i° yi iff (xi,a) > (yi,a) for all a in the product
of the other n - 1 attributes.

Note that >~ is different than >i defined earlier and is not predicated
on any notion of independence. We shall say that> is strongly
noncompensatory if and only if preference between any pair of n-tuples
in X is completely determined by the two disjoint subsets of attributes
on which each is better than the other according to the >i°. The
question "How much better?" is irrelevant for strongly noncompensatory
preferences.
Several aspects of this definition are worth noting. First, it
depends in no way on whether> is transitive. Second, it implies that
each Xi is independent of the other attributes, with >i° = >i. Hence-
forth we write >i in place of >i°. Third, it implies the strong
independence feature that holds for additive compensatory models,
namely that every I ⊆ {1,...,n} is independent of its complement. And

fourth, if the minimal compensatory definition presented earlier is

required to have either (xi,yj,a) > (yi,xj,a) or (yi,xj,b) > (xi,yj,b),
then a strongly noncompensatory> can never be minimally compensatory.
By extending the preference notation to disjoint subsets of
{1,...,n}, with I > J iff x > y whenever I = {i: xi >i yi} and
J = {i: yi >i xi}, and I ~ J iff x ~ y whenever I = {i: xi >i yi}
and J = {i: yi >i xi}, every strongly noncompensatory preference
structure can be efficiently characterized by > and ~ on the subsets.
A structure for which {l} > {2}, {l} > {3} and {2,3} > {l} indicates
that attribute 1 dominates either 2 or 3 by itself and that attributes
2 and 3 together dominate 1 by itself.
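The subset characterization can be sketched as a lookup table. The table below encodes the three-attribute example just given; treating every unlisted pair as "no strict preference" is an assumption for the illustration.

```python
# Sketch: strongly noncompensatory preference read off a table on the
# disjoint subsets I = {i: xi >i yi} and J = {i: yi >i xi}. The table
# encodes the example in the text; unlisted pairs yield no preference.
WINS = {
    (frozenset({1}), frozenset({2})),
    (frozenset({1}), frozenset({3})),
    (frozenset({2, 3}), frozenset({1})),
}

def prefer(x, y):
    """Attributes 1..3; numeric comparison plays the role of each >i."""
    I = frozenset(i for i in (1, 2, 3) if x[i - 1] > y[i - 1])
    J = frozenset(i for i in (1, 2, 3) if y[i - 1] > x[i - 1])
    return (I, J) in WINS
```

Only the identity of the winning attributes enters the table, never the sizes of the margins, which is exactly the "How much better?" irrelevance of the definition.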
As shown in Fishburn (1976a), the preference notation on disjoint
subsets of attributes can be used to characterize a variety of special
types of strongly noncompensatory preference structures. The most
commonly discussed of these is the lexicographic structure, which
obtains if and only if there is a permutation σ on {1,...,n} such that,
for all x and y in X,

x > y iff not (xi ~i yi) for some i, and xσ(i) >σ(i) yσ(i) for
the smallest i for which not (xσ(i) ~σ(i) yσ(i)).

Under this definition, Xσ(1) is the dominant attribute, Xσ(2) is the
next most important attribute, and Xσ(n) is the least important
attribute. Fishburn (1975a) shows that a strongly noncompensatory >
is lexicographic if > on X is a strict weak order and for each i there
are xi, yi and zi such that xi >i yi >i zi.
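With σ taken to be the identity permutation and numeric attributes (both assumptions for the illustration), the lexicographic structure reduces to the familiar tuple comparison:

```python
def lex_prefer(x, y):
    """x > y iff x beats y on the first coordinate where they differ."""
    for xi, yi in zip(x, y):
        if xi != yi:
            return xi > yi
    return False  # identical tuples: no strict preference
```

The noncompensatory character is visible at once: a minimal edge on the dominant attribute overrides any advantage, however large, on all later ones.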
Mathematical research on lexicographic preferences derives in large
part from Hausdorff's work (1957) on products of ordered sets. Its
emergence in economics owes much to Georgescu-Roegen (1954, 1968),
Hausner (1954) and Chipman (1960b, 1971). A survey of lexicographic
topics is provided by Fishburn (1974a). This survey includes a
discussion of nontransitive lexicographic preferences, which can arise
when the >i relations are semiorders or interval orders. It also notes
variations that occur when strict adherence to the lexicographic idea
is relaxed (Davidson et al., 1955; Coombs, 1964; Tversky, 1969). An
example of this, which follows ideas of Simon (1955), Georgescu-Roegen
(1954), Encarnación (1964a) and Ferguson (1965), defines > on X in
terms of relations >i' on the Xi for which xi >i' yi iff yi is an un-
acceptable or unsatisfactory level of Xi and xi is judged to be better
than yi (xi might be either satisfactory or unsatisfactory) on the
basis of the ith attribute or criterion. The definition takes x > y
iff xi >i' yi for some i and this is true for the smallest i for which
either xi >i' yi or yi >i' xi. In this modified scheme, criterion 1 is
the most important criterion and criterion n is the least important.
The preceding definition is fully lexicographic in terms of the
>i' relations, and > as thus defined is a strict weak order on X.
Several closely related models, which have been discussed by Coombs
(1964), Dawes (1964) and Einhorn (1970), among others, are designed to
partition the alternatives in X into an acceptable subset A and an un-
acceptable subset X\A. When each Xi is partitioned into an acceptable
subset Ai and its unacceptable complement Xi\Ai, the general model
under consideration has x ∈ A iff {i: xi ∈ Ai} is contained in a
specified nonempty family F of nonempty acceptable subsets of
{1,...,n}. If F = {{1,...,n}} then the model is conjunctive with x
acceptable iff every xi is acceptable. On the other hand, if F
contains all nonempty subsets of {l, ... ,n} then the model is said to
be disjunctive. From an evaluative viewpoint, each model of this type
(one for each F) establishes a strict weak order on X that has at
most two indifference classes, namely A and X\A.
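The generic F model can be sketched directly. Reading "is contained in" as set membership in F is an interpretive assumption here; the attribute levels are illustrative.

```python
# Sketch of the generic F model: x is acceptable iff the index set of its
# acceptable attribute levels is a member of the family F.
def acceptable(x, A, F):
    """A[i] is the acceptable subset of attribute i's levels."""
    ok = frozenset(i for i, xi in enumerate(x) if xi in A[i])
    return ok in F

A = [{"good"}, {"good"}]  # illustrative acceptable levels, two attributes
conjunctive = {frozenset({0, 1})}                                  # all must pass
disjunctive = {frozenset({0}), frozenset({1}), frozenset({0, 1})}  # any one suffices
```

The two extremes correspond to the conjunctive and disjunctive models of the text; intermediate families F generate the other screening rules.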
Although the generic F model of the preceding paragraph is not
strongly noncompensatory under our earlier definition, it is sometimes
referred to as noncompensatory. Indeed, if we take xi >i yi iff
xi ∈ Ai and yi ∈ Xi\Ai, along with x > y iff x ∈ A and y ∈ X\A, then
under a reasonable regularity condition on F it follows that a
reversal of preferences of the form x > y and w > z is impossible when
{i: xi >i yi} = {i: zi >i wi} and {i: yi >i xi} = {i: wi >i zi}.
Hence the conjunctive, disjunctive and other F models have a very
definite noncompensatory flavor even though they are not strongly non-
Preference Intensity Comparisons
We conclude this section with remarks on preference intensity
comparisons. These comparisons may be either holistic or conditioned
on a particular attribute or criterion. For example, let >* and >*_i
be binary relations on X×X. Then (x,y) >* (z,w) could mean that the
degree of preference for x over y exceeds the degree of preference
for z over w, and (x,y) >*_i (z,w) could indicate that the difference
in preference between x and y on the basis of criterion i exceeds the
difference in preference between z and w on the basis of criterion i.
Some choice models that identify efficient sets of alternatives from
dominance or outranking relations have overtones of the latter type of
comparison. The outranking models developed by Roy (1971, 1973, 1974)

and others are a case in point. The usage of the scoring functions g_j
in these models strongly suggests a degree-of-preference orientation.
The basic theory of preference-difference comparisons extends
from the ordered metric rankings of Coombs (1964), Siegel (1956) and
Fishburn (1964) through a number of theories (Frisch, 1926; Alt, 1936;
Suppes and Winet, 1955; Suppes and Zinnes, 1963; Pfanzagl, 1959) that
imply the existence of a real valued u on X such that

(x,y) >* (z,w) iff u(x) - u(y) > u(z) - u(w), for all x,y,z,w ∈ X.

Inexact or vague degree-of-preference theories are discussed by Adams

(1965) and Fishburn (1970d). Several of these theories as well as
others are discussed in Fishburn (1970a, Chapter 6) and Krantz et al.
(1971, Chapter 4). Their mathematical structures are similar to
those in two-attribute additive utility theories; the key difference-
comparison axioms are like the independence axioms in the n = 2
additivity theories.
When u satisfies the utility difference representation of the
preceding paragraph and X = X_1 × ... × X_n, it may be possible to
express u in an additive way as u(x) = Σu_i(x_i). Axioms for this
case are
presented by Krantz et al. (1971, p. 492) and Dyer and Sarin (1977).
The latter authors begin from the perspective of an additive repre-
sentation and ask what must be true so that u = Σu_i can be interpreted
in a meaningful way as a function whose differences preserve preference
intensities. The opposite approach begins with the preceding dif-
ference representation and asks what must be true so that its u can be
written in the additive form. It is easily seen (Fishburn, 1970a,
p. 93) that this can be done if and only if there is a fixed element
(e_1,...,e_n) in X such that, whenever i ∈ {1,...,n} and x_j = y_j and
z_j = w_j for all j ≠ i, (x,y) >* (z,w) iff ((x_i, e_j for j ≠ i),
(y_i, e_j for j ≠ i)) >* ((z_i, e_j for j ≠ i), (w_i, e_j for j ≠ i)).
The utility difference representation in which u(x) can be written
as Σu_i(x_i) can be further specialized when all X_i are the same
except for their indices. Fishburn (1970a, Chapter 7) presents axioms
based on Debreu's topological approach which imply that u(x) can be
written as Σw_i p(x_i) with the w_i > 0. An additional stationarity
axiom, which says that (x_1,...,x_{n-1},e_0) > (y_1,...,y_{n-1},e_0)
iff (e_0,x_1,...,x_{n-1}) > (e_0,y_1,...,y_{n-1}) for some fixed e_0,
then implies the constant discount rate form in which
u(x) = Σa^{i-1} p(x_i). In the stationarity axiom (Koopmans, 1960),
which is similar to the temporal consistency axiom of Williams and
Nassar (1966), > is defined from >* by x > y iff (x,y) >* (y,y). In
general it is customary to define the basic preference relation in
this way when >* on X×X is taken as the primitive.

This section discusses evaluation theories in which the alter-
natives are probability measures p,q, ... in a set P of measures defined
on an algebra of subsets of a set X of decision consequences. For
expositional simplicity the measures in P will usually be referred to
as probability distributions or gambles on X. Even when X is not multi-
attribute, the probabilities and consequences constitute two primary
attributes so that the situation is essentially a multiattribute one.
The theories of the present section will be divided into three
main classes. The first class consists of special theories for a von
Neumann-Morgenstern utility function (1947) when X is equal to or a
subset of a product set X_1 × ... × X_n. The second class contains a
variety of stochastic dominance theories that for the most part assume
that X ⊆ Re and that consequence x is preferred to consequence y when
x > y. The third class involves a number of other theories of
comparison when X ⊆ Re and preference increases in x.
Multiattribute Expected Utility Theories
Throughout this subsection we shall assume that a preference
relation > on P satisfies the expected utility model

p > q iff ∫u(x)dp(x) > ∫u(x)dq(x), for all p,q ∈ P,

where u is a real valued utility function on X for which ∫u dp is
finite and well defined for all p ∈ P. Axioms for various cases of this
model are presented by von Neumann and Morgenstern (1947), Marschak
(1950), Herstein and Milnor (1953), Blackwell and Girshick (1954),
DeGroot (1970) and Fishburn (1970a, 1976b) among others. A brief
review of these and other expected utility theories, including ones
based on partial orders and lexicographic utilities, is given by
Fishburn (1977b). Also see Fishburn (1974a) for more on lexicographic
expected utility.
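For simple (finitely supported) measures the model reduces to a probability-weighted sum, which can be sketched directly; the square-root utility below is a hypothetical risk-averse choice:

```python
def expected_utility(gamble, u):
    # gamble: a simple measure given as (probability, consequence) pairs
    return sum(p * u(x) for p, x in gamble)

def eu_prefers(p, q, u):
    # p > q iff the expected utility of p exceeds that of q
    return expected_utility(p, u) > expected_utility(q, u)

u = lambda x: x ** 0.5                 # hypothetical concave (risk-averse) u
sure_thing = [(1.0, 49)]
coin_flip = [(0.5, 100), (0.5, 0)]
# E[u] = 7.0 for the sure 49 vs 5.0 for the gamble, despite its higher mean
assert eu_prefers(sure_thing, coin_flip, u)
```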
The first special form of a von Neumann-Morgenstern utility
function on multiattribute consequences that was axiomatized was the
additive form u(x) = Σu_i(x_i). This was done independently by
Fishburn (1965a) and Pollak (1967) for X = X_1 × ... × X_n. Later
Fishburn (1971) proved that, when X is an arbitrary subset of
X_1 × ... × X_n, u can be written additively if and only if p ~ q
whenever p and q are two gambles in P (the set of simple measures on
X) such that the marginal distribution of p on X_i equals the marginal
distribution of q on X_i for each i. When X = X_1 × ... × X_n, it
suffices to express this condition in terms of simple 50-50 gambles
(Fishburn, 1965a; Raiffa, 1969).
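The marginal-distribution condition can be illustrated numerically: under an additive u, two simple gambles with identical marginals on every X_i must receive equal expected utility, while a non-additive u can separate them (all functions and numbers below are hypothetical):

```python
from collections import defaultdict

def marginals(gamble, n):
    # marginal distribution of a simple measure on each attribute X_i
    out = [defaultdict(float) for _ in range(n)]
    for prob, x in gamble:
        for i, xi in enumerate(x):
            out[i][xi] += prob
    return out

def expected(gamble, u):
    return sum(prob * u(x) for prob, x in gamble)

u_add = lambda x: 2 * x[0] + 3 * x[1]        # an additive utility (assumed)
p = [(0.5, (0, 1)), (0.5, (1, 0))]
q = [(0.5, (0, 0)), (0.5, (1, 1))]
assert marginals(p, 2) == marginals(q, 2)        # identical marginals
assert expected(p, u_add) == expected(q, u_add)  # hence indifference

u_mult = lambda x: x[0] * x[1]               # non-additive: condition bites
assert expected(p, u_mult) != expected(q, u_mult)
```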
A different independence notion that is more similar to the idea
of independence in section 4 was introduced by Pollak (1967), Keeney
(1968, 1971, 1972), Raiffa (1969) and Meyer (1970). Usually referred
to as utility independence, it says that I ⊆ {1,...,n} is utility
independent of its complement I^c if and only if the preference order
over probability distributions on the product of the attributes in I
conditioned on fixed values of the attributes in I^c does not depend
on the fixed values of these other attributes. When I is utility
independent of I^c, and when X = X_1 × ... × X_n and P includes the
simple measures on X, it then follows from the basic expected utility
theory that there are real valued functions v_a and v_b on the product
of the X_i for i ∈ I^c with v_a > 0 and a real valued function w on
the product of the X_i for i ∈ I such that

u(x_1,...,x_n) = v_a(x_i for i ∈ I^c)w(x_i for i ∈ I) + v_b(x_i for i ∈ I^c).

If a sufficient number of the I ⊆ {1,...,n} are utility independent
of their complements (see previous references for details) it follows
that u on X = X_1 × ... × X_n is either additive or else has an
essentially multiplicative form u(x) = u_1(x_1)···u_n(x_n) in which
each u_i has constant sign. More complex combinatorial forms for u
arise when utility independence applies to more restricted families
of I.
Generalizations of utility independence have been considered for
the case in which X is an arbitrary subset of X_1 × X_2 (Fishburn,
1976b) and for X = X_1 × X_2 × ... × X_n with complete reversals and
empty orders allowed when the fixed values of the attributes in I^c
are changed (Fishburn, 1974b; Fishburn and Keeney, 1974, 1975). The
latter work ties into the sign dependence condition for multiplicative
utilities in the nonprobabilistic context and has the effect of
allowing v_a in the preceding paragraph to change sign or equal zero.
The former generalization illustrates the difficulties in obtaining
the decomposed form of the preceding paragraph when X is an infinite
subset of X_1 × X_2.
Utility independence breaks down when the individual's risk
attitude towards I (Keeney, 1973; Pollak, 1973) depends on the fixed
values of the other attributes. However, more complex independence

conditions can accommodate changes in conditional risk attitude. An

example is a bilateral independence notion (Fishburn, 1973d, 1974b,
1977a) that uses two sets of fixed values for the attributes in I^c
rather than one set as in utility independence. A general system of
fractional independence conditions that can use more than two sets of
fixed values for the conditioning attributes has been developed by
Farquhar (1975, 1976). Farquhar's theory is presently the most
general independence theory for multiattribute expected utility. It
gives rise to a great variety of specialized forms for u(x_1,...,x_n)
and includes utility independence and bilateral independence as
special cases.
In addition to the independence theories mentioned above, we note
that a closely related body of theory has been developed specifically
for the time-stream context (Fishburn, 1965b, 1970a; Meyer, 1970,
1977; Bell, 1974; Keeney and Raiffa, 1976, Chapter 9). This includes
forms for u such as u(x) = Σa^{i-1} p(x_i) that are designed for the
homogeneous product set context as well as more general additive and
multiplicative representations. In somewhat different veins, Kirkwood
(1976) has presented a notion of parametrically dependent preferences
and Fishburn (1977c) discusses the use of approximation theory
(Cheney, 1966; Lorentz, 1966) in estimating u(x_1,x_2).
Additional coverage of multiattribute expected utility theories
is provided by Farquhar (1977), Fishburn (1977a) and Keeney and
Raiffa (1976).
Stochastic Dominance Comparisons
Since a major problem in using expected utility theory is the
difficulty of accurately assessing the decision maker's utility
function, attention has been given to comparisons of distributions that
are based on limited information about u. If ∫u dp ≥ ∫u dq for every u
that satisfies the limited information then p is said to stochastically
dominate q with respect to that information. More precisely, suppose
that what is known about the decision maker's utility function within
the expected utility context can be described by a set U of real
valued functions on X such that every u ∈ U satisfies the given data
and every u ∉ U violates the data. Then we say that p stochastically
dominates q with respect to U, and write p ≥_U q, if and only if
∫u dp ≥ ∫u dq for all u ∈ U. And p strictly stochastically dominates
q with respect to U, or p >_U q, if and only if p ≥_U q and not (q ≥_U p).
In several interesting cases ≥_U can be conveniently stated in
terms of the distributions without direct reference to U. Hence some
definitions of stochastic dominance relations are based directly on

properties of the distributions. Examples will be given momentarily.

The viewpoint on stochastic dominance expressed here follows the
general treatment proposed by Brume11e and Vickson (1975) and Fishburn
(1975c). A comprehensive introduction to the theoretical side of
stochastic dominance is provided by Fishburn and Vickson (1977), and
Whitmore and Findlay (1977) includes a number of chapters on
applications and implementation. The rest of this subsection examines
stochastic dominance when X ⊆ Re, when X ⊆ Re^n and when X is arbitrary.
When X ⊆ Re, it is convenient to work with cumulative distribution
functions F and G rather than with their underlying measures p and q.
General definitions of first degree stochastic dominance (FSD,
represented by ≥_1) and second degree stochastic dominance (SSD,
represented by ≥_2) in the real line context are provided by

F ≥_1 G iff F(x) ≤ G(x) for all x ∈ Re,
F ≥_2 G iff ∫^x F(y)dy ≤ ∫^x G(y)dy for all x ∈ Re.

To avoid certain technical problems (see Tesfatsion, 1976; Fishburn

and Vickson, 1977) we shall assume that X is a closed and bounded
interval and that X includes the supports of F and G. It can then be
shown that F ≥_U G iff F ≥_1 G when U is the class of all strictly
increasing functions on X, or the class of all nondecreasing functions
on X, or some appropriately rich subset of one of these. The state-
ment "F ≥_U G iff F ≥_1 G" is a typical stochastic dominance theorem
that relates uniform expected utility comparisons (∫u dF ≥ ∫u dG for
all u ∈ U) to properties of F and G (F(x) ≤ G(x) for all x). Although
FSD is often defined in terms of a U class rather than by ≥_1, the
fact that different U classes (essentially involving nondecreasing
preferences over X) give equivalence to ≥_1 lends some support to the
definition used above. The type of FSD equivalence theorem noted
above appears to have been independently arrived at by Lehmann (1955),
Quirk and Saposnick (1962) and Fishburn (1964).
Second degree stochastic dominance is associated with nondecreasing
concave utility functions on X and corresponds to the notion of risk
aversion (Pratt, 1964; Arrow, 1965). A typical SSD theorem says that
F ≥_U G iff F ≥_2 G when U is the class of all nondecreasing concave u
on X. Here again several authors, including Hardy et al. (1934),
Fishburn (1964) and Hadar and Russell (1969) have independently
discovered this type of result. Other FSD/SSD references include
Hanoch and Levy (1969), Rothschild and Stiglitz (1970) and Hadar and

Russell (1971). Fishburn (1974c) discusses stochastic dominance

between convex combinations of distributions.
Additional developments of stochastic dominance for the X ⊆ Re
context are discussed by Whitmore (1970), Vickson (1975a, 1975b, 1976,
1977), Bawa (1975), Fishburn (1976c) and Meyer (1977).
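For distribution functions tabulated on a common equally spaced grid (an assumption of this sketch), the FSD and SSD definitions above reduce to a pointwise check and a cumulative-sum check respectively:

```python
def fsd(F, G):
    # first degree: F >=_1 G iff F(x) <= G(x) at every grid point
    return all(f <= g for f, g in zip(F, G))

def ssd(F, G, dx=1.0):
    # second degree: running integrals of F never exceed those of G
    cf = cg = 0.0
    for f, g in zip(F, G):
        cf += f * dx
        cg += g * dx
        if cf > cg + 1e-12:
            return False
    return True

# hypothetical CDF values on a common grid; F lies below G, so F >=_1 G,
# and FSD implies SSD
F = [0.0, 0.2, 0.5, 1.0]
G = [0.1, 0.4, 0.6, 1.0]
assert fsd(F, G) and ssd(F, G)
assert not fsd(G, F)
```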
Turning next to the multiattribute consequence context, we shall
assume that X is a rectangular (product of intervals) subset of Re^n.
For the FSD case let V_1 be the set of all nondecreasing Borel
measurable functions on X and define F ≥_1 G iff ∫u dF ≥ ∫u dG for
all u ∈ V_1, where F and G are multivariate distribution functions on
Re^n. The most general FSD results for this context appear to be
those obtained by Lehmann (1955). His general theorem says in effect
that F ≥_1 G iff ∫_B dF ≥ ∫_B dG for every increasing (x ∈ B, h ≥ 0
and x + h ∈ X imply x + h ∈ B) Borel subset B of X. This result has
recently been rediscovered by Levhari et al. (1975). If the
distributions are independent in their marginals in the sense that
F(x_1,...,x_n) = F_1(x_1)···F_n(x_n), and similarly for G, then
Lehmann shows that F ≥_1 G iff F_i ≥_1 G_i for each i. Some other
contributions to the FSD case have been made by Levy and Paroush
(1974a, 1974b).
For the multidimensional SSD case let V_2 be the set of all
nondecreasing and concave Borel measurable functions on X and define
F ≥_2 G iff ∫u dF ≥ ∫u dG for all u ∈ V_2. Various results for this
case have been obtained by Sherman (1951), Strassen (1965) and
Veinott (see Bessler and Veinott, 1966), and more recently by
Levhari et al. (1975), Peleg (1975) and Brumelle and Vickson (1975).
The basic theorem in this area (Strassen, 1965; Brumelle and Vickson,
1975) says that if X is bounded with x and y the random vectors for
F and G respectively, then F ≥_2 G if and only if there is a random
vector z such that y is equal in distribution to x + z and the
expected value of z given x is nonpositive (≤ (0,...,0)) with prob-
ability 1. In the independence context with the means of F and G
finite, we get F ≥_2 G iff F_i ≥_2 G_i for each i (Fishburn and
Vickson, 1977).
Finally, consider the case where X is arbitrary so that there is
no natural order (complete or partial) on its elements. Fishburn
(1964, 1974d, 1975d) shows how the basic ideas of stochastic dominance
can be used when the order of X is taken to be the decision maker's
preference order. For example, if ≾ is a weak order on X and p and q
are simple distributions with combined support A ⊆ X, then
Σu(x)p(x) ≥ Σu(x)q(x) for all u that preserve ≾ on A iff
p({x: a ≾ x}) ≥ q({x: a ≾ x}) for all a ∈ A. A large variety of
similar results for
other types of information about u are presented in the aforementioned
references. Fishburn (1977d) also proposes a definition of stochastic
dominance for the case in which the decision maker's preference
relation on the consequences may be intransitive.
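The simple-distribution result above can be checked directly: dominance for every order-preserving u reduces to comparing the probabilities of the upper sets {x: a ≾ x}. The consequence names and probabilities below are hypothetical, and ties in the weak order are ignored for simplicity:

```python
def sd_dominates(p, q, ranking):
    # ranking lists the combined support A from worst to best under the
    # (strict, for simplicity) order; p dominates q for every
    # order-preserving u iff p's probability of each upper set
    # {x: a is not preferred to x} is at least q's
    for k in range(len(ranking)):
        upper = ranking[k:]
        if sum(p.get(x, 0.0) for x in upper) < sum(q.get(x, 0.0) for x in upper):
            return False
    return True

ranking = ['lose', 'draw', 'win']              # hypothetical consequences
p = {'lose': 0.1, 'draw': 0.4, 'win': 0.5}
q = {'lose': 0.3, 'draw': 0.4, 'win': 0.3}
assert sd_dominates(p, q, ranking)
assert not sd_dominates(q, p, ranking)
```

This is the arbitrary-X analogue of the FSD check on cumulative distributions.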
Other Theories for Univariate Gambles
Our purpose in the rest of this section is to summarize multi-
attribute theories for the comparison of risky prospects that are not
directly based on expected utility. It will be assumed that X ⊆ Re with
X a set of potential returns on investment, net profits, or some other
variable for which preference increases in x.
The main class of theories under this heading are mean-risk
theories that, except for a general conception of risk discussed by
Coombs (1974) and Coombs and Huang (1968), assume that a larger mean
or expected return is preferred to a smaller mean, and a smaller risk
is preferred to a larger risk. For distribution F we shall let μ(F)
denote the mean of F and take R(F) as some real valued measure of the
risk of F. The models in this class can be differentiated in two main
ways: (1) the particular form of R(F) that is used, and (2) whether
the model is a dominance model, a completely ordered compensatory
model, or a completely ordered lexicographic model in which either the
risk measure or the mean dominates.
The most common risk measure discussed in the literature is the
variance σ²(F) or standard deviation σ(F) of distribution F
(Markowitz, 1952, 1959; Tobin, 1965; Sharpe, 1964; Lintner, 1965).
This has been modified in two ways by subtraction of the mean. Baumol
(1963) proposes R(F) = Kσ(F) - μ(F) with K > 0 (see also Bickel, 1969
and Agnew et al., 1969), and Pollatsek and Tversky (1970) axiomatize
the measure R(F) = θσ²(F) - (1 - θ)μ(F) with 0 < θ < 1. Other
measures of risk focus on low outcomes. These include two forms of
semivariance (Markowitz, 1959; Mao, 1970a, 1970b; Hogan and Warren,
1972, 1974; Porter, 1974), a generalized below-target measure of risk
(Fishburn, 1977e), the probability of ruin, the probability of loss
(Markowitz, 1959; Pruitt, 1962), and a weighted loss measure used by
Domar and Musgrave (1944). Stone (1973) presents a general model that
includes most of these measures as special cases.
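Several of the measures just listed are straightforward to compute for a simple distribution. The sketch below uses a below-target form E[(t - x)^α; x < t] in the spirit of Fishburn (1977e), with hypothetical numbers throughout:

```python
def moments(dist):
    # dist: (probability, return) pairs; returns (mean, variance)
    mean = sum(p * x for p, x in dist)
    var = sum(p * (x - mean) ** 2 for p, x in dist)
    return mean, var

def below_target_risk(dist, t, alpha=2.0):
    # generalized below-target measure: E[(t - x)**alpha over x < t];
    # alpha = 2, t = mean gives a semivariance-like quantity
    return sum(p * (t - x) ** alpha for p, x in dist if x < t)

def prob_loss(dist):
    # probability of a negative return
    return sum(p for p, x in dist if x < 0)

F = [(0.2, -10.0), (0.5, 5.0), (0.3, 20.0)]
mean, var = moments(F)
assert abs(mean - 6.5) < 1e-9
assert abs(prob_loss(F) - 0.2) < 1e-9
assert abs(below_target_risk(F, 0.0) - 20.0) < 1e-9
```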
The mean-risk dominance model says that distribution F strictly
dominates distribution G if and only if μ(F) ≥ μ(G), R(F) ≤ R(G), and
at least one of the inequalities is strict. The dominance relation in
this case is a strict partial order. When the dominance model is used,
the objective is to identify the efficient (undominated) set of feasible
risky prospects. The greatest success in this regard has been with the
mean-variance model used by Markowitz and others. This model, in both
its dominance and tradeoff forms, has been criticized on certain
logical grounds by several writers, including Borch (1963, 1969, 1974),
Feldstein (1969), Chipman (1973), Levy (1974) and Fishburn (1975e).
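The dominance rule and the resulting efficient set can be sketched as follows; the prospect names and (μ, R) pairs are hypothetical:

```python
def mr_dominates(a, b):
    # (mean, risk) pairs: a strictly dominates b iff its mean is no
    # smaller, its risk is no larger, and at least one inequality is strict
    return a[0] >= b[0] and a[1] <= b[1] and a != b

def efficient(prospects):
    # the efficient (undominated) set under mean-risk dominance
    return {name for name, mr in prospects.items()
            if not any(mr_dominates(other, mr)
                       for oname, other in prospects.items() if oname != name)}

prospects = {'A': (10.0, 4.0), 'B': (12.0, 4.0),
             'C': (8.0, 2.0), 'D': (7.0, 3.0)}
assert efficient(prospects) == {'B', 'C'}   # A is dominated by B, D by C
```

Because the dominance relation is only a strict partial order, B and C remain incomparable and both survive into the efficient set.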
The compensatory mean-risk tradeoff model assumes that there is a
real valued utility function u on (μ,R) pairs such that u increases in
μ, decreases in R, and has

F > G iff u(μ(F),R(F)) > u(μ(G),R(G)).

The preceding dominance relation is included in the preference

relation >, which is a strict weak order in the present case. On
occasion the utility function is further specialized, as in Van
Moeseke's (1963, 1965) model u(μ(F),σ(F)) = μ(F) + mσ(F). Questions
about μ-σ tradeoff curves and congruence with expected utility for
the compensatory u(μ,σ) model are discussed by Samuelson (1967, 1970),
Samuelson and Merton (1974), Feldstein (1969), Tsiang (1972, 1974),
Chipman (1973), and Levy (1974), among others. Tradeoff models that
associate R with low returns have been discussed by Mao (1970a, 1970b),
Conrath (1973), Fishburn (1977e) and Libby and Fishburn (1977).
When R(F) denotes the probability of ruin for distribution F, the
lexicographic mean-risk model with risk dominant has F > G iff
R(F) < R(G) or [R(F) = R(G) and μ(F) > μ(G)]. A related lexicographic
model seeks to maximize expected return subject to probability of ruin
or failure not exceeding a level specified by the decision maker
(Reder, 1947; Roy, 1952; Shubik, 1961; Encarnación, 1964b; Agnew et al.,
1969; Machol and Lerner, 1969; Joy and Barron, 1974), and Conrath
(1973) recommends a hybrid model that maximizes utility in a mean-risk
compensatory model subject to probability of ruin being acceptably
small. Fishburn (1975e) shows that a mean-risk lexicographic model in
which the mean is dominant can be logically implied by assumptions
that lie behind the mean-variance approach.
In addition to the mean-risk models mentioned above, choices and/
or preferences among univariate gambles have been examined on other
bases. These include higher order moments in addition to the mean and
variance (Lichtenstein, 1965; Alderfer and Bierman, 1970; Tsiang, 1972;
Payne, 1973) and a linear model for special types of gambles in which
the attractiveness of a gamble is a weighted sum of probability of
winning, probability of losing, amount that may be won, and amount that
may be lost (Slovic, 1967; Slovic and Lichtenstein, 1968, 1971;
Rapoport and Wallsten, 1972; Payne, 1973, 1975).


The third category of our basic trichotomy of evaluation theories
views each decision alternative or act as a function f that assigns a
consequence in X to each state in a set S of exclusive and exhaustive
states of the world (Savage, 1954). If S = {s_1,...,s_n} then, with
f(s_i) = x_i for each i, an act can be thought of as an n-tuple
(x_1,...,x_n) in X^n. In any case, f(s) denotes the consequence in X
that obtains when s is the true state of the world.
The most widely accepted normative theory for decision making under
uncertainty is the Ramsey-Savage personalistic expected utility theory
(Ramsey, 1931; Savage, 1954). This theory was first completely
axiomatized by Savage within an infinite-states context (see also
Fishburn (1970a)). Savage's axioms imply the existence of a finitely
additive probability measure π on 2^S and a real valued utility
function u on X such that, for all f,g: S → X,

f > g iff ∫u(f(s))dπ(s) > ∫u(g(s))dπ(s).

Here π is the individual's personal or subjective probability measure
on S and u is his utility function on the consequences. Subsequent
axiomatizations of this and related models for subjective expected
utility have been presented by Suppes (1956), Davidson and Suppes
(1956), Pratt et al. (1964), Jeffrey (1965), Bolker (1966, 1967),
Luce and Krantz (1971), Fishburn (1970a, 1973e) and Balch and Fishburn
(1974), among others. Several of these (Jeffrey, Bolker, Luce and
Krantz, Balch and Fishburn) are conditional models in which the
decision maker's probabilities depend on the act selected. In Savage's
model, π is independent of the acts.
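With a finite state set, the Savage representation reduces to a probability-weighted sum over states. A sketch with hypothetical states, consequences, personal probabilities and utilities:

```python
def seu(act, pi, u):
    # subjective expected utility of an act f: state -> consequence
    return sum(pi[s] * u[act[s]] for s in pi)

pi = {'rain': 0.3, 'shine': 0.7}               # hypothetical personal probabilities
u = {'wet': 0.0, 'dry': 1.0, 'sunburn': 0.6}   # hypothetical utilities on X
f = {'rain': 'dry', 'shine': 'sunburn'}        # e.g. carry an umbrella
g = {'rain': 'wet', 'shine': 'dry'}            # e.g. leave it at home
assert seu(f, pi, u) > seu(g, pi, u)           # 0.72 vs 0.70: f is preferred
```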
If the decision maker's probabilities are presumed to be known
then the present situation maps into the context of the preceding
section since each f induces a probability distribution on X. On the
other hand, if the utilities but not the probabilities are assumed to
be known precisely, then we encounter a situation that is dual to the
stochastic dominance approach of the preceding section. Fishburn
(1964, 1965c) and Barron (1973) examine various types of information
about π and show what must be true of u so that ∫u(f(s))dπ(s) ≥
∫u(g(s))dπ(s) for all π that satisfy the given information. Fishburn
(1964, Chapter 11) also examines the case where X = X_1 × ... × X_n
and u is
additive over the attributes in the context of incomplete information
on utilities and/or probabilities. It is shown that many of the
uniform expected utility comparison problems can be viewed as linear

programming problems. Other approaches for comparing subjective

expected utilities when the decision maker's probabilities are not
fully known are discussed in Fishburn et al. (1968).
A special case of the Ramsey-Savage approach for the finite-S
setting is the so-called Laplace criterion (1814) for which the states
are regarded as equally likely. This is sometimes defended on the
basis of the principle of insufficient reason (see, for example,
Fishburn, 1964, pp. 140-143), and axiomatizations of the criterion
have been given by Chernoff (1954) and Milnor (1954).
The other primary methods for comparing acts in a finite states
formulation are non-probabilistic approaches. The four main approaches
under this heading are maximin (Wald, 1950), maximax, Hurwicz-a, and
minimax loss (Savage, 1951). Summaries and criticisms of these
approaches have been presented by a number of writers, including
Goodman (1954), Milnor (1954), Luce and Raiffa (1957), Ackoff (1962),
Fishburn (1966) and MacCrimmon (1973). Milnor's article is especially
useful in that it differentiates among the approaches axiomatically.
Maximin says that f is better than g if min_i u(f(s_i)) >
min_i u(g(s_i)); maximax says that f is better if max_i u(f(s_i)) >
max_i u(g(s_i)); Hurwicz-α for 0 ≤ α ≤ 1 has f over g if
α max_i u(f(s_i)) + (1 - α) min_i u(f(s_i)) > α max_i u(g(s_i)) +
(1 - α) min_i u(g(s_i)); and minimax loss has f over g within the
context of a set A of acts if max_i [sup_A u(a(s_i)) - u(f(s_i))] <
max_i [sup_A u(a(s_i)) - u(g(s_i))].
The first two of these are based solely on simple preference comparisons
between consequences but the latter two have obvious cardinal utility
implications. Although all four can give different orderings of a set
A of acts, Savage (1954, p. 170) says that Wald's actual use of
maximin is equivalent to his own minimax loss approach. Extensive
discussions of this approach in statistical decision theory, including
its application to mixed acts or randomized strategies, are given by
Wald (1950) and Savage (1951, 1954). It is of course closely related
to the minimax solution approach for zero-sum games (von Neumann and
Morgenstern, 1947; Luce and Raiffa, 1957).
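The four criteria can be compared on a small table of utilities u(f(s_i)); the numbers are hypothetical, and the example shows that the criteria need not agree on which act to select:

```python
def maximin(M):
    # act whose worst-state utility is largest
    return max(M, key=lambda f: min(M[f]))

def maximax(M):
    # act whose best-state utility is largest
    return max(M, key=lambda f: max(M[f]))

def hurwicz(M, alpha):
    # alpha-weighted mix of each act's best and worst utilities
    return max(M, key=lambda f: alpha * max(M[f]) + (1 - alpha) * min(M[f]))

def minimax_loss(M):
    # loss in state i: best utility attainable in state i minus u(f(s_i))
    n = len(next(iter(M.values())))
    best = [max(M[f][i] for f in M) for i in range(n)]
    return min(M, key=lambda f: max(best[i] - M[f][i] for i in range(n)))

M = {'f': [2, 6], 'g': [4, 4], 'h': [0, 9]}   # hypothetical utility table
assert maximin(M) == 'g'                       # worst cases: 2, 4, 0
assert maximax(M) == 'h'                       # best cases: 6, 4, 9
assert hurwicz(M, 1.0) == maximax(M)           # alpha = 1 recovers maximax
assert minimax_loss(M) == 'f'                  # max losses: 3, 5, 4
```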

In this final section I shall indicate briefly some of the main
themes of assessment methodologies and provide an introduction to its
literature.
Most of the assessment procedures that relate to the evaluative
theories surveyed in earlier sections are concerned with the estimation
of either subjective probabilities or utilities and/or criteria

weights. Major aspects of the assessment of subjective or personal

probability are discussed in the early survey by Edwards (1954), the
important assessment articles by Winkler (1967a, 1967b), and the more
recent surveys by Savage (1971), Slovic and Lichtenstein (1971), and
Hogarth (1975). Various difficulties and biases that can affect
judgments of personal probabilities are noted in these works and in
Winkler and Murphy (1973) and Tversky and Kahneman (1974). The latter
review is backed up by a series of interesting studies on specific
sources of bias (Tversky and Kahneman, 1973; Kahneman and Tversky,
1972, 1973). Probability assessment is of course intimately concerned
with the evaluative theories discussed in sections 5 and 6.
Extensive coverage of multiattribute/multicriterion utility
assessment is provided by Fishburn (1967), Raiffa (1969), Slovic and
Lichtenstein (1971), MacCrimmon (1973), Green and Wind (1973), Huber
(1974), Kneppreth et al. (1974), Tell (1976), Johnson and Huber (1976)
and Keeney and Raiffa (1976). A rather large proportion of this
material is concerned with the additive utility form
u(x_1,...,x_n) = Σu_i(x_i) in both the nonprobabilistic and the risky
settings, although

Raiffa (1969) and Keeney and Raiffa (1976) extensively discuss the
multiplicative and other algebraic forms in the risky context of
section 5. Assessment procedures that do not necessarily presuppose
a decomposed form for a multiattribute utility function include
holistic procedures (Slovic and Lichtenstein, 1971; Fischer, 1977),
tradeoff methods (Thurstone, 1931; MacCrimmon and Toda, 1969; Mac-
Crimmon and Siu, 1974), and approximate fits to pairwise assessment
data (Smith et al. 1974). Some specialized procedures designed for
interactive programming methods are discussed in the references in
the penultimate paragraph of section 2.
The additive utility form has been considered in the general
Σu_i format as well as in specialized forms that are available when
each X_i is a set of real numbers. The latter include the weighted
linear model Σw_i x_i (Gulliksen, 1956; Srinivasan and Shocker, 1973b;
Srinivasan et al., 1973; Dawes and Corrigan, 1974; Einhorn and
Hogarth, 1975) and
the weighted Euclidean distance model with ideal point (Srinivasan and
Shocker, 1973a; Pekelman and Sen, 1974).
Two general approaches are used to estimate the u_i in the general
case. First, each u_i might be assessed separately up to compensating
scale weights (Galanter, 1962; Fishburn, 1967; Edwards, 1972; Keeney
and Raiffa, 1976; Fischer, 1977). Meyer and Pratt (1968) and Bradley
and Frey (1975) discuss u_i fits to limited data for a real variable
in the expected utility context. Second, one key u_i might be assessed

directly with the others determined from it through some form of

tradeoff data (Fishburn, 1967; MacCrimmon and Siu, 1974).
In the special Euclidean models only the criterion or aspect
weights may require estimation although functional forms and the
location of an ideal point may also be at issue. Many studies have
focussed on the weighting problem, including Churchman and Ackoff
(1954), Eckenrode (1965), Stimson (1969), Edwards (1972), Srinivasan
and Shocker (1973a, 1973b), Srinivasan et al. (1973), Pekelman and
Sen (1974) and Fischer (1977). Dawes and Corrigan (1974), Einhorn
and Hogarth (1975) and Wainer (1976) conclude that the weights in the
linear model may make little practical difference in certain types of
decision situations. Their work also suggests that a linear model of
a decision maker's selection process in a repetitive situation may do
better against an external criterion of success than the decision
maker himself. This has given rise to a man versus model of man
controversy. Recent contributions to this issue have been made by
Libby (1976a, 1976b) and Goldberg (1976).

Ackoff, R. L. Scientific Method: Optimizing Applied Research Decisions.
New York: Wiley, 1962.
Adams, E. W. Elements of a theory of inexact measurement. Philosophy
of Science 32 (1965), 205-228.
Agnew, N. H., Agnew, R. A., Rasmussen, J. and Smith, K. R. An
application of chance constrained programming to portfolio
selection in a casualty insurance firm. Management Science
15 (1969), B512-B520.
Alderfer, C. P. and Bierman, H., Jr. Choices with risk: beyond the
mean and variance. Journal of Business 43 (1970), 341-353.
Alt, F. Über die Messbarkeit des Nutzens. Zeitschrift für National-
ökonomie 7 (1936), 161-169.
Armstrong, W. E. Uncertainty and the utility function. Economic
Journal 58 (1948), 1-10.
Armstrong, W. E. A note on the theory of consumer's behavior.
Oxford Economic Papers 2 (1950), 119-122.
Arrow, K. J. Social Choice and Individual Values. New York: Wiley,
Arrow, K. J. Aspects of the Theory of Risk Bearing. Helsinki: Yrjö
Jahnssonin Säätiö, 1965.
Arrow, K. J. and Scitovsky, T. (eds.) Readings in Welfare Economics.
Homewood Illinois: Irwin, 1969.
Aumann, R. J. Utility theory without the completeness axiom.
Econometrica 30 (1962), 445-462.
Aumann, R. J. Utility theory without the completeness axiom: a
correction. Econometrica 32 (1964a), 210-212.

Aumann, R. J. Subjective programming. In Shelly, M. W. and Bryan,

G. L. (eds.) Human Judgments and Optimality. New York: Wiley,
1964b, 217-242.
Balch, M. S. and Fishburn, P. C. Subjective expected utility for
conditional primitives. In Balch, M. S., McFadden, D. L. and
Wu, S. Y. (eds), Essays on Economic Behavior under Uncertainty.
Amsterdam: North-Holland, 1974, 57-69.
Barron, F. H. Using Fishburn's techniques for analysis of decision
trees: some examples. Decision Sciences 4 (1973), 247-267.
Baumol, W. J. An expected gain-confidence limit criterion for
portfolio selection. Management Science 10 (1963), 174-182.
Bawa, V. S. Optimal rules for ordering uncertain prospects.
Journal of Financial Economics 2 (1975), 95-121.
Beals, R., Krantz, D. H. and Tversky, A. The foundations of multi-
dimensional scaling. Psychological Review 75 (1968), 127-142.
Bell, D. E. Evaluating time streams of income. OMEGA 2 (1974), 691-
Benayoun, R., de Montgolfier, J., Tergny, J. and Laritchev, O. Linear
programming with multiple objective functions: step method
(STEM). Mathematical Programming 1 (1971), 366-375.
Benayoun, R. and Tergny, J. Critères multiples en programmation
mathématique: une solution dans le cas linéaire. Revue
Française d'Informatique et de Recherche Opérationnelle 3 (1969).
Benson, H. P. and Morin, T. L. The vector maximization problem: proper
efficiency and stability. SIAM Journal on Applied Mathematics 32
(1977), 64-72.
Bergstresser, K., Charnes, A. and Yu, P. L. Generalization of
domination structures and nondominated solutions in multicriteria
decision making. Journal of Optimization Theory and Applications
18 (1976), 3-13.

Bessler, S. A. and Veinott, A. F. Optimal policy for dynamic multi-
echelon inventory models. Naval Research Logistics Quarterly 13
(1966), 355-389.
Bickel, S. H. Minimum variance and optimal asymptotic portfolios.
Management Science 16 (1969), 221-226.
Blackwell, D. and Girshick, M. A. Theory of Games and Statistical
Decisions. New York: Wiley, 1954.
Bolker, E. D. Functions resembling quotients of measures. Transactions
of the American Mathematical Society 124 (1966), 292-312.
Bolker, E. D. A simultaneous axiomatization of utility and subjective
probability. Philosophy of Science 34 (1967), 333-340.
Borch, K. A note on utility and attitudes to risk. Management Science
9 (1963), 697-700.
Borch, K. A note on uncertainty and indifference curves. Review of
Economic Studies 36 (1969), 1-4.
Borch, K. The rationale of the mean-standard deviation analysis:
comment. American Economic Review 64 (1974), 428-430.

Boyd, D. A methodology for analyzing decision problems involving
complex preference assessments. Ph.D. dissertation, Stanford
University, 1970.
Bradley, S. and Frey, S. C., Jr. Bounds for preference function
assessment. Management Science 21 (1975), 1308-1319.
Brumelle, S. L. and Vickson, R. G. A unified approach to stochastic
dominance. In Ziemba, W. T. and Vickson, R. G. (eds.),
Stochastic Optimization Models in Finance. New York: Academic
Press, 1975, 101-113.
Burness, H. S. Impatience and the preference for advancement in the
timing of satisfactions. Journal of Economic Theory 6 (1973),
Burness, H. S. On the role of separability assumptions in determining
impatience implications. Econometrica 44 (1976), 67-78.
Charnes, A. and Cooper, W. W. Management Models and Industrial
Applications of Linear Programming. New York: Wiley, 1961.
Charnes, A. and Cooper, W. W. Goal programming and constrained
regression - a comment. OMEGA 3 (1975), 403-409.
Charnes, A. and Cooper, W. W. Goal programming and multiple objective
optimizations. Part 1. European Journal of Operational Research
1 (1977), 39-54.
Charnes, A., Cooper, W. W., Klingman, D. and Niehaus, R. J. Explicit
solutions in convex goal programming. Management Science 22
Cheney, E. W. Introduction to Approximation Theory. New York:
McGraw-Hill, 1966.
Chernoff, H. Rational selection of decision functions. Econometrica
22 (1954), 422-443.
Chipman, J. S. Stochastic choice and subjective probability. In
Willner, D. (ed.), Decisions, Values and Groups, Vol. 1. New York:
Pergamon Press, 1960a, 70-95.
Chipman, J. S. The foundations of utility. Econometrica 28 (1960b),
Chipman, J. Discussion. American Economic Review 56 (1966), 45-47.
Chipman, J. S. On the lexicographic representation of preference
orderings. In Chipman, J. S., Hurwicz, L., Richter, M. K. and
Sonnenschein, H. F. (eds.), Preferences, Utility, and Demand.
New York: Harcourt Brace Jovanovich, 1971, 276-288.
Chipman, J. S. The ordering of portfolios in terms of mean and
variance. Review of Economic Studies 40 (1973), 167-190.
Chipman, J. S., Hurwicz, L., Richter, M. K. and Sonnenschein, H. F.
Preferences, Utility, and Demand. New York: Harcourt Brace
Jovanovich, 1971.
Churchman, C. W. and Ackoff, R. L. An approximate measure of value.
Operations Research 2 (1954), 172-187.
Coombs, C. H. A Theory of Data. New York: Wiley, 1964.
Coombs, C. H. Portfolio theory and the measurement of risk.
University of Michigan, MMPP 74-16, 1974.
Coombs, C. H. and Huang, L. C. A portfolio theory of risk preference.
University of Michigan, MMPP 68-5, 1968.

DaCunha, N. O. and Polak, E. Constrained minimization under vector-
valued criteria in finite dimensional spaces. Journal of
Mathematical Analysis and Applications 19 (1967), 103-124.
Davidson, D., McKinsey, J. C. C. and Suppes, P. Outlines of a formal
theory of value, I. Philosophy of Science 22 (1955), 140-160.
Davidson, D. and Suppes, P. A finitistic axiomatization of
subjective probability and utility. Econometrica 24 (1956),
Davis, O. A., Hinich, M. J. and Ordeshook, P. C. An expository
development of a mathematical model of the electoral process.
American Political Science Review 64 (1970), 426-448.
Davis, O. A., DeGroot, M. H. and Hinich, M. J. Social preference
orderings and majority rule. Econometrica 40 (1972), 147-158.
Dawes, R. M. Social selection based on multidimensional criteria.
Journal of Abnormal and Social Psychology 68 (1964), 104-109.
Dawes, R. M. and Corrigan, B. Linear models in decision making.
Psychological Bulletin 81 (1974), 95-106.
Debreu, G. Representation of a preference ordering by a numerical
function. In Thrall, R. M., Coombs, C. H. and Davis, R. L.
(eds.), Decision Processes. New York: Wiley, 1954, 159-165.
Debreu, G. Theory of Value. New Haven, Connecticut: Yale University
Press, 1959a.
Debreu, G. Cardinal utility for even-chance mixtures of pairs of
sure prospects. Review of Economic Studies 26 (1959b), 174-177.
Debreu, G. Topological methods in cardinal utility theory.
Mathematical Methods in the Social Sciences 1959. Stanford,
California: Stanford University Press, 1960, 16-26.
Debreu, G. Continuity properties of paretian utility. International
Economic Review 5 (1964), 285-293.
DeGroot, M. Optimal Statistical Decisions. New York: McGraw-Hill, 1970.
Diamond, P. A. The evaluation of infinite utility streams.
Econometrica 33 (1965), 170-177.
Domar, E. D. and Musgrave, R. A. Proportional income taxation and
risk-taking. Quarterly Journal of Economics 58 (1944), 389-422.
Dyer, J. S. Interactive goal programming. Management Science 19
(1972), 62-70.
Dyer, J. S. A time-sharing computer program for the solution of the
multiple criteria problem. Management Science 19 (1973), 1379-
Dyer, J. S. and Sarin, R. K. An axiomatization of cardinal additive
conjoint measurement theory. University of California, Los
Angeles, 1977.
Eckenrode, R. T. Weighting multiple criteria. Management Science 12
(1965), 180-192.
Edwards, W. The theory of decision making. Psychological Bulletin 51
(1954), 380-417.
Edwards, W. Social utilities. Engineering Economist (1972), 119-129.
Einhorn, H. J. The use of nonlinear, noncompensatory models in
decision making. Psychological Bulletin 73 (1970), 221-230.

Einhorn, H. J. and Hogarth, R. M. Unit weighting schemes for decision
making. Organizational Behavior and Human Performance 13 (1975).
Encarnacion, J., Jr. A note on lexicographical preferences.
Econometrica 32 (1964a), 215-217.
Encarnacion, J., Jr. Constraints and the firm's utility function.
Review of Economic Studies 31 (1964b), 113-120.
Farquhar, P. H. A fractional hypercube decomposition theorem for
multiattribute utility functions. Operations Research 23 (1975),
Farquhar, P. H. Pyramid and semicube decompositions of multiattribute
utility functions. Operations Research 24 (1976), 256-271.
Farquhar, P. H. A survey of multiattribute utility theory and
applications. TIMS Studies in the Management Sciences 6 (1977),
Feldstein, M. S. Mean-variance analysis in the theory of liquidity
preference and portfolio selection. Review of Economic Studies
36 (1969), 5-12.
Ferguson, C. E. The theory of multidimensional utility analysis in
relation to multiple-goal business behavior: a synthesis.
Southern Economic Journal 32 (1965), 169-175.
Fischer, G. W. Convergent validation of decomposed multi-attribute
utility assessment procedures for risky and riskless decisions.
Organizational Behavior and Human Performance 18 (1977), 295-315.
Fishburn, P. C. Decision and Value Theory. New York: Wiley, 1964.
Fishburn, P. C. Independence in utility theory with whole product
sets. Operations Research 13 (1965a), 28-45.
Fishburn, P. C. Markovian dependence in utility theory with whole
product sets. Operations Research 13 (1965b), 238-257.
Fishburn, P. C. Analysis of decisions with incomplete knowledge of
probabilities. Operations Research 13 (1965c), 217-237.
Fishburn, P. C. Decision under uncertainty: an introductory
exposition. Journal of Industrial Engineering 17 (1966), 341-353.
Fishburn, P. C. Methods of estimating additive utilities. Management
Science 13 (1967), 435-453.
Fishburn, P. C. Utility Theory for Decision Making. New York:
Wiley, 1970a.
Fishburn, P. C. Intransitive indifference in preference theory: a
survey. Operations Research 18 (1970b), 207-228.
Fishburn, P. C. Intransitive indifference with unequal indifference
intervals. Journal of Mathematical Psychology 7 (1970c), 144-149.
Fishburn, P. C. Utility theory with inexact preferences and degrees of
preference. Synthese 21 (1970d), 204-221.
Fishburn, P. C. Additive representations of real-valued functions on
subsets of product sets. Journal of Mathematical Psychology 8
(1971), 382-388.
Fishburn, P. C. Interdependent preferences on finite sets. Journal of
Mathematical Psychology 9 (1972), 225-236.
Fishburn, P. C. The Theory of Social Choice. Princeton, N. J.:
Princeton University Press, 1973a.

Fishburn, P. C. Binary choice probabilities: on the varieties of
stochastic transitivity. Journal of Mathematical Psychology 10
(1973b), 327-352.
Fishburn, P. C. Interval representations for interval orders and
semiorders. Journal of Mathematical Psychology 10 (1973c),
Fishburn, P. C. Bernoullian utilities for multiple-factor situations.
In Cochrane, J. L. and Zeleny, M. (eds.), Multiple Criteria
Decision Making. Columbia, South Carolina: University of South
Carolina Press, 1973d, 47-61.
Fishburn, P. C. A mixture-set axiomatization of conditional subjective
expected utility. Econometrica 41 (1973e), 1-25.
Fishburn, P. C. Lexicographic orders, utilities and decision rules:
a survey. Management Science 20 (1974a), 1442-1471.
Fishburn, P. C. von Neumann-Morgenstern utility functions on two
attributes. Operations Research 22 (1974b), 35-45.
Fishburn, P. C. Convex stochastic dominance with continuous
distribution functions. Journal of Economic Theory 7 (1974c),
Fishburn, P. C. Convex stochastic dominance with finite consequence
sets. Theory and Decision 5 (1974d), 119-137.
Fishburn, P. C. Axioms for lexicographic preferences. Review of
Economic Studies 42 (1975a), 415-419.
Fishburn, P. C. Unbounded expected utility. Annals of Statistics
3 (1975b), 884-896.
Fishburn, P. C. Separation theorems and expected utilities. Journal of
Economic Theory 11 (1975c), 16-34.
Fishburn, P. C. Stochastic dominance: theory and applications. In
White, D. J. and Bowen, K. C. (eds.), The Role and Effectiveness
of Theories of Decision in Practice. London: Hodder and
Stoughton, 1975d.
Fishburn, P. C. On the foundations of mean-variance analysis.
Mimeographed, The Pennsylvania State University, 1975e.
Fishburn, P. C. Noncompensatory preferences. Synthese 33 (1976a),
Fishburn, P. C. Utility independence on subsets of product sets.
Operations Research 24 (1976b), 245-255.
Fishburn, P. C. Continua of stochastic dominance relations for
bounded probability distributions. Journal of Mathematical
Economics 3 (1976c), 295-311.
Fishburn, P. C. Multiattribute utilities in expected utility theory.
In Bell, D. E., Keeney, R. L. and Raiffa, H. (eds.), Conflicting
Objectives. New York: Wiley, 1977a.
Fishburn, P. C. Expected utility theories: a review note. In Henn, R.
and Moeschlin, O. (eds.), Mathematical Economics and Game Theory:
Essays in Honor of Oskar Morgenstern. Berlin and New York:
Springer-Verlag, 1977b, 197-207.
Fishburn, P. C. Approximations of two-attribute utility functions.
Mathematics of Operations Research 2 (1977c), 30-44.
Fishburn, P. C. Stochastic dominance without transitive preferences.
The Pennsylvania State University, 1977d.

Fishburn, P. C. Mean-risk analysis with risk associated with below-
target returns. American Economic Review 67 (1977e), 116-126.
Fishburn, P. C. Ordinal preferences and uncertain lifetimes.
Econometrica 46 (1978).
Fishburn, P. C. and Keeney, R. L. Seven independence concepts and
continuous multiattribute utility functions. Journal of
Mathematical Psychology 11 (1974), 294-327.
Fishburn, P. C. and Keeney, R. L. Generalized utility independence and
some implications. Operations Research 23 (1975), 928-940.
Fishburn, P. C., Murphy, A. H. and Isaacs, H. H. Sensitivity of
decisions to probability estimation errors: a reexamination.
Operations Research 16 (1968), 254-267.
Fishburn, P. C. and Vickson, R. G. Theoretical foundations of
stochastic dominance. In Whitmore, G. A. and Findlay, M. C.
(eds.), Stochastic Dominance: An Approach to Decision Making
Under Risk. Lexington, Mass.: D. C. Heath and Co., 1977.
Frisch, R. Sur un problème d'économie pure. Norsk Matematisk
Forenings Skrifter 1 (1926), 1-40.
Galanter, E. The direct measurement of utility and subjective
probability. American Journal of Psychology 75 (1962), 208-220.
Geoffrion, A. M. Proper efficiency and the theory of vector
maximization. Journal of Mathematical Analysis and Applications
22 (1968), 618-630.
Geoffrion, A. Vector maximal decomposition programming. Working
paper No. 164, University of California, Los Angeles, 1970.
Geoffrion, A. M., Dyer, J. S. and Feinberg, A. An interactive approach
for multi-criterion optimization, with an application to the
operation of an academic department. Management Science 19 (1972),
Georgescu-Roegen, N. Choice, expectations and measurability.
Quarterly Journal of Economics 68 (1954), 503-534.
Georgescu-Roegen, N. Utility. International Encyclopedia of the
Social Sciences 16 (1968), 236-267.
Goldberg, L. R. Man versus model of man: just how conflicting is
that evidence? Organizational Behavior and Human Performance
16 (1976), 13-22.
Goodman, L. A. On methods of amalgamation. In Thrall, R. M.,
Coombs, C. H. and Davis, R. L. (eds.), Decision Processes.
New York: Wiley, 1954, 39-48.
Gorman, W. M. Symposium on Aggregation: the structure of utility
functions. Review of Economic Studies 35 (1968), 367-390.
Green, P. E. and Wind, Y. Multiattribute Decisions in Marketing.
Hinsdale, Illinois: Dryden Press, 1973.
Gulliksen, H. Measurement of subjective values. Psychometrika 21
(1956), 229-244.
Hadar, J. and Russell, W. R. Rules for ordering uncertain prospects.
American Economic Review 59 (1969), 25-34.
Hadar, J. and Russell, W. R. Stochastic dominance and diversification.
Journal of Economic Theory 3 (1971), 288-305.
Hanoch, G. and Levy, H. The efficiency analysis of choices involving
risk. Review of Economic Studies 36 (1969), 335-346.

Hardy, G. H., Littlewood, J. E. and Polya, G. Inequalities.
Cambridge, England: Cambridge University Press, 1934. (Second
edition, 1967)
Hausdorff, F. Set Theory. New York: Chelsea, 1957. (Translated from
the third (1937) German edition of Mengenlehre.)
Hausner, M. Multidimensional utilities. In Thrall, R. M., Coombs,
C. H. and Davis, R. L. (eds.), Decision Processes. New York:
Wiley, 1954, 167-180.
Herstein, I. N. and Milnor, J. An axiomatic approach to measurable
utility. Econometrica 21 (1953), 291-297.
Hirsch, G. Logical foundations, analysis and development of multi-
criterion methods. Ph.D. dissertation, University of
Pennsylvania, 1976.
Hogan, W. W. and Warren, J. M. Computation of the efficient boundary
in the E-S portfolio selection model. Journal of Financial and
Quantitative Analysis 7 (1972), 1881-1896.
Hogan, W. W. and Warren, J. M. Toward the development of an
equilibrium capital-market model based on semivariance. Journal
of Financial and Quantitative Analysis 9 (1974), 1-11.
Hogarth, R. M. (with comments by Winkler, R. L. and Edwards, W.)
Cognitive processes and the assessment of subjective probability
distributions. Journal of the American Statistical Association
70 (1975), 271-294.
Houthakker, H. S. Revealed preference and the utility function.
Economica 17 (1950), 159-174.
Houthakker, H. S. The present state of consumption theory.
Econometrica 29 (1961), 704-740.
Huber, G. P. Methods for quantifying subjective probabilities and
multiattribute utilities. Decision Sciences 5 (1974), 430-458.
Jaffray, J.-Y. On the extension of additive utilities to infinite sets.
Journal of Mathematical Psychology 11 (1974), 431-452.
Jeffrey, R. C. The Logic of Decision. New York: McGraw-Hill, 1965.
Johnsen, E. Studies in Multiobjective Decision Models. Lund,
Sweden: Studentlitteratur, 1968.
Johnson, E. A. and Huber, G. P. The technology of utility assess-
ment. University of Wisconsin, 1976.
Joy, O. M. and Barron, F. H. Behavioral risk constraints in capital
budgeting. Journal of Financial and Quantitative Analysis 9
(1974), 763.
Kahneman, D. and Tversky, A. Subjective probability: a judgment of
representativeness. Cognitive Psychology 3 (1972), 430-454.
Kahneman, D. and Tversky, A. On the psychology of prediction.
Psychological Review 80 (1973), 237-251.
Kannai, Y. Existence of a utility in infinite dimensional partially
ordered spaces. Israel Journal of Mathematics 1 (1963), 229-234.
Keeney, R. L. Quasi-separable utility functions. Naval Research
Logistics Quarterly 15 (1968), 551-565.
Keeney, R. L. Utility independence and preferences for multiattributed
consequences. Operations Research 19 (1971), 875-893.
Keeney, R. L. Utility functions for multiattributed consequences.
Management Science 18 (1972), 276-287.

Keeney, R. L. Risk independence and multiattributed utility functions.
Econometrica 41 (1973), 27-34.
Keeney, R. L. and Raiffa, H. Decisions with Multiple Objectives:
Preferences and Value Tradeoffs. New York: Wiley, 1976.
Kirkwood, Craig W. Parametrically dependent preferences for multi-
attributed consequences. Operations Research 24 (1976), 92-103.
Kneppreth, N. P., Gustafson, D., Leifer, R. P. and Johnson, E. M.
Techniques for the assessment of worth. U. S. Army Research
Institute for the Behavioral and Social Sciences, 1974.
Koopmans, T. C. Stationary ordinal utility and impatience.
Econometrica 28 (1960), 287-309.
Koopmans, T. C. Representation of preference orderings with independent
components of consumption. In McGuire, C. B. and Radner, R.
(eds.), Decision and Organization. North-Holland Publishing Co.,
1972a, 57-78.
Koopmans, T. C. Representation of preference orderings over time. In
McGuire, C. B. and Radner, R. (eds.), Decision and Organization.
North-Holland Publishing Co., 1972b, 79-100.
Koopmans, T. C., Diamond, P. A. and Williamson, R. E. Stationary
utility and time perspective. Econometrica 32 (1964), 82-100.
Kornbluth, J. S. H. A survey of goal programming. OMEGA 1 (1973),
Krantz, D. H. Conjoint measurement: the Luce-Tukey axiomatization
and some extensions. Journal of Mathematical Psychology 1 (1964),
Krantz, D. H., Luce, R. D., Suppes, P. and Tversky, A. Foundations of
Measurement. New York: Academic Press, 1971.
Lancaster, K. J. A new approach to consumer theory. Journal of
Political Economy 74 (1966), 132-157.
Lancaster, K. J. Consumer Demand: A New Approach. New York:
Columbia University Press, 1971.
Lancaster, K. The theory of household behavior: some foundations.
Annals of Economic and Social Measurement 4 (1975), 5-21.
Laplace, Pierre Simon de. Essai philosophique sur les probabilités.
Paris, 1814; A Philosophical Essay on Probabilities (translation).
New York: Dover Publications, 1951.
Lee, S. M. Goal Programming for Decision Analysis. Philadelphia:
Auerbach Publishers, 1972.
Lehmann, E. Ordered families of distributions. Annals of Mathematical
Statistics 26 (1955), 399-419.
Levhari, D., Paroush, J. and Peleg, B. Efficiency analysis for multi-
variate distributions. Review of Economic Studies 42 (1975),
Levy, H. The rationale of the mean-standard deviation analysis:
comment. American Economic Review 64 (1974), 434-441.
Levy, H. and Paroush, J. Toward multivariate efficiency criteria.
Journal of Economic Theory 7 (1974a), 129-142.
Levy, H. and Paroush, J. Multi-period stochastic dominance.
Management Science 21 (1974b), 428-435.
Libby, R. Man versus model of man: some conflicting evidence. Organi-
zational Behavior and Human Performance 16 (1976a), 1-12.

Libby, R. Man versus model of man: the need for a nonlinear model.
Organizational Behavior and Human Performance 16 (1976b), 23-26.
Libby, R. and Fishburn, P. C. Behavioral models of risk taking in
capital budgeting. Journal of Accounting Research (1977).
Lichtenstein, S. Bases for preferences among three-outcome bets.
Journal of Experimental Psychology 67 (1965), 162-169.
Lichtenstein, S. and Slovic, P. Reversals of preference between bids
and choices in gambling decisions. Journal of Experimental
Psychology 89 (1971), 46-55.
Lintner, J. The valuation of risk assets and the selection of risky
investments in stock portfolios and capital budgets. Review of
Economics and Statistics 47 (1965), 13-37.
Lorentz, G. G. Approximation of Functions. New York: Holt, Rinehart
and Winston, 1966.
Luce, R. D. Semiorders and a theory of utility discrimination.
Econometrica 24 (1956), 178-191.
Luce, R. D. A probabilistic theory of utility. Econometrica 26
(1958), 193-224.
Luce, R. D. Individual Choice Behavior. New York: Wiley, 1959.
Luce, R. D. Two extensions of conjoint measurement. Journal of
Mathematical Psychology 3 (1966), 348-370.
Luce, R. D. Three axiom systems for additive semiordered structures.
SIAM Journal on Applied Mathematics 25 (1973), 41-53.
Luce, R. D. Lexicographic tradeoff structures. Mimeographed, Harvard
University, 1977.
Luce, R. D. and Krantz, D. H. Conditional expected utility.
Econometrica 39 (1971), 253-271.
Luce, R. D. and Raiffa, H. Games and Decisions. New York: Wiley, 1957.
Luce, R. D. and Suppes, P. Preference, utility and subjective prob-
ability. In Luce, R. D., Bush, R. R. and Galanter, E. (eds.),
Handbook of Mathematical Psychology, Vol. III. New York:
Wiley, 1965, 249-410.
Luce, R. D. and Tukey, J. W. Simultaneous conjoint measurement: a
new type of fundamental measurement. Journal of Mathematical
Psychology 1 (1964), 1-27.
MacCrimmon, K. R. An overview of multiple objective decision making.
In Cochrane, J. L. and Zeleny, M. (eds.), Multiple Criteria
Decision Making. Columbia, South Carolina: University of South
Carolina Press, 1973, 18-44.
MacCrimmon, K. R. and Siu, J. K. Making trade-offs. Decision Sciences
5 (1974), 680-704.
MacCrimmon, K. R. and Toda, M. The experimental determination of in-
difference curves. Review of Economic Studies 36 (1969), 433-451.
Machol, R. E. and Lerner, E. M. Risk, ruin, and investment analysis.
Journal of Financial and Quantitative Analysis 4 (1969), 473-492.
Mao, J. C. T. Survey of capital budgeting: theory and practice.
Journal of Finance 25 (1970a), 349-360.
Mao, J. C. T. Models of capital budgeting, E-V vs E-S. Journal of
Financial and Quantitative Analysis 4 (1970b), 657-675.

Markowitz, H. Portfolio selection. Journal of Finance 7 (1952), 77-91.
Markowitz, H. Portfolio Selection. New York: Wiley, 1959.
Marley, A. A. J. Some probabilistic models of simple choice and
ranking. Journal of Mathematical Psychology 5 (1968), 311-332.
Marschak, J. Rational behavior, uncertain prospects, and measurable
utility. Econometrica 18 (1950), 111-141.
Marschak, J. Binary-choice constraints and random utility indicators.
Mathematical Methods in the Social Sciences 1959. Stanford,
California: Stanford University Press, 1960, 312-329.
May, K. O. Intransitivity, utility, and the aggregation of preference
patterns. Econometrica 22 (1954), 1-13.
Meyer, J. Choice among distributions. Journal of Economic Theory 14
(1977), 326-336.
Meyer, R. F. On the relationship among the utility of assets, the
utility of consumption, and investment strategy in an uncertain,
but time-invariant, world. In Lawrence, J. (ed.), Proceedings of
the Fifth International Conference on Operational Research.
London: Tavistock, 1970, 627-648.
Meyer, R. F. Preferences over time. In Bell, D. E., Keeney, R. L. and
Raiffa, H. (eds.), Conflicting Objectives. New York: Wiley, 1977.
Meyer, R. F. and Pratt, J. W. The consistent assessment and fairing
of preference functions. IEEE Transactions on Systems Science and
Cybernetics SSC-4 (1968), 270-278.
Milnor, J. Games against nature. In Thrall, R. M., Coombs, C. H.
and Davis, R. L. (eds.), Decision Processes. New York: Wiley,
1954, 49-59.
Mirkin, B. G. Description of some relations on the set of real-line
intervals. Journal of Mathematical Psychology 9 (1972), 243-252.
Morrison, H. W. Intransitivity of paired comparison choices. Ph.D.
dissertation, The University of Michigan, 1962.
Narens, L. Minimal conditions for additive conjoint measurement and
qualitative probability. Journal of Mathematical Psychology 11
(1974), 404-430.
Narens, L. and Luce, R. D. The algebra of measurement. Journal of
Pure and Applied Algebra 8 (1976), 197-233.
Payne, J. W. Alternative approaches to decision making under risk:
moments versus risk dimensions. Psychological Bulletin 80
(1973), 439-453.
Payne, J. W. Relation of perceived risk to preferences among gambles.
Journal of Experimental Psychology: Human Perception and
Performance 1 (1975), 86-94.

Pekelman, D. and Sen, S. Mathematical programming models for the
determination of attribute weights. Management Science 20
(1974), 1217-1229.
Peleg, B. Efficient random variables. Journal of Mathematical
Economics 2 (1975), 243-252.
Pfanzagl, J. A general theory of measurement: applications to utility.
Naval Research Logistics Quarterly 6 (1959), 283-294.
Philip, J. Algorithms for the vector maximization problem. Mathe-
matical Programming 2 (1972), 207-229.

Plott, C. R., Little, J. T. and Parks, R. P. Individual choice when
objects have "ordinal" properties. Review of Economic Studies
42 (1975), 403-413.
Pollak, R. A. Additive von Neumann-Morgenstern utility functions.
Econometrica 35 (1967), 485-494.
Pollak, R. A. The risk independence axiom. Econometrica 41 (1973),
Pollatsek, A. and Tversky, A. A theory of risk. Journal of Mathe-
matical Psychology 7 (1970), 540-553.
Porter, R. B. Semivariance and stochastic dominance: a comparison.
American Economic Review 64 (1974), 200-204.
Pratt, J. W. Risk aversion in the small and in the large.
Econometrica 32 (1964), 122-136.
Pratt, J. W., Raiffa, H. and Schlaifer, R. The foundations of
decision under uncertainty: an elementary exposition. Journal of
the American Statistical Association 59 (1964), 353-375.
Pruitt, D. G. Pattern and level of risk in gambling decisions.
Psychological Review 69 (1962), 187-201.
Quandt, R. E. A probabilistic theory of consumer behavior. Quarterly
Journal of Economics 70 (1956), 507-536.
Quirk, J. P. and Saposnik, R. Admissibility and measurable utility
functions. Review of Economic Studies 29 (1962), 140-146.
Raiffa, H. Preferences for multi-attributed alternatives. Memorandum
RM-5868-DOT. Santa Monica, California: The Rand Corporation, 1969.
Ramsey, F. P. Truth and probability. In F. P. Ramsey, The Foundations
of Mathematics and Other Logical Essays. New York: Harcourt,
Brace and Co., 1931. Reprinted in Kyburg, H. E. and Smokler,
H. E. (eds.), Studies in Subjective Probability. New York:
Wiley, 1964, 61-92.
Rapoport, A. and Wallsten, T. S. Individual decision behavior. Annual
Review of Psychology 23 (1972), 131-176.
Reder, M. W. A reconsideration of the marginal productivity theory.
Journal of Political Economy 55 (1947), 450-458.
Richter, M. K. Revealed preference theory. Econometrica 34 (1966),
Roberts, F. S. On nontransitive indifference. Journal of Mathematical
Psychology 7 (1970), 243-258.
Roberts, F. S. Homogeneous families of semiorders and the theory of
probabilistic consistency. Journal of Mathematical Psychology 8
(1971), 248-263.
Roskies, R. A measurement axiomatization for an essentially multi-
plicative representation of two factors. Journal of Mathematical
Psychology 2 (1965), 266-276.
Rothschild, M. and Stiglitz, J. E. Increasing risk: I. A definition.
Journal of Economic Theory 2 (1970), 225-243.
Roy, A. D. Safety first and the holding of assets. Econometrica 20
(1952), 431-449.
Roy, B. Problems and methods with multiple objective functions.
Mathematical Programming 1 (1971), 239-266.

Roy, B. How outranking relation helps multiple criteria decision
making. In Cochrane, J. L. and Zeleny, M. (eds.), Multiple
Criteria Decision Making. Columbia, South Carolina: University
of South Carolina Press, 1973, 179-201.
Roy, B. Critères multiples et modélisation des préférences (l'apport
des relations de surclassement). Revue d'Économie Politique 84
(1974), 1-44.
Samuelson, P. A. A note on the pure theory of consumer's behaviour.
Economica 5 (1938), 61-71.
Samuelson, P. A. Consumption theory in terms of revealed preference.
Economica 15 (1948), 243-253.
Samuelson, P. A. General proof that diversification pays. Journal of
Financial and Quantitative Analysis 2 (1967), 1-13.
Samuelson, P. A. The fundamental approximation theorem of portfolio
analysis in terms of means, variances and higher moments. The
Review of Economic Studies 37 (1970), 537-542.
Samuelson, P. A. and Merton, R. C. Generalized mean-variance trade-
offs for best perturbation corrections to approximate portfolio
decisions. The Journal of Finance 29 (1974), 27-40.
Saska, J. Linearni multiprogramovani. Ekonomicko-Matematicky Obzor
4 (1968), 359-373.
Savage, L. J. The theory of statistical decision. Journal of the
American Statistical Association 46 (1951), 55-67.
Savage, L. J. The Foundations of Statistics. New York: Wiley, 1954;
New York: Dover Publications, 1972.
Savage, L. J. Elicitation of personal probabilities and expectations.
Journal of the American Statistical Association 66 (1971), 783-
Sayeki, Y. Allocation of importance: an axiom system. Journal of
Mathematical Psychology 9 (1972), 55-65.
Sayeki, Y. and Vesper, K. H. Allocation of importance in a hierarchical
goal structure. Management Science 19 (1973), 667-675.
Schwartz, T. Rationality and the myth of the maximum. Nous 6 (1972),
Scott, D. Measurement structures and linear inequalities. Journal of
Mathematical Psychology 1 (1964), 233-247.
Scott, D. and Suppes, P. Foundational aspects of theories of measure-
ment. Journal of Symbolic Logic 23 (1958), 113-128.
Sen, A. K. Collective Choice and Social Welfare. San Francisco:
Holden-Day, 1970.
Shafer, W. J. Preference relations for rational demand functions.
Journal of Economic Theory 11 (1975), 44 b -455.
Sharpe, W. F. Capital asset prices: a theory of market equilibrium
under conditions of risk. Journal of Finance 19 (1964), 425-442.
Sherman, S. On a theorem of Hardy, Littlewood, Polya, and Blackwell.
Proceedings of the National Academy of Sciences 37 (1951), 826-831.
Shubik, M. Objective functions and models of corporate optimization.
Quarterly Journal of Economics 75 (1961), 345-375.
Siegel, S. A method for obtaining an ordered metric scale.
Psychometrika 21 (1956), 207-216.

Simon, H. A. A behavioral model of rational choice. Quarterly
Journal of Economics 69 (1955), 99-118.
Slovic, P. The relative influence of probabilities and payoffs upon
perceived risk of a gamble. Psychonomic Science 9 (1967), 223-
Slovic, P., Fischhoff, B. and Lichtenstein, S. Behavioral decision
theory. Annual Review of Psychology (1977).
Slovic, P. and Lichtenstein, S. Relative importance of probabilities
and payoffs in risk taking. Journal of Experimental Psychology
78 (1968), 1-18.
Slovic, P. and Lichtenstein, S. Comparison of Bayesian and regression
approaches to the study of information processing in judgment.
Organizational Behavior and Human Performance 6 (1971), 649-744.
Smith, L. H., Lawless, R. W. and Shenoy, B. Evaluating multiple
criteria--models for two-criteria situations. Decision Sciences
5 (1974), 587-596.
Srinivasan, V. and Shocker, A. D. Linear programming techniques for
multidimensional analysis of preferences. Psychometrika 38
(1973a), 337-369.
Srinivasan, V. and Shocker, A. D. Estimating the weights for multiple
attributes in a composite criterion using pairwise judgments.
Psychometrika 38 (1973b), 473-493.
Srinivasan, V., Shocker, A. D. and Weinstein, A. G. Measurement of
a composite criterion of managerial success. Organizational
Behavior and Human Performance 9 (1973), 147-167.
Stimson, D. H. Utility measurement in public health decision making.
Management Science 16 (1969), B17-B30.
Stone, B. K. A general class of three-parameter risk measures.
Journal of Finance 28 (1973), 675-685.
Strassen, V. The existence of probability measures with given
marginals. Annals of Mathematical Statistics 36 (1965), 423-439.
Suppes, P. The role of subjective probability and utility in decision-
making. Proceedings of the Third Berkeley Symposium on
Mathematical Statistics and Probability 5 (1956), 61-73.
Suppes, P. and Winet, M. An axiomatization of utility based on the
notion of utility differences. Management Science 1 (1955),
Suppes, P. and Zinnes, J. L. Basic measurement theory. In Luce,
R. D., Bush, R. R. and Galanter, E. (eds.), Handbook of
Mathematical Psychology, Vol. I. New York: Wiley, 1963, 1-76.
Tell, B. A Comparative Study of Some Multiple-Criteria Methods.
Stockholm: Economic Research Institute, Stockholm School of
Economics, 1976.
Tesfatsion, L. Stochastic dominance and the maximization of expected
utility. Review of Economic Studies 43 (1976), 301-315.
Thurstone, L. L. The indifference function. Journal of Social
Psychology 2 (1931), 139-167.
Tobin, J. The theory of portfolio selection. In Hahn, F. H. and
Brechling, F. P. R. (eds.), The Theory of Interest Rates. London:
Macmillan, 1965, 3-51.

Tsiang, S. C. The rationale of the mean-standard deviation analysis,
skewness preference, and the demand for money. American Economic
Review 62 (1972), 354-371.
Tsiang, S. C. The rationale of the mean-standard deviation analysis:
reply and errata for original article. American Economic Review
64 (1974), 442-450.
Tversky, A. Finite additive structures. Mathematical Psychology
Program, University of Michigan, 1964.
Tversky, A. Intransitivity of preferences. Psychological Review
76 (1969), 31-48.
Tversky, A. Elimination by aspects: a theory of choice. Psychol-
ogical Review 79 (1972a), 281-299.
Tversky, A. Choice by elimination. Journal of Mathematical Psychology
9 (1972b), 341-367.
Tversky, A. and Kahneman, D. Availability: a heuristic for judging
frequency and probability. Cognitive Psychology 5 (1973),
Tversky, A. and Kahneman, D. Judgment under uncertainty: heuristics
and biases. Science 185 (1974), 1124-1131.
Van Moeseke, P. Stochastic linear programming. Yale Economic Essays
5 (1965), 196-254.
Van Moeseke, P. Towards a theory of efficiency. In Quirk, J. and
Zarley, A. M. (eds.), Papers in Quantitative Economics.
St. Louis, Missouri: University of Kansas Press, 1968, 1-30.
Vickson, R. G. Stochastic dominance for decreasing absolute risk
aversion. Journal of Financial and Quantitative Analysis 10
(1975a), 799-811.
Vickson, R. G. Stochastic dominance tests for decreasing absolute
risk aversion. I. Discrete random variables. Management Science
21 (1975b), 1438-1446.
Vickson, R. G. Stochastic orderings from partially known utility
functions. University of Waterloo, 1976.
Vickson, R. G. Stochastic dominance tests for decreasing absolute
risk-aversion. II. General random variables. Management Science
23 (1977), 478-489.
von Neumann, J. and Morgenstern, O. Theory of Games and Economic
Behavior. Princeton, New Jersey: Princeton University Press,
Wainer, H. Estimating coefficients in linear models: it don't make
no nevermind. Psychological Bulletin 83 (1976), 213-217.
Wald, A. Statistical Decision Functions. New York: Wiley, 1950.
Weinstein, A. A. Individual preference intransitivity. Southern
Economic Journal 34 (1968), 335-343.
Whitmore, G. A. Third-degree stochastic dominance. American Economic
Review 60 (1970), 457-459.
Whitmore, G. A. and Findlay, M. C. (eds.) Stochastic Dominance: An
Approach to Decision making under Risk. Lexington, Massachusetts:
D. C. Heath and Co., 1977.
Wilkie, W. L. and Pessemier, E. A. Issues in marketing's use of
multi-attribute attitude models. Journal of Marketing Research
10 (1973), 428-441.

Williams, A. C. and Nassar, J. I. Financial measurement of capital
investments. Management Science 12 (1966), 851-864.
Winkler, R. L. The assessment of prior distributions in Bayesian
analysis. Journal of the American Statistical Association 62
(1967a), 776-800.
Winkler, R. L. The quantification of judgment: some methodological
suggestions. Journal of the American Statistical Association 62
(1967b), 1105-1120.
Winkler, R. L. and Murphy, A. H. Experiments in the laboratory and
the real world. Organizational Behavior and Human Performance
10 (1973), 252-270.
Yu, P. L. Cone convexity, cone extreme points, and nondominated
solutions in decision problems with multiobjectives. Journal of
Optimization Theory and Applications 14 (1974), 319-377.
Yu, P. L. and Zeleny, M. The set of all nondominated solutions in
linear cases and a multicriteria simplex method. Journal of
Mathematical Analysis and Applications 49 (1975), 430-468.
Yu, P. L. and Zeleny, M. Linear multiparametric programming by multi-
criteria simplex method. Management Science 23 (1976), 159-170.
Zeleny, M. Linear Multiobjective Programming. New York: Springer-
Verlag, 1974.
Zeleny, M. MCDM bibliography - 1975. In Zeleny, M. (ed.), Multiple
Criteria Decision Making: Kyoto 1975. New York: Springer-
Verlag, 1976a, 291-321.
Zeleny, M. The attribute-dynamic attitude model (ADAM). Management
Science 23 (1976b), 12-26.
Zionts, S. and Wallenius, J. An interactive programming method for
solving the multiple criteria problem. Management Science
22 (1976), 652-663.

T. Gal*

*University of Aachen, Dept. of Economics, 51 Aachen, W.-Germany

At the Institute of Economics of the University of Aachen we have an
OR group, some members of which also deal with MCP. In this paper
some of the recent results of this group are presented briefly. These
results have in part already been published; the rest are available as
working papers.
Hence, this paper is not an overview in the usual sense. The sections
are - starting with the second one - entitled in correspondence with
the references [1] - [6]:

1. Some Notation
2. A Method for Determining the Set of all Efficient Solutions
   to a Linear Vectormaximum Problem (LVMP) [1]
3. Nonessential Objective Functions in an LVMP [2]
4. Postefficient Analysis [3]
5. Fuzzy Programming and MCP [4]
6. How to Deal with Degeneracy [5]
7. Generalized Saddle Point Theory of MCP [6]

(1) Ax ≤ b <-> Σ_{j=1}^n a_ij x_j ≤ b_i, i = 1, ..., m;
    x ≥ 0  <-> x_j ≥ 0, j = 1, ..., n.

(2) X = {x ∈ R^n | Ax ≤ b, x ≥ 0} ≠ ∅.

Slacks: x_u = (x_{n+1}, ..., x_{n+m})^T, x_{n+i} ≥ 0 for all i.

y = (x, x_u)^T and Ā = (A | I), I - identity matrix.

Using this, from (1) follows

(3) Āy = b <-> Σ_{j=1}^n a_ij x_j + x_{n+i} = b_i, all i;
    y ≥ 0  <-> x_j ≥ 0, all j, x_{n+i} ≥ 0, all i.

(4) X̄ = {y ∈ R^{n+m} | Āy = b, y ≥ 0}.

(From X ≠ ∅ follows X̄ ≠ ∅.)

Rearrange the columns of Ā such that Ā' = (B_s | N_s), where
B_s = (ā^{j_1}, ..., ā^{j_m}), the ā^{j_i} being columns of Ā. The
matrix B_s is uniquely characterized by the basis-index

(5) ρ_s = {j_1, ..., j_m},

and y is rearranged accordingly as y = (y_B, y_N)^T:

(y_B^s, y_N)^T ∈ X̄ is a feasible solution to (3).

Set y_N = 0; then

(6) (y_B^s, 0)^T ∈ X̄ is a complete basic feasible solution x_s^{(0)} to (3).

The vector y_B^s may consist of some x_j and/or some x_{n+i}. Denote
by x^s that part of x_s^{(0)} ∈ R^{m+n} such that x^s consists of the
values of all variables x_j related to ρ_s (or: to B_s). This implies
that x^s is an extreme point (EP) of X and that

(7) x^s <-> x_s^{(0)}.

Let c^k ∈ R^{m+n} be such that c_j^k = 0 for all j = n+1, ..., n+m and
for each k ∈ {1, ..., K}, and let

(8) z_k(y) = (c^k)^T y,  C = (c^1, ..., c^K),

(9) Z(y) = C^T y = (z_1(y), ..., z_K(y))^T.

Regarding x^s <-> x_s^{(0)}, to each z_k(y) the original z_k(x) = (c^{·k})^T x,
c^{·k} ∈ R^n, is assigned, and vice versa.

The linear vectormaximum problem (LVMP) is to

(10) "max" Z(y) <-> "max" C^T y s.t. Āy = b, y ≥ 0.

The set

(11) E = {ȳ ∈ X̄ | ∄ y ∈ X̄ such that C^T y ≥ C^T ȳ and C^T y ≠ C^T ȳ}

is the set of all efficient solutions to (10).

Theorem 1 (Efficiency theorem): y° ∈ X̄ is an efficient solution to
(10) iff ∃ t° > 0 in

(12) max_{y ∈ X̄} t^T C^T y

such that y° ∈ X̄ is an optimal solution to (12) with respect to
t = t° > 0. [Proof: Focke 1973, see [1]].
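On a finite candidate set (for instance the outcome vectors at the extreme points of X̄), the efficiency definition (11) and the weighted-sum problem (12) can be sketched in Python; the numbers below are hypothetical, not taken from the paper's example.

```python
def dominates(z1, z2):
    """Componentwise z1 >= z2 with z1 != z2 (the relation in (11))."""
    return all(a >= b for a, b in zip(z1, z2)) and z1 != z2

def efficient(outcomes):
    """Nondominated vectors among a finite candidate set."""
    return [z for z in outcomes if not any(dominates(w, z) for w in outcomes)]

def weighted_sum_maximizers(outcomes, t):
    """Maximizers of sum_k t_k z_k over the candidate set (problem (12))."""
    score = lambda z: sum(tk * zk for tk, zk in zip(t, z))
    best = max(score(z) for z in outcomes)
    return [z for z in outcomes if score(z) == best]

# hypothetical outcome vectors Z(y) at four extreme points
Z = [(0, 4), (2, 3), (3, 1), (1, 1)]
eff = efficient(Z)                       # (1, 1) is dominated
for t in [(1, 1), (1, 3), (3, 1)]:       # strictly positive weight vectors
    assert all(z in eff for z in weighted_sum_maximizers(Z, t))
```

As Theorem 1 asserts, every strictly positive weight vector picks out only efficient points.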



For the sake of simplicity and brevity, no special cases are dealt
with in this paper (unboundedness, degeneracy).

Let

γ(y^1, ..., y^S) = {y ∈ X̄ | y = Σ_{s=1}^S α_s y^s, Σ_{s=1}^S α_s = 1,
                    α_s ≥ 0 for all s}

be the convex hull of y^1, ..., y^S.

Definition 1: Let H be a supporting hyperplane to X, and F be a
convex polyhedron such that 1 ≤ dim F ≤ n - 1, and

(1) let x^1, ..., x^S be all EP-s of F, and

(2) F = γ(x^1, ..., x^S).

Then: F is said to be a face of X iff ∃H such that F ⊂ H.

Note that from x^s <-> x_s^{(0)} follows H <-> H̄, F <-> F̄.

Theorem 2: All y ∈ F̄ are efficient solutions to (10) iff ∃ t° > 0
such that the y^s are, for all s = 1, ..., S, optimal solutions to (12)
with respect to t = t° > 0. [Proof: Gal 1976, see [1]].

Definition 2: An undirected graph G = (T, Γ) is said to be generated
by the LVMP (10) iff the arc-set Γ and the node-set T satisfy:

(1) the node ρ_s ∈ T iff B_s is an efficient basis,

(2) between two nodes ρ_s, ρ_{s'} ∈ T there exists an arc iff y^s, y^{s'}
    are efficient neighbors,

(3) Γ(ρ_s) is the set of all nodes ρ adjacent to ρ_s,

(4) to every node ρ_s ∈ T there is assigned at least one
    t satisfying Theorem 2.

Denote finally

Z = {Z(x) ∈ R^K | x ∈ X} <-> Z̄ = {Z(y) ∈ R^K | y ∈ X̄}.

A geometrical illustration of Theorems 1 and 2 is given in Fig. 1 and
Fig. 2.

Fig. 1.

Fig. 2.

Consider Fig. 3; here a convex polyhedron in IR 3 is displayed. The

shaded areas are efficient solutions. The method will now be
illustrated via this example.

Fig. 3.

PHASE 1: Assume y^1, i.e. ρ_1, is found. Because of y^1 <-> x^1, we
have x^1 as well.

Γ(ρ_1) = {ρ_2, ρ_5}; γ(y^1, y^2) ⊂ E, γ(y^1, y^5) ⊂ E, i.e.
γ(x^1, x^2) and γ(x^1, x^5) are efficient edges of X.

Lists: W_1 = {ρ_2, ρ_5}, Y_1 = {t_1°}.

Choose ρ_2 ∈ W_1; then γ(y^2, y^3) ⊂ E.

Choose ρ_5 ∈ W_1. Then Γ(ρ_5) = {ρ_1, ρ_4, ρ_6}, which implies
γ(y^4, y^5) ⊂ E, γ(y^5, y^6) ⊂ E. Lists:

V_2 = {ρ_1, ρ_2, ρ_5}, W_2 = {ρ_3, ρ_4, ρ_6}, Y_2 = {t_1°, t_2°, t_3°, t_4°}.

Proceeding in this way, the last lists become:

V_7 = {ρ_1, ..., ρ_8}, W_7 = ∅, which means STOP,

Y_7 = {t_1°, t_2°, ..., t_11°}.

PHASE 2: Compare the t_s° for y^1, ..., y^5. This implies that
γ(y^1, ..., y^5) = F_1 ⊂ E. Compare the t_s° for y^3, y^4, y^7, y^8,
which implies that γ(y^3, y^4, y^7, y^8) = F_2 ⊂ E, and finally
compare the t_s° for y^5, y^6, which implies that γ(y^5, y^6) is an
efficient edge (⊂ E).

In Fig. 4 the graph is illustrated.


Fig. 4.
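The Phase-1 bookkeeping with the lists V_s (examined nodes) and W_s (known but not yet examined nodes) amounts to a breadth-first traversal of the graph G. A minimal sketch, with a hypothetical adjacency structure standing in for the graph of Fig. 4:

```python
from collections import deque

def enumerate_efficient_graph(start, neighbors):
    """V = examined nodes, W = known-but-unexamined nodes; STOP when W is empty.
    neighbors(p) plays the role of Gamma(p) restricted to efficient neighbors."""
    V, W, seen = [], deque([start]), {start}
    while W:
        p = W.popleft()
        V.append(p)
        for q in neighbors(p):
            if q not in seen:
                seen.add(q)
                W.append(q)
    return V

# hypothetical 6-node adjacency (node s stands for the basis-index rho_s)
adj = {1: [2, 5], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [1, 4, 6], 6: [5]}
nodes = enumerate_efficient_graph(1, lambda p: adj[p])
assert sorted(nodes) == [1, 2, 3, 4, 5, 6]   # every efficient basis is reached
```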


Definition 3: Let C' = {c^1, ..., c^K}. The convex cone

K(C') = {c ∈ R^n | c = Σ_k α_k c^k, c^k ∈ C', α_k ≥ 0}

is said to be the convex cone spanned by C'.

Definition 4: C'' ⊂ C' is called a minimal spanning system of
K(C') iff

(1) K(C'') = K(C'), and

(2) for each C''' ⊂ C'' it follows that K(C''') ⊂ K(C'').

Theorem 3: Let C' = {c^1, ..., c^K} be the set of all columns of C
and let C'' = {c^1, ..., c^U}, U ≤ K, be a minimal spanning system
of K(C'). Denote Z'' = {z_j | z_j(x) = (c^j)^T x, c^j ∈ C''}. Then

E(Z) = E(Z'')

(where Z is the original set of the z_j - s).

[Proof: Leberling 1976, see [2]].

Definition 5: If U < K, then any z_j ∉ Z'' is called nonessential.

Theorem 4: z_r, r ∈ {1, ..., K} fixed, is nonessential iff

(1) ∀x ∈ E(Z) ∃t ∈ R^{K-1}, t > 0, such that x is an optimal
    solution to

    max_{x ∈ X} Σ_{k≠r} t_k z_k(x), and

(2) ∀x ∈ E(Z - {z_r}) ∃t ∈ R^K, t > 0, such that x is an optimal
    solution to

    max_{x ∈ X} Σ_{k=1}^K t_k z_k(x).

[Proof: Leberling 1976, see [2]].

In Fig. 5 a possible simple case is illustrated. Here
C' = {c^1, ..., c^5}; a minimal spanning system is e.g.
C'' = {c^1, c^5}. The objective functions z_2(x) and z_3(x) are
absolutely nonessential, z_4(x) and z_5(x) are relatively nonessential.
This means that the former can both be deleted without influencing the
set E; of the latter only a single one can be deleted. Cancelling both
would mean changing the set E.

A minimal system of vectors c^k which defines E as given is called a
minimal cover. Two minimal covers in our example are: C_1 = {c^1, c^4},
C_2 = {c^1, c^5}.

Fig. 5.

A method for determining nonessential objective functions consists

of three stages:

STAGE 1: This is a preliminary test that is illustrated in the

following example.

     c1     c2     c3     c4     c5     c6     c7
     -5      1x     1      9      8      4      1
      4      1      4      5      5      5      2
     -5      1      1      9      8      4      1
      9x     0      3     -4     -3      1      1
      0      1     8/3   61/9x  19/3   41/9   14/9     (i)
      1      0     1/3   -4/9   -1/3    1/9    1/9

     c1     c2     c4     c5
      0     9/61    1    57/61                         (ii)
      1     4/61    0     5/61

In Table (i) of this tableau it is seen that c^3, c^6 and c^7 can
respectively be represented as a nonnegative linear combination of c^1
and c^2. The objective functions belonging to c^3, c^6 and c^7 are
nonessential and can be deleted.
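The Stage-1 test can be checked numerically: with two criteria each column is a vector in R^2, so membership in the cone spanned by c^1 and c^2 reduces to a 2x2 linear system with a nonnegative solution. The columns below are read off the first tableau:

```python
def nonneg_combination_2d(c, c1, c2, eps=1e-9):
    """Solve c = a*c1 + b*c2 (2x2 system); return (a, b) iff a, b >= 0."""
    det = c1[0] * c2[1] - c1[1] * c2[0]
    if abs(det) < eps:
        return None
    a = (c[0] * c2[1] - c[1] * c2[0]) / det
    b = (c1[0] * c[1] - c1[1] * c[0]) / det
    return (a, b) if a >= -eps and b >= -eps else None

# columns as printed in the first Stage-1 tableau
cols = {'c1': (-5, 4), 'c2': (1, 1), 'c3': (1, 4), 'c4': (9, 5),
        'c5': (8, 5), 'c6': (4, 5), 'c7': (1, 2)}
nonessential = {name for name, c in cols.items()
                if name not in ('c1', 'c2')
                and nonneg_combination_2d(c, cols['c1'], cols['c2'])}
assert nonessential == {'c3', 'c6', 'c7'}   # c4 and c5 lie outside the cone
```

For instance c^7 = (1/9)c^1 + (14/9)c^2, matching the fractions in Table (i).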

From Table (ii) of this tableau it follows that the objective functions
z_2(x) and z_5(x) are nonessential, and a minimal spanning system is in
this case C'' = {c^1, c^4}.

Another example:

     c1     c2     c3     c4
      1x    -1      2      3
      2     -1      1      2
      1     -1      2      3
      0      1     -3x    -4
      1    -1/3     0     1/3
      0    -1/3     1     4/3

From the last table of this tableau it follows:

c^2 = (-1/3)c^1 - (1/3)c^3, i.e. 3c^2 + c^1 + c^3 = 0. In [2] the
following theorem is proved: if there exists an α > 0 solving
Σ_{k=1}^K α_k c^k = 0, then X = E. This is the case in our example,
hence nothing more is to be done.

STAGE 2: Determine E using the above method (see Section 2), modified
for finding nonessential objective functions, and proceed with the
z_j ∈ Z''.

STAGE 3: By comparison, determine systems of absolutely or relatively
nonessential objective functions (no computations needed).


Definition 6: A change of some or all initial data of the LVMP (10)
is said to be admissible iff the graph G generated by (10) does not
change.

Definition 7: Postefficient analysis is to determine a region of
admissible changes.

4.1 Postefficient analysis regarding C

The problem is to determine a region N ⊂ R^K of u ∈ R^K such that the
graph G generated by

"max"_{y ∈ X̄} Z(y, u)

remains the same for all u ∈ N. Or: ∀u ∈ N: G(u) = G(0).

Definition 8: An EP x^s ∈ X, x^s ∈ E, is called invariant if all its
neighbors in the usual sense are efficient EP-s. Otherwise it is
called non-invariant.

Let S̃ ⊂ {s | s = 1, ..., S} be such that ∀s ∈ S̃, x^s is non-invariant,
and

R_s = {u ∈ R^K | x^s, s ∈ S̃ fixed, remains non-invariant efficient}.

Theorem 5: The set R ⊂ R^K of all admissible parameters u is uniquely
determined by

R = ∩_{s ∈ S̃} R_s.

[Proof: Gal, Leberling 1976, see [3]].

The set R need not be connected. This follows from the fact that in
determining R_s, s ∈ S̃, we are faced with a problem in which parameters
appear in the coefficient matrix.

In this case it can be shown that E(Z(u)) = E ∀u ∈ R.

In Fig. 6 this case is illustrated. Considering u = (u_1, u_2)^T, the
set R is as in Fig. 7.

Fig. 6.

Fig. 7.
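A crude numerical analogue of this postefficient analysis can be sketched by replacing graph invariance with invariance of the efficient vertex set under a perturbation u of one objective row; the feasible set and objectives below are toy data, not the paper's example:

```python
def dominates(z1, z2):
    return all(a >= b for a, b in zip(z1, z2)) and z1 != z2

def efficient_indices(Z):
    return {i for i, z in enumerate(Z) if not any(dominates(w, z) for w in Z)}

# extreme points of a toy feasible set and two objective rows (hypothetical)
X = [(0, 2), (2, 2), (3, 0), (0, 0)]
def outcomes(C):
    return [tuple(sum(ck[j] * x[j] for j in range(2)) for ck in C) for x in X]

base = efficient_indices(outcomes([(1, 0), (0, 1)]))
for u in (0.0, 0.1, 0.4):        # these perturbations turn out to be admissible
    assert efficient_indices(outcomes([(1 + u, 0), (0, 1)])) == base
```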

4.2 Postefficient analysis regarding b

The problem is to determine a region Λ ⊂ R^V of λ ∈ R^V such that the
graph G generated by

"max"_{y ∈ X̄(λ)} Z(y, λ) = C^T y

remains the same ∀λ ∈ Λ, or: ∀λ ∈ Λ: G(λ) = G(0), where

X̄(λ) = {y ∈ R^{m+n} | Āy = b(λ), y ≥ 0, b(λ) = b + Lλ}.

Theorem 6: G(λ) = G(0) iff λ ∈ Λ = ∩_s Λ_s.

[Proof: Gal, Leberling 1976, see [3]].

Corollary 6.1: Λ is an open convex set. [Proof: ibid.]

In this case E(X̄(λ)) ≠ E, though G(λ) = G(0).


Consider: min z = cx, s.t. Ax ≥ b, x ≥ 0.

Fuzzy version: cx ≲ z̄, s.t. Ax ≳ b, x ≥ 0.

Define f: R^{m+1} -> [0,1] such that f = 0 if "Ax ≥ b and z̄ ≥ cx" is
strongly violated, and f = 1 if "Ax ≥ b and z̄ ≥ cx" is satisfied.

Introduce f(cx, Ax) = f(Bx) = Min_i f_i((Bx)_i), x ≥ 0, with

              1                           for (Bx)_i ≤ b_i,
f_i((Bx)_i) = 1 - ((Bx)_i - b_i)/d_i      for b_i < (Bx)_i ≤ b_i + d_i,
              0                           for (Bx)_i > b_i + d_i,

where d_i is a subjectively chosen constant, f_i((Bx)_i) is the
membership function of the i-th row, and Min_i f_i((Bx)_i) is the
fuzzy decision.

After some algebraic rearrangements,

Max_{x ≥ 0} Min_i (b'_i - (B'x)_i)  or  Max μ_D(x)

is equivalent to Max λ, s.t. λ ≤ b'_i - (B'x)_i, i = 0, 1, ..., m, x ≥ 0.

μ_{A∩B} = μ_A · μ_B (simplified form)

An example: "max" Z(x) = (z_1(x), z_2(x))^T

s.t. -x_1 + 3x_2 ≤ 21
      x_1 + 3x_2 ≤ 27
     4x_1 + 3x_2 ≤ 45
     3x_1 +  x_2 ≤ 30
      x_1 ≥ 0, x_2 ≥ 0

(see Fig. 8).

Fig. 8.

The membership functions are

          0                     if z_1(x) ≤ -3,
μ_1(x) =  (z_1(x) + 3)/17       if -3 < z_1(x) ≤ 14,
          1                     if z_1(x) > 14,

          0                     if z_2(x) ≤ 7,
μ_2(x) =  (z_2(x) - 7)/14       if 7 < z_2(x) ≤ 21,
          1                     if z_2(x) > 21.

The problem with fuzzy objective functions can be converted into:

max λ

s.t. λ ≤ -.05882x_1 + .117x_2 + .11764
     λ ≤  .1429x_1 + .0714x_2 - .5
     -x_1 + 3x_2 ≤ 21
      x_1 + 3x_2 ≤ 27
     4x_1 + 3x_2 ≤ 45
     3x_1 +  x_2 ≤ 30
      x_1 ≥ 0, x_2 ≥ 0

Using the so-called minimum operator, the solution is geometrically
presented in Fig. 9.

Using the so-called product-operator, Fig. 10 results.
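Both operators can be compared by brute force on a grid over the feasible region. The objective functions themselves are not fully legible in the source, so z_1(x) = -x_1 + 2x_2 and z_2(x) = 2x_1 + x_2 are assumptions reverse-engineered from the printed coefficients (1/17 ≈ .0588, 1/14 ≈ .0714):

```python
def clip01(v):
    return max(0.0, min(1.0, v))

# z1, z2 below are assumptions, not confirmed by the source
def mu1(x1, x2):                 # assumed z1(x) = -x1 + 2*x2, fuzzy range (-3, 14]
    return clip01((-x1 + 2 * x2 + 3) / 17.0)

def mu2(x1, x2):                 # assumed z2(x) = 2*x1 + x2, fuzzy range (7, 21]
    return clip01((2 * x1 + x2 - 7) / 14.0)

def feasible(x1, x2):            # crisp constraints; x1, x2 >= 0 via the grid
    return (-x1 + 3 * x2 <= 21 and x1 + 3 * x2 <= 27 and
            4 * x1 + 3 * x2 <= 45 and 3 * x1 + x2 <= 30)

def argmax(op, step=0.05):
    """Grid search maximizing op(mu1, mu2) over the feasible region."""
    best, arg = -1.0, None
    x1 = 0.0
    while x1 <= 10.0:
        x2 = 0.0
        while x2 <= 10.0:
            if feasible(x1, x2):
                v = op(mu1(x1, x2), mu2(x1, x2))
                if v > best:
                    best, arg = v, (x1, x2)
            x2 += step
        x1 += step
    return best, arg

lam_min, x_min = argmax(min)                    # minimum operator (Fig. 9)
lam_prod, x_prod = argmax(lambda a, b: a * b)   # product operator (Fig. 10)
```

The two operators generally select different compromise solutions, which is exactly the contrast between Fig. 9 and Fig. 10.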






Let x^s <-> x_s^{(0)} be degenerate. Then at least two distinct bases
(basis-indices, tableaus, basic feasible solutions) ρ_s^i can be
assigned to x_s^{(0)}. Every degenerate EP then generates a graph (a
subgraph of the total graph assigned to the given convex polyhedron):

P^s = {ρ_s^i | i = 1, ..., k, k > 1}.

To find all neighboring EP-s of a degenerate EP it is not necessary
to find all nodes ρ_s^i ∈ P^s. It is proven that ∃ P̃^s ⊂ P^s such
that, knowing all ρ_s^i ∈ P̃^s, all neighbors can be determined.

In solving e.g. an LVMP, degeneracy may cause difficulties, as is
mentioned in

P. L. Yu & M. Zeleny: "Linear Multiparametric Programming
by Multicriteria Simplex Method", Management Science 23,
1976, No. 2.

We shall illustrate geometrically how to avoid a part of such
possible difficulties in terms of generating all EP-s of a convex
polytope X ⊂ R^3 (the example shown is worked out by Joe Ecker).

In Fig. 11 the set X is represented. The corresponding (total) graph
is shown in Fig. 12. Using the parts P̃^s ⊂ P^s, the graph in Fig. 13
results (this is a planar graph).

Representing the subsets P̃^s by respective single nodes (representative
nodes), the graph in Fig. 14 results.

Fig. 13.

Fig. 14.

In the example given, to generate all EP-s of X it suffices to
generate 8 nodes (tableaus, basic feasible solutions) instead of all
14 existing nodes.

In this case, to generate all neighbors of a degenerate EP it
suffices to find 2 nodes instead of 4.
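The representative-node construction can be sketched as collapsing all bases of one extreme point into a single node of a quotient graph; the basis/EP assignment below is hypothetical, not Ecker's example:

```python
from collections import defaultdict

# each basis (node of the total graph) maps to the extreme point it represents;
# degenerate EPs receive several bases (hypothetical small example)
basis_to_ep = {'B1': 'v1', 'B2': 'v1', 'B3': 'v2',
               'B4': 'v3', 'B5': 'v3', 'B6': 'v3'}
basis_adjacency = {'B1': ['B2', 'B3'], 'B2': ['B1', 'B4'], 'B3': ['B1', 'B5'],
                   'B4': ['B2', 'B5'], 'B5': ['B3', 'B4', 'B6'], 'B6': ['B5']}

def representative_graph(basis_to_ep, basis_adjacency):
    """Collapse all bases of one EP into a single representative node."""
    adj = defaultdict(set)
    for b, nbrs in basis_adjacency.items():
        for n in nbrs:
            u, v = basis_to_ep[b], basis_to_ep[n]
            if u != v:                     # drop arcs inside one degenerate EP
                adj[u].add(v)
                adj[v].add(u)
    return dict(adj)

rg = representative_graph(basis_to_ep, basis_adjacency)
assert rg == {'v1': {'v2', 'v3'}, 'v2': {'v1', 'v3'}, 'v3': {'v1', 'v2'}}
```

Here 6 bases collapse to 3 representative nodes, mirroring the reduction from 14 to 8 nodes in the example above.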


The goal is to show that different concepts of the duality theory

for MCP are special cases of a more general theory.

Different pairs of dual problems were suggested by:

Gale, Kuhn, Tucker: (1), (2),
Kornbluth: (3), (4),
Isermann: (5), (6).

In [6] a vectorvalued Lagrangian is constructed for each case such
that a generalized saddle point of this Lagrangian determines the
optimal solutions of the pairs of dual MCP-s, and vice versa.

In what follows, the "generalized saddle point" is defined.

Def.: Let S ⊂ R^n, T ⊂ R^m, f: S×T -> R^r be given. The following
equivalent properties define a generalized saddle point
(x°, u°) ∈ S×T:

i) u° "solves" "Min" f(x°, u) and x° "solves" "Max" f(x, u°);

ii) u° "solves" "Min" f(x°, u) and ∀ x ∈ S, u ∈ T: f(x, u) ≵ f(x°, u°).

Now let L(x, u) = (c^T x)·1^T + (u^T(b - Ax))·1^T be the generalized
Lagrangian for (1)-(4), and let

G(x, U) = C^T x + U(b - Ax) be the generalized Lagrangian for (5) and (6).
Then we have the following statements:

Theorem: i) L(x, u) has a generalized saddle point (x°, u°) in
(x, u) ≥ 0 iff
x° "solves" (1) with some u°, y° iff
u° "solves" (2) with some u°, v° iff
x° "solves" (3) for some y° iff
u° "solves" (4) for some v°.

ii) G(x, U) has a generalized saddle point (x°, U°) in
x ≥ 0, U with v^T U ≥ 0,
v > 0, Σ_i v_i = 1, iff
x° "solves" (5) iff
U° "solves" (6).

These statements are a direct extrapolation of classical duality
theory: for the special case of a single objective function instead
of several, all the results reduce to the well-known statements of
classical duality theory.


[1] Gal, T.: A General Method for Determining the Set of All
Efficient Solutions to a Linear Vectormaximum Problem.
European Journal of Operational Research, forthcoming.

[2] Gal, T. and H. Leberling: Redundant Objective Functions in
Linear Vectorvalued Optimization and their Determination.
EJOR 1, 1977, 3, 176-184.

[3] Gal, T. and H. Leberling: Relaxation Analysis in Linear Vector-
valued Optimization. Working Paper No. 76/15, Inst. für
Wirtschaftswissenschaften, University of Aachen,
November 1976.

[4] Zimmermann, H.-J.: Fuzzy Programming and Linear Programming
with Several Objective Functions. Fuzzy Sets and
Systems, an Intern. Journal 1, 1977, No. 1.

[5] Gal, T.: Degenerate Polytopes, Related Graphs and an Algorithm.
Working Paper No. 77/05, Inst. für Wirtschaftswissen-
schaften, University of Aachen, April 1977.

[6] Rödder, W.: A Generalized Saddle-Point Theory as Applied to
Duality Theory for Linear Vector Optimization Problems.
EJOR 1, 1977, 55-59.

Pierre HANSEN and Michel DELATTRE

Faculté Universitaire Catholique de Mons, Belgium, and

Institut d'Economie Scientifique et de Gestion, Lille, France.

Let O denote a set of N entities and D = (d_kl) a matrix of dissimi-
larities defined on O×O. The diameter of a partition of O into M
clusters is defined as the maximum dissimilarity between entities in
the same cluster, and the split of such a partition as the minimum
dissimilarity between entities in different clusters. A partition
of O into M clusters is called efficient if and only if there is no
partition of O into not more clusters with smaller diameter and not
smaller split, or with larger split and not larger diameter. A graph-
theoretic algorithm which makes it possible to obtain a complete set
of efficient partitions is described. Some experiments are then
presented, designed to evaluate the potential of bicriterion cluster
analysis as a tool for the exploration of data sets, i.e. for
detecting the underlying structure of O, if and when it exists. Both
real data sets on psychological tests and on stock prices, and
artificial data sets are considered. The ability of bicriterion
cluster analysis to detect the best clusterings in many cases and to
show whether or not there are some natural clusterings is clearly
evidenced.


Cluster Analysis [1] [4] [10] [12] [19] is concerned with the very
general problem of grouping the entities of a given set into homoge-
neous and well-separated subsets, called clusters. To define a par-
ticular cluster analysis problem it is necessary to specify the desi-
red type of clustering - e.g. partition, covering or hierarchy of
partitions - and to make precise the concepts of homogeneity and se-
paration. This can be done in many ways. Indeed, quite a lot of
literature has focused upon cluster analysis problems; it has sug-
gested some exact algorithms as well as a large number of heuristics.
Most often, though, only homogeneity or sometimes only separation

of the clusters is taken into account. When both criteria are con-
sidered, it is usually only a posteriori, to evaluate the qualities
of the clusters which have been obtained. A truly bicriterion ap-
proach, providing a set of efficient clusterings, seems more adequate
in the many situations in which the trade-off between homogeneity
and separation is of interest.
In [6] [7] [8], such a bicriterion approach has been proposed for
partitioning the given set of entities with bottleneck-type criteria
for both homogeneity and separation. Thus, the values of the two
criteria depend on the maximum or minimum dissimilarity between
pairs of entities in the same or in different clusters. The results
obtained are summarized in the next section.
Cluster analysis has various purposes, the main ones being:
a) Classification. i.e. clustering to summarize information on large
sets of entities.
b) Prediction. i.e. clustering in order to be able to assign easily
new entities to one or another of the clusters obtained, and to pre-
dict their properties from those of the entities of that cluster.
c) Organization. i.e. clustering for operational reasons.
d) Exploration. i.e. clustering in order to reveal the underlying
structure of the given set of entities, if and when such a structure
exists.
Clustering for exploration is often a useful first step in the ela-
boration of scientific theories : the individual clusters may sug-
gest concepts and the clusterings may suggest hypotheses about the
given set of entities. The clusterings also make it possible to e-
valuate, usually in an informal way, the plausibility of given hypo-
theses or theories about the entities under study or about the popu-
lations they come from.
The main and largely unsolved problem is that of being able to dis-
tinguish between cases in which a clustering or individual cluster
corresponds to some structure of the given set of entities (or, more
precisely, of the data on that given set) and other cases in which
it is only a meaningless result automatically extracted by the algo-
rithm. A related and also difficult problem is to determine which is
the "best" number of clusters.
Some interesting attempts have been made by Ling [15], Ling and Kil-
lough [16], Hubert [11], Baker and Hubert [2] [3] and others to give
a formal answer to the question of the significance of a clustering.
The results obtained are based on random graph theory and some of

the underlying hypotheses appear to be very strong. Indeed, in the

null hypothesis, the dissimilarities between pairs of entities are
assumed to be randomly distributed and this may lead to configura-
tions which are not realisable in low-dimensional spaces i.e. which
may not correspond to any possible values for the data.
Whether an algorithm can provide natural clusterings often depends
on whether or not it retrieves such clusterings for well-studied re-
al or artificial data sets. One should also take into account the
importance of the differences between the values of the criteria
for those natural clusterings and for the other clusterings given
by the algorithm with the same data.
This paper describes the results of some experiments designed in or-
der to evaluate the potential of bicriterion cluster analysis as an
exploration tool. Section 3 analyzes two real data sets, one on
psychological tests and the other on stock prices and compares the
results with those obtained by other authors. Section 4 focuses on
artificial data sets with uniformly distributed points and with
points generated from multinormal distributions with different va-
riances. Some conclusions are drawn in section 5.


Let O = {O_1, O_2, ..., O_N} denote a given set of N entities and
D = (d_kl) a matrix of dissimilarities between pairs of entities O_k
and O_l of O for k, l = 1, 2, ..., N. The d_kl are real numbers which
satisfy the conditions a) d_kl ≥ 0, b) d_kl = d_lk and c) d_kk = 0 for
k, l = 1, 2, ..., N. Anderberg [1], Benzecri [4], Jardine and Sib-
son [12] and Sneath and Sokal [19], among others, have discussed
ways to construct dissimilarities from measurements or observations
of characteristics inherent in the entities of O.
Let P_M = {C_1, C_2, ..., C_M} denote a partition of O into M classes,
or clusters; let 𝒫_M denote the set of partitions of O into M
non-empty clusters and 𝒫̄_M the set of partitions of O into at most
M non-empty clusters. Let us define the diameter d(C_j) of a cluster
C_j of P_M as the maximum dissimilarity between entities of C_j and
the diameter d(P_M) of a partition P_M as the maximum diameter of the
clusters of P_M. A minimum diameter partition P*_M of O into M
clusters thus verifies

d(P*_M) = min {d(P_M) | P_M ∈ 𝒫_M}.

Assume the indices of the clusters of the partitions P_M of 𝒫_M are
chosen in such a way that d(C_1) ≥ d(C_2) ≥ ... ≥ d(C_M). By
definition, a lexicographically minimum diameter partition
P**_M = {C_1, C_2, ..., C_M} ∈ 𝒫_M is a partition such that 𝒫_M
contains no partition P'_M = {C'_1, C'_2, ..., C'_M} verifying, for
some non-negative integer p < M, d(C_k) = d(C'_k) for k = 1, 2, ..., p
and d(C_{p+1}) > d(C'_{p+1}).
Let us define the split s(P_M) of a partition P_M as the minimum dissi-
milarity between entities in different clusters of P_M. A maximum
split partition P̂_M of O into M clusters thus verifies

s(P̂_M) = max {s(P_M) | P_M ∈ 𝒫_M}.
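The two criteria are direct to compute from the dissimilarity matrix; a minimal sketch with a hypothetical 4-entity matrix:

```python
def diameter(P, d):
    """Maximum dissimilarity inside any cluster (0 if all clusters are singletons)."""
    return max((d[k][l] for C in P for k in C for l in C if k < l), default=0)

def split(P, d):
    """Minimum dissimilarity between entities in different clusters."""
    return min(d[k][l] for A in P for B in P if A is not B for k in A for l in B)

d = [[0, 1, 5, 6],        # hypothetical dissimilarity matrix, N = 4
     [1, 0, 4, 7],
     [5, 4, 0, 2],
     [6, 7, 2, 0]]
P = [[0, 1], [2, 3]]
assert diameter(P, d) == 2 and split(P, d) == 4
```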

Let us define a partition P_M of O into M clusters as efficient if
and only if there is no partition P'_M ∈ 𝒫̄_M such that d(P'_M) < d(P_M)
and s(P'_M) ≥ s(P_M), or such that s(P'_M) > s(P_M) and d(P'_M) ≤ d(P_M).
Note that P_M may have less than M non-empty clusters.
Let us call two efficient partitions P_M and P'_M ∈ 𝒫̄_M equivalent if
and only if d(P_M) = d(P'_M) and s(P_M) = s(P'_M). Finally, let us call
a set S of efficient partitions of O complete if and only if any
efficient partition of O either belongs to S or is equivalent to an
efficient partition of S, and minimal if and only if no pair of
efficient partitions of S are equivalent. If two equivalent efficient
partitions have different numbers of non-empty clusters, the partition
with the least number of clusters will be chosen for inclusion into S.
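Efficiency among a finite list of candidate partitions is then a dominance filter on the (diameter, split) pairs; a self-contained sketch with hypothetical data:

```python
def diameter(P, d):
    return max((d[k][l] for C in P for k in C for l in C if k < l), default=0)

def split(P, d):
    return min(d[k][l] for A in P for B in P if A is not B for k in A for l in B)

def efficient_partitions(cands, d):
    """Keep candidates not beaten on (diameter, split) by any other candidate:
    smaller diameter with no smaller split, or larger split with no larger
    diameter, means the other candidate wins."""
    vals = [(P, diameter(P, d), split(P, d)) for P in cands]
    return [P for P, dP, sP in vals
            if not any((dQ < dP and sQ >= sP) or (sQ > sP and dQ <= dP)
                       for Q, dQ, sQ in vals if Q is not P)]

d = [[0, 1, 5, 6],
     [1, 0, 4, 7],
     [5, 4, 0, 2],
     [6, 7, 2, 0]]
cands = [[[0, 1], [2, 3]], [[0, 1, 2], [3]], [[0], [1], [2, 3]]]
eff = efficient_partitions(cands, d)    # only [[0, 1], [2, 3]] survives
```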
The problem of determining a minimum diameter partition P*_M of O into
M clusters was solved by Rao [17] for the case M = 2; Rao also propo-
sed a mixed-integer programming formulation, with many constraints
and variables, for the general case. An efficient algorithm for that
case is proposed in [8].

A complete graph(1) G = (X,E) is associated with the set O of enti-
ties; a vertex x_k of X corresponds to each object O_k and a weight
equal to d_kl is given to each edge {x_k, x_l} ∈ E. A sequence of
partial graphs G_t = (X, E_t) of G, where E_t = {e_j = {x_k, x_l} |
d_kl ≥ t} and t takes the values of the d_kl in decreasing order, is
considered. These partial graphs G_t are colored in a minimum number
of colors, i.e. in γ(G_t) colors, where γ(G_t) denotes the chromatic
number of G_t. It is shown that a minimum diameter partition of O
into M clusters corresponds to the classes of vertices of the same
color in an optimal coloration of the last graph G_t colorable in M
colors.
Graphs G_t may often be easily colored because when a new edge
{x_k, x_l} is added to the current G_t the colors assigned to x_k and
x_l in the last coloring may be different, or a new coloring in the
same number of colors may be obtained by recoloration of x_k or x_l.
When this is not the case, a branch-and-bound coloring routine is
called for, which uses temporary fusions of vertices of the same color
and temporary suppressions of vertices of degree lower than the number
of colors currently used. This routine is interrupted as soon as a
coloring in the same number of colors as in the optimal coloring of
the previous G_t is found. If no such coloring exists, a minimum
diameter partition has been obtained when coloring the previous G_t.
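For very small instances the threshold-graph/coloring correspondence can be checked by brute force; the exhaustive M-coloring below stands in for the branch-and-bound routine, and the data are hypothetical:

```python
from itertools import product

def min_diameter_partition(d, M):
    """Decrease t over the dissimilarity values; edges {k,l} with d[k][l] >= t
    must join differently colored vertices. The optimal coloring of the last
    graph G_t colorable in M colors gives the minimum diameter partition.
    Exhaustive coloring: tiny instances only."""
    n = len(d)
    best = None
    for t in sorted({d[k][l] for k in range(n) for l in range(k)}, reverse=True):
        edges = [(k, l) for k in range(n) for l in range(k) if d[k][l] >= t]
        ok = next((col for col in product(range(M), repeat=n)
                   if all(col[k] != col[l] for k, l in edges)), None)
        if ok is None:
            break                  # the previous t already gave the answer
        best = ok
    if best is None:               # not even the sparsest G_t is M-colorable
        return [list(range(n))]
    clusters = [[k for k in range(n) if best[k] == c] for c in range(M)]
    return [C for C in clusters if C]

d = [[0, 1, 5, 6],
     [1, 0, 4, 7],
     [5, 4, 0, 2],
     [6, 7, 2, 0]]
assert min_diameter_partition(d, 2) == [[0, 1], [2, 3]]   # diameter 2
```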
A within-core FORTRAN IV code implementing that algorithm is descri-
bed in [8]; it has made it possible to solve some problems with real
and with artificial data involving up to 265 entities.
The computation time to obtain the minimum diameter partitions into
2 to 10 clusters with 265 entities was one hour on an IBM
370/125 computer. The storage required was 400 K-bytes. An in-core
out-of-core code for the same algorithm, presented in [6] and written

(1) The graph theory terminology used in this paper is in accordance with
that of "Graphs and Hypergraphs" by C. Berge [5].

in FORTRAN IV with an ASSEMBLER routine to store bitwise the adja-
cency matrix of G_t, has made it possible to obtain the minimum diame-
ter partitions into 2 to 15 clusters of a set of 600 entities. A va-
riant of the proposed algorithm, which makes it possible to obtain
lexicographically minimum diameter partitions, is also outlined in [6].
Let P denote the set of all partitions of O. It is shown in [7] that
the number of distinct values of the split s(P) for P ∈ P is at most
N−1, and that these values are equal to the dissimilarities associated with
the edges of a minimum spanning tree of G = (X, E). It follows from
this result that the maximum split partitions P_M of O are those
given by the well-known single-link hierarchical clustering algorithm
[13]. It is also clear that the split s(P) of a partition P
will be greater than or equal to the kth of its N−1 possible values
(assuming no ties) if and only if, for each of the k−1 smallest edges
of the minimum spanning tree of G, both endvertices belong to the
same cluster. Using this remark and the concept of reduced graph,
an efficient algorithm for bicriterion cluster analysis can be obtained
[7]. The minimum diameter partitions P'_M and maximum split partitions
P_M are first determined, for all useful values of M, by the algorithm
described above and by the single-link clustering algorithm respectively.
The values d(P'_M), s(P_M) and d(P_M) are noted (s(P'_M) is not necessarily
minimum for d(P'_M) given). The partition P_M is efficient unless
an equivalent or better partition into less than M clusters exists.
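The result cited above suggests a direct way to enumerate the N−1 candidate split values: run Kruskal's algorithm on the complete dissimilarity graph and collect the tree-edge weights. A minimal sketch (function names are ours, not the paper's):

```python
from itertools import combinations

def mst_edge_weights(D):
    """Kruskal's algorithm on the complete graph whose edge {a, b}
    has weight D[a][b].  Returns the N-1 minimum-spanning-tree edge
    weights in increasing order; every split value s(P) is one of
    them."""
    n = len(D)
    parent = list(range(n))

    def find(v):
        # Union-find with path halving.
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    weights = []
    for d, a, b in sorted((D[a][b], a, b)
                          for a, b in combinations(range(n), 2)):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
            weights.append(d)
    return weights
```

Deleting the M−1 largest of these tree edges yields the maximum split partition into M clusters, which is exactly what single-link clustering produces.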
A reduced graph G_Rt = (X_R, E_Rt) is then constructed for the first
useful value of M. The vertices x_Rj of X_R are associated with the
sets of vertices of the clusters C_j of P_M. An edge of E_Rt joins the
vertices x_Rk and x_Rl if and only if there exists a pair of entities
O_m ∈ C_k and O_n ∈ C_l such that d_mn ≥ t. The first value of t is
d(P_M). Then t is given the values of the dissimilarities d_mn between
the entities of O in decreasing order. The reduced graphs G_Rt are
colored optimally by recoloration tests, or by the branch-and-bound
routine. If x_m and x_n belong to a set of vertices corresponding to
the same vertex of G_Rt (i.e. if the addition of the edge {x_m, x_n}
induces a loop in G_Rt), or if G_Rt is no longer colorable in M colors,
an efficient partition has been obtained; it is defined by the color
classes of the optimum coloring of the previous G_Rt. Obtaining further
efficient partitions involves constructing a new reduced graph
G_Rt whose vertices are associated with the clusters of the
maximum split partition into one more cluster than before. If this
new reduced graph is colorable in M colors (and therefore contains
no loop), new values of t are considered and new edges added as before.
Otherwise another reduced graph G_Rt is constructed. The efficient
partitions into M clusters included in a minimal complete set
are obtained when t = d(P'_M), and efficient partitions into one more
cluster can then be sought. A program for that algorithm has been written in
FORTRAN IV and has made it possible to solve, entirely in core, problems
with up to 265 entities. Further results obtained with this program
are presented in the remainder of this paper.
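The edge rule defining the reduced graph can be sketched directly from its definition (a simplified illustration; the identifiers are ours, not the paper's):

```python
from itertools import combinations

def reduced_graph_edges(clusters, D, t):
    """clusters: list of lists of entity indices (the clusters C_j of
    a maximum-split partition).  An edge joins cluster-vertices k and
    l iff some pair of entities m in C_k, n in C_l has D[m][n] >= t."""
    edges = []
    for k, l in combinations(range(len(clusters)), 2):
        if any(D[m][n] >= t for m in clusters[k] for n in clusters[l]):
            edges.append((k, l))
    return edges
```

Coloring this much smaller graph, rather than G_t itself, is what makes the bicriterion algorithm economical.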


The first data set studied concerns 24 psychological tests taken by
145 children. The matrix of intercorrelations between these tests
is given in Harman's book on Modern Factor Analysis [9]. The dissimilarity
chosen is equal to 1 minus the intercorrelation. The diameters
d and splits s of a complete set of efficient partitions into 2
to 10 clusters are given in Table 1 and plotted on the diameter-split
map of Figure 1. Differences (d−s)/dm, where dm is the mean dissimilarity,
and ratios d/s are also listed; for each M the best partition
for these values is noted by an asterisk. In order to evaluate
the best number M of clusters, the differences Δ(d−s)/dm between
the smallest (d−s)/dm for partitions into less than M clusters and
into M clusters are noted when positive. The differences Δ(d/s), defined
analogously, are also noted. For both these criteria, the only
efficient partition 6_1 into 6 clusters appears to be the most interesting.
This is confirmed by the position of 6_1 on the diameter-split
map: a large decrease in diameter between 5_1 and 6_1 is observed
with a small decrease in split.
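The selection criteria just described can be computed mechanically from the (d, s) values of the efficient partitions. A small sketch, with identifiers of our own choosing and dm supplied by the caller (the Δ(d/s) column would be obtained analogously):

```python
def best_m_indicators(partitions, dm):
    """partitions: {M: [(d, s), ...]} listing the efficient partitions
    into M clusters.  Returns, per M, the smallest (d-s)/dm, and the
    improvement over all smaller M (noted only when positive), as used
    in Table 1 to suggest the best number of clusters."""
    best = {M: min((d - s) / dm for d, s in ps)
            for M, ps in partitions.items()}
    gain = {}
    for M in sorted(best)[1:]:
        prev = min(best[Mp] for Mp in best if Mp < M)
        delta = prev - best[M]
        if delta > 0:
            gain[M] = delta
    return best, gain
```

A large gain at some M, followed by only small gains, is the numerical counterpart of the gap seen on the diameter-split map.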
The names of the tests, the partition into 5 clusters obtained by
Harman with his B-coefficient method and partition 6_1 are given in Table
2. Harman associates factors, also noted in Table 2, with the first 4
clusters obtained; two factors are associated with cluster 4 and none
with cluster 5. In partition 6_1 both these clusters are divided: cluster
4 into 2 clusters, noted 5 and 6, corresponding to the two factors
of Harman, and cluster 5 into 2 parts aggregated to Harman's
clusters 2 and 3 (clusters 3 and 4 of 6_1). The similarity of the names
of tests 20, 22 and 23 with those of Harman's cluster 2 (e.g.
comprehension, deduction and reasoning; sentence completion and series
completion) and of the names of tests 21 and 24 with those of
Harman's cluster 3 (i.e. addition, code, counting dots, numerical
and arithmetical puzzles) is worth noting. As the precise content
of the tests is not described in Harman's book, any deeper interpretation
of the results must be left to psychologists.

Table 1. 24 psychological tests

Partition  Edges    d      s     d/s     Δ(d/s)   (d-s)/dm  Δ(d-s)/dm

 2_1          5   0.934  0.597  1.56              0.48
 2_2          7   0.905  0.588  1.54*             0.45*
 2_3         12   0.888  0.526  1.69              0.52
 2_4         16   0.876  0.415  2.11              0.66
 3_1         14   0.884  0.526  1.68              0.51
 3_2         34   0.833  0.511  1.63*             0.46*
 3_3         37   0.828  0.496  1.67              0.48
 3_4         38   0.828  0.286  2.90              0.78
 3_5         47   0.819  0.278  2.95              0.77
 4_1         54   0.808  0.511  1.58*             0.43*      0.02
 4_2         70   0.774  0.286  2.71              0.70
 5_1         74   0.770  0.511  1.51*    0.03     0.37*      0.06
 5_2        133   0.715  0.277  2.58              0.63
 6_1        160   0.683  0.511  1.34*    0.17*    0.25*
 7_1        168   0.676  0.511  1.32*    0.02     0.24*      0.01
 8_1        187   0.655  0.511  1.28*    0.04     0.21*      0.03
 9_1        197   0.645  0.511  1.26*    0.02     0.19*      0.02
10_1        198   0.643  0.511  1.26*             0.19*
10_2        200   0.642  0.497  1.29              0.21
10_3        213   0.620  0.488  1.27              0.19
10_4        217   0.614  0.381  1.61              0.33

Figure 1. Diameter-split map: 24 psychological tests. [Scatter plot of
diameter d (vertical axis, about 0.610 to 1.07) against split s
(horizontal axis, about 0.277 to 0.652); each efficient partition 2_1
to 10_4 of Table 1 is plotted as a point.]

Table 2. 24 psychological tests

Test                          B-coeff.    6_1 bicrit.   Factor
                              partition   partition     associated

1. Visual Perception 1 1
2. Cubes 1 1 Spatial
3. Paper Form Board 1 1 Relations
4. Flags 1 2

5. General Information 2 3
6. Paragraph Comprehension 2 3
7. Sentence Completion 2 3 Verbal
8. Word Classification 2 3
9. Word Meaning 2 3

10. Addition 3 4
11. Code 3 4 Perceptual
12. Counting Dots 3 4 Speed
13. Straight-Curved Capitals 3 4

14. Word Recognition 4 5

15. Number Recognition 4 5
16. Figure Recognition 4 Recognition
17. Object-Number 4 5 Associative
18. Number-Figure 4 6 Memory
19. Figure-Word 4 6

20. Deduction 5 3
21. Numerical Puzzles 5 4
22. Problem Reasoning 5 3
23. Series Completion 5 3
24. Arithmetic Problems 5 4

The second data set studied concerns stock price behaviour for 63
securities, and is given in a paper by King [14]. This author uses
cluster analysis to test the hypothesis of the existence of sectorial
factors which explain stock prices after extraction of a market factor.
To this effect the hierarchical centroid clustering method is
applied to the matrix of residual correlations between the time series
of monthly stock prices from May 1927 to December 1960. The
list of securities, the 6 sectors to which they belong, and the partition
into 6 clusters obtained by King are given in Table 4.
The same dissimilarity as for the problem of Harman was used when
applying bicriterion cluster analysis. The characteristics of the
efficient partitions into 2 to 10 clusters of a complete set are given
in Table 3 and the corresponding diameter-split map is represented
in Figure 2. It is seen that partitions 5_1 and 6_1 appear as the
most interesting ones: there is only one efficient partition into 5
clusters and one into 6 clusters, and both 5_1 and 6_1 correspond to
large values of Δ(d−s)/dm and of Δ(d/s). The clusters of these
partitions are also listed in Table 4.
Partition 5_1 agrees with King's hypothesis for the sectors of petroleum,
metals and rails, and partition 6_1 for the utilities sector as
well. The securities from the tobacco and stores sectors are mixed;
2 of them were also misclassified by the centroid method. This suggests
some common factor could explain the price behaviour for these
2 sectors. Partition 6_1 appears to be more strongly established
than 5_1, as the corresponding graph G_t contains 1217 edges against
655 edges for the G_t of 5_1. For 6_1 all residual correlations between
stock prices of securities within the same cluster are (almost)
positive. Both efficient partitions 5_1 and 6_1 have smaller diameters
than King's partition into 6 clusters, for which d = 1.000561,
and equal splits. That partition is therefore not efficient. King
also notes that the heuristic centroid clustering algorithm does
not possess a stopping rule indicating the best number of clusters.

Table 3. Evolution of 63 stock prices

Partition  Edges      d         s        d/s        Δ(d/s)     (d-s)/dm   Δ(d-s)/dm

 2_1          5   1.001750  0.999074  1.002678              0.002676
 2_2         18   1.001258  0.998758  1.002503              0.002500
 2_3         23   1.001137  0.998640  1.002500*             0.002497*
 3_1         33   1.001045  0.998640  1.002408              0.002396
 3_2         98   1.000816  0.998579  1.002240              0.002237
 3_3        129   1.000718  0.998522  1.002199*  0.000301   0.002196   0.000301
 3_4        131   1.000712  0.998505  1.002210              0.002207
 3_5        134   1.000711  0.998465  1.002249              0.002246
 4_1        211   1.000611  0.998579  1.002035   0.000164   0.002031*  0.000164
 4_2        276   1.000542  0.998311  1.002235              0.002231
 4_3        331   1.000484  0.997706  1.002784              0.002778
 4_4        441   1.000398  0.996971  1.003437              0.003427
 5_1        655   1.000282  0.998579  1.001705*  0.000330*  0.001703   0.000329*
 6_1       1217   1.000001  0.998579  1.001424*  0.000281   0.001422*  0.000281
 7_1       1290   0.999956  0.998579  1.001379*  0.000045   0.001377*  0.000045
 8_1       1433   0.999863  0.998579  1.001286*  0.000093   0.001284*  0.000093
 9_1       1535   0.999756  0.998579  1.001179*  0.000107   0.001177*  0.000107
10_1       1550   0.999735  0.998579  1.001158*  0.000021   0.001156*  0.000021
10_2       1572   0.999716  0.998522  1.001196              0.001194

Figure 2. Diameter-split map: evolutions of 63 stock prices. [Scatter
plot of diameter d against split s; the efficient partitions 2_1 to
10_2 of Table 3 are plotted as points.]

Table 4. King's data: evolutions of 63 stock prices

                                 King's      5_1         6_1
                                 partition   partition   partition

1. American Snuff 1 1 1
2. American Tobacco 1 1 1
3. Bayuk Cigars 6 5 6
4. Consolidated Cigar 6 5 6
5. General Cigar 1 1 1
6. G.W. Helme 1 1 5
7. Liggett and Myers 1 1 1
8. P. Lorillard 1 1 1
9. Philip Morris 1 5 6
10. Reynolds Tobacco 1 5 1
11. U.S. Tobacco 1 5 1
12. Continental Oil 2 2 2
13. Standard Oil (N.J.) 2 2 2
14. Texaco Inc. 2 2 2
15. Atlantic Refining Co 2 2 2
16. Pure Oil 2 2 2
17. Shell Oil 2 2 2
18. Skelly Oil 2 2 2
19. Socony Mobil Oil 2 2 2
20. Sun Oil 2 2 2
21. Tidewater Oil Co 2 2 2
22. Union Oil of California 2 2 2

23. Republic Steel 3 3 3
24. American Smelting and Ref. 3 3 3
25. American Steel Foundries 3 3 3
26. Bethlehem Steel 3 3 3
27. Calumet and Hecla 3 3 3
28. Inland Steel 3 3 3
29. Inspiration Cons. Copper 3 3 3
30. Interlake Iron Corp. 3 4 4
31. Magma Copper 3 3 3
32. U.S. Steel 3 3 3
33. Vanadium Corp. of America 3 3 3

Table 4 (continued). King's data: evolutions of 63 stock prices

                                 King's      5_1         6_1
                                 partition   partition   partition

34. Chesapeake and Ohio 4 4 4
35. Southern Pacific 4 4 4
36. Atchison Topeka and Santa Fe 4 4 4
37. Louisville and Nashville 4 4 4
38. Kansas City Southern 4 4 4
39. Missouri Kansas Texas 4 4 4
40. Northern Pacific 4 4 4
41. Union Pacific 4 4 4
42. New York Central 4 4 4
43. Reading Co. 4 4 4
44. Allegheny Power System 5 1 5
45. American and Foreign Power 5 1 5
46. Brooklyn Union Gas 5 1 5
47. Columbia Gas System 5 1 5
48. Consolidated Edison of N.Y. 5 1 5
49. Laclede Gas Co. 6 4 4
50. Peoples Gas 5 1 5
51. Southern California Ed. 5 1 5
52. Detroit Edison 5 1 5
53. Pacific Gas and Electric 5 1 5
54. Montgomery Ward 6 5 1
55. City Stores 6 5 6
56. Arnold Constable 6 5 6
57. Associated Dry Goods 6 5 6
58. Gimbel Bros 6 5 6
59. S.S. Kresge 6 1 1
60. S.M. Kress 6 1 1
61. May Department Stores 6 5 6
62. Outlet Co. 6 4 6
63. Sears Roebuck 6 5 1
d = 1.000561    d = 1.000282    d = 1.000001
s = 0.998579    s = 0.998579    s = 0.998579


Systematic experiments have been carried out with artificial data
to find out a) whether the bicriterion cluster analysis algorithm
avoids detecting a structure when the data are random and b) whether
it correctly determines the structure of the data when there is one.
A set of 75 points in R2 was randomly generated and the efficient
partitions into 2 to 10 clusters were determined. The characteristics
of these partitions are given in Table 5 and represented in
Figure 3. The number of efficient partitions in a complete set is
49 and there are at least 2 efficient partitions for each M. The
diameters d decrease almost monotonically, as do, to a lesser
degree, the splits s. The ratios d/s are high, i.e. always larger
than 4. The best values of d/s for each M decrease continuously, while for structured
data one or several minima separated by larger values are observed.

Table 5. Uniform data: 75 points in R2

Partition     d        s        d/s    (d-s)/dm

 2_1       103.93   14.850    6.999     1.723
 2_2       102.88   14.167    7.262     1.716
 2_3       102.56   12.385    8.281     1.744
 2_4       102.35    5.0615  20.221     1.882
 3_1        77.885  14.167    5.498     1.233
 3_2        77.024   9.2044   8.368     1.312
 4_1        72.820  14.167    5.140     1.135
 4_2        70.367  14.117    4.984     1.088
 4_3        67.170  11.422    5.881     1.078
 4_4        61.299   9.3372   6.565     1.005
 4_5        58.918   7.1512   8.239     1.001
 4_6        58.686   6.5271   8.991     1.009
 4_7        58.551   3.7793  15.493     1.060

Table 5 (continued). Uniform data: 75 points in R2

Partition     d        s        d/s    (d-s)/dm

 5_1        70.283  14.117    4.978     1.086
 5_2        67.170  13.968    4.809     1.029
 5_3        61.383  12.864    4.772     0.939
 5_4        61.299  10.039    6.106     0.992
 5_5        53.361   9.3372   5.715     0.852
 6_1        61.299  12.864    4.765     0.937
 6_2        59.713  11.478    5.202     0.933
 6_3        54.405   9.8641   5.515     0.862
 6_4        52.370   9.5502   5.484     0.828
 6_5        50.469   9.3372   5.405     0.796
 6_6        49.959   8.3660   5.972     0.805
 6_7        49.835   8.0362   6.201     0.809
 6_8        49.383   7.3530   6.716     0.813
 6_9        48.764   6.5559   7.438     0.816
 6_10       47.899   3.8599  12.409     0.852
 6_11       46.778   3.7026  12.634     0.833
 7_1        44.948   9.7239   4.622     0.681
 7_2        42.733   8.1068   5.271     0.670
 7_3        42.563   5.2870   8.051     0.721
 7_4        41.487   5.1462   8.062     0.703
 8_1        44.691   9.7239   4.596     0.676
 8_2        41.964   9.5502   4.394     0.627
 8_3        41.793   8.1068   5.155     0.652
 8_4        39.365   6.7852   5.802     0.630
 8_5        37.761   6.3904   5.909     0.607
 8_6        37.585   5.1462   7.303     0.628
 9_1        41.912   9.5502   4.389     0.626
 9_2        37.916   9.3372   4.061     0.553
 9_3        37.761   8.3529   4.521     0.569
 9_4        37.388   8.1068   4.612     0.566
 9_5        37.384   7.4506   5.018     0.579
 9_6        35.722   6.7852   5.265     0.560
10_1        37.39    9.337    4.004     0.543
10_2        34.27    8.353    4.103     0.501
10_3        34.09    8.107    4.206     0.503
10_4        32.83    8.036    4.086     0.480

Figure 3. Diameter-split map: uniform data. [Scatter plot of diameter
d against split s; the efficient partitions 2_2 to 10_4 of Table 5
are plotted as points.]

Table 6. Multinormal data: 75 points in R2 in 4 groups (s_i/2)

Partition  Edges     d       s      d/s     (d-s)/dm

 2_1       1250    68.2    56.8    1.20      0.20
 3_1       1380    64.4    40.0    1.61*     0.42
 4_1       2092    19.4    36.7    0.53*    -0.30
 5_1       2094    19.37    8.8    2.20*     0.18
 5_2       2096    19.1     5.3    3.60      0.24
 6_1       2101    17.7     5.3    3.32*     0.21
 6_2       2102    17.6     4.4    4.00      0.23
 6_3       2105    17.0     3.5    4.86      0.23
 7_1       2116    15.4     3.5    4.40*     0.20
 8_1       2119    15.2     3.5    4.34*     0.20
 8_2       2122    14.4     3.1    4.65      0.19
 9_1       2122    14.4     3.5    4.11*     0.19
 9_2       2126    14.14    3.14   4.50      0.19
 9_3       2127    14.11    3.12   4.52      0.19
 9_4       2134    13.7     2.8    4.89      0.19
10_1       2126    14.1     3.5    4.00*     0.18
10_2       2134    13.7     3.1    4.42      0.18
10_3       2187    11.7     2.8    4.18      0.15
10_4       2191    11.5     2.4    4.79      0.16

Table 7. Multinormal data: 75 points in R2 in 4 groups (s_i)

Partition  Edges     d       s      d/s     (d-s)/dm

 2_1        808    78.7    41.2    1.91*     0.63*
 3_1        857    77.3    16.1    4.80*     1.03*
 3_2        875    76.7     7.5   10.23      1.16
 3_3        985    74.1     5.7   13.00      1.15
 3_4        986    74.1     5.0   14.80      1.16
 4_1       2047    41.8    16.1    2.60*     0.43*
 5_1       2065    39.5    13.5    2.93*     0.44*
 6_1       2077    35.5    11.0    3.23*     0.41*
 6_2       2083    34.0     9.0    3.78      0.42
 6_3       2085    33.7     5.8    5.81      0.47
 6_4       2087    33.7     5.2    6.46      0.48
 7_1       2097    31.5    11.0    2.86*     0.34*
 7_2       2118    28.5     8.1    3.52      0.34
 8_1       2118    28.5     8.5    3.35*     0.34
 8_2       2129    27.3     7.5    3.64      0.33*
 9_1       2153    23.2     7.5    3.07*     0.26*
 9_2       2164    22.6     5.7    3.96*     0.28*
10_1       2168    22.1     5.7    3.88*     0.27*

Table 8. Multinormal data: 75 points in R2 in 4 groups (2 s_i)

Partition  Edges     d       s      d/s     (d-s)/dm

 2_1         38   142.6    26.7    5.34      1.72
 2_2        469   105.1    21.5    4.89*     1.24*
 2_3        480   104.4    10.7    9.76      1.39
 2_4        491   103.2    10.0   10.32      1.39
 3_1        133   130.8    21.7    6.03      1.62
 3_2        480   104.4    21.5    4.85*     1.23*
 3_3        506   102.3    13.1    7.81      1.33
 3_4        538   100.6    12.5    8.05      1.31
 3_5        541   100.6    10.0   10.06      1.35
 3_6        576    98.8     9.3   10.62      1.33
 4_1       1234    71.1    13.1    5.43*     0.86*
 5_1       1471    63.3    13.1    5.14*     0.75*
 5_2       1517    61.8     9.3    6.65      0.78
 6_1       1522    61.7    13.1    4.71*     0.72
 6_2       1719    53.5    10.0    5.35      0.65*
 7_1       1608    58.6    13.1    4.47      0.68
 7_2       1728    53.1    12.5    4.25*     0.60*
 7_3       1729    53.1     9.5    5.59      0.65
 7_4       1734    52.8     9.3    5.68      0.65
 7_5       1768    51.5     9.1    5.66      0.63
 7_6       1771    51.4     7.9    6.51      0.65
 7_7       1835    49.2     6.5    7.57      0.64
 8_1       1833    49.3    12.5    3.93*     0.55
 8_2       1835    49.2    11.2    4.39      0.57
 8_3       2021    41.9     9.5    4.41      0.48*
 9_1       1835    49.2    12.5    3.94      0.55
 9_2       2056    39.9     9.5    4.20      0.45*
 9_3       2066    39.6     6.7    5.91      0.49
10_1       2073    39.2     9.3    4.22*     0.44*
10_2       2115    37.0     6.7    5.52      0.45
10_3       2119    36.7     6.6    5.56      0.45
10_4       2167    34.7     4.3    8.07      0.45

Structured data were obtained from the well-known data of Ruspini
[18], which consist of 4 clusters of points in R2. The means x̄_i and
standard deviations s_i, i = 1, 2, 3, 4, of the coordinates of these
clusters' points were determined. Then 3 sets of clusters were randomly
generated from normal distributions with the same means x̄_i and
the same numbers of points, but with standard deviations s_i multiplied by
0.5, 1 and 2 respectively. Tables 6, 7 and 8 contain the lists of
efficient partitions into 2 to 10 clusters for these 3 data sets.
The data and the corresponding diameter-split maps are available
from the authors. It is seen that the number of efficient partitions
increases sharply when the data become less classifiable,
i.e. when the standard deviations are doubled. The ratios d/s also
increase with the standard deviations.
The fact that M = 4 is the best number of clusters is clearly indicated
by the obtaining of a single efficient partition 4_1 into 4
clusters for all 3 data sets. For the first 2 data sets a single
efficient partition 2_1 into 2 clusters is also obtained and corresponds
to good values of d/s and (d−s)/dm. The best values for
Δ(d/s) and Δ(d−s)/dm also indicate that the efficient partitions 4_1
into 4 clusters are the most natural ones. These experiments have
been repeated with stable results.
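The generation procedure for the three artificial data sets can be sketched as follows. The means, standard deviations and cluster sizes estimated from Ruspini's data are not reproduced in the paper, so the arguments below are placeholders:

```python
import random

def scaled_cluster_sets(means, stds, sizes, factors=(0.5, 1.0, 2.0), seed=7):
    """Generate one data set per scale factor: for each cluster i,
    draw sizes[i] points in R^2 from a normal law with mean means[i]
    and per-coordinate standard deviations factor * stds[i].
    means/stds stand in for the values estimated from Ruspini's four
    clusters."""
    rng = random.Random(seed)
    data_sets = []
    for f in factors:
        points = []
        for (mx, my), (sx, sy), n in zip(means, stds, sizes):
            points += [(rng.gauss(mx, f * sx), rng.gauss(my, f * sy))
                       for _ in range(n)]
        data_sets.append(points)
    return data_sets
```

With the factor-0.5 set the four groups are well separated; with the factor-2 set they overlap heavily, which is what drives the sharp increase in the number of efficient partitions seen in Table 8.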


Only tentative conclusions can be inferred from such a small number
of experiments. However, it appears clearly at this stage that:
a) The bicriterion cluster analysis algorithm proposed in [7] and
used here yields, in reasonable computation times, complete sets of
efficient partitions into up to 10 clusters, both for real and for
artificial data sets of up to 100 entities. It is therefore one of
the few exact algorithms of cluster analysis which are operational.
b) Several indications help to see if the data under study are
structureless: a large number of efficient partitions, more than 1 or 2
efficient partitions for all M, absence of gaps in the diameter-split map.
c) Several indications help to estimate which is (or are) the best
efficient partition(s) when the data possess some structure: 1
or 2 efficient partitions for that M, small values for (d−s)/dm and
for d/s among the efficient partitions into M clusters, large values
for Δ(d−s)/dm and for Δ(d/s), a gap in the diameter-split map.
Further experiments will perhaps suggest more refined formulae to
estimate which are the best partitions, e.g. formulae taking into
account the number of clusters, the dimensionality of the data and/or
the type of dissimilarity used. Substantive analyses with data
from various fields are also needed to see how much insight the set
of efficient partitions provides from the practitioner's point of view.

[1] Anderberg, M.R., "Cluster Analysis for Applications", New York:
Academic Press (1973).
[2] Baker, F.B. and L. Hubert, "Measuring the Power of Hierarchical
Cluster Analysis", J. Amer. Stat. Assoc., 70 (1975) 31-38.
[3] Baker, F.B. and L. Hubert, "A Graph-Theoretic Approach to Goodness-of-Fit
in Complete-Link Hierarchical Clustering", J. Amer.
Stat. Assoc., 71 (1976) 870-878.
[4] Benzecri, J.P. (et collaborateurs), "L'Analyse des données, 1.
La Taxinomie", Paris: Dunod (1973).
[5] Berge, C., "Graphes et hypergraphes", Paris: Dunod (1970);
English translation, Amsterdam: North-Holland (1973).
[6] Delattre, M. et P. Hansen, "Classification d'homogénéité maximum",
Actes des Journées "Analyse de Données et Informatique",
Versailles, septembre 1977, I, 99-104.
[7] Delattre, M. and P. Hansen, "Bicriterion Cluster Analysis", submitted.
[8] Hansen, P. and M. Delattre, "Complete-Link Cluster Analysis by
Graph Coloring", J. Amer. Stat. Assoc. (forthcoming).
[9] Harman, H.H., "Modern Factor Analysis", Chicago: University of
Chicago Press (1967).
[10] Hartigan, J.A., "Clustering Algorithms", New York: Wiley (1975).
[11] Hubert, L., "Approximate Evaluation Techniques for the Single
Link and Complete Link Hierarchical Clustering Procedures", J.
Amer. Stat. Assoc., 69 (1974) 698-704.
[12] Jardine, N. and R. Sibson, "Mathematical Taxonomy", London:
Wiley (1971).
[13] Johnson, S.C., "Hierarchical Clustering Schemes", Psychometrika,
32 (1967) 241-254.
[14] King, B.F., "Market and Industry Factors in Stock Price Behaviour",
Journal of Business, 39 (1966) 139-190.
[15] Ling, R.F., "A Probability Theory of Cluster Analysis", J.
Amer. Stat. Assoc., 66 (1973) 159-164.
[16] Ling, R.F. and C.G. Killough, "Probability Tables for Cluster
Analysis Based on a Theory of Random Graphs", J. Amer. Stat.
Assoc., 77 (1976) 293-299.
[17] Rao, M.R., "Cluster Analysis and Mathematical Programming",
J. Amer. Stat. Assoc., 66 (1971) 622-626.
[18] Ruspini, E.H., "A New Approach to Clustering", Information and
Control, 15 (1969) 22-32.
[19] Sneath, P.H. and R.R. Sokal, "Numerical Taxonomy", San Francisco:
W.H. Freeman and Company (1973).
Duality in Multiple Objective Linear Programming

H. Isermann
Fakultät für Wirtschaftswissenschaften
Universität Bielefeld

The paper relates three duality concepts in multiple objective
linear programming - the concepts of Gale-Kuhn-Tucker, Isermann and
Kornbluth - to each other and indicates some decision-oriented impli-
cations of duality.

Duality concepts in multiple objective linear programming were developed
for the first time by Gale, Kuhn and Tucker [2] as early as
1951. They considered a pair of general matrix problems of linear
programming, i.e. linear programming problems with a matrix-valued linear
objective function, and established some theorems of duality and
existence. As the matrix problems of linear programming contain the linear
programming problems with a vector-valued as well as a scalar-valued
objective function as special cases, the developed theory comprises the
respective theoretical framework for vector problems of linear programming
as well as for ordinary linear programming problems. Schönfeld
[11] slightly supplemented this duality concept. He also extended
the duality concept of Gale, Kuhn and Tucker to multiple objective
nonlinear programming problems [12].
Kornbluth [8] analyzed a dual pair of linear homogeneous parametric
vector problems of linear programming. His duality statements were
supplemented by Rödder [10], who also imbedded this duality concept
into a generalized saddlepoint theory. A further duality concept was
elaborated upon by Isermann [5,7].
The purpose of this paper is to relate the different duality con-
cepts to each other and to show how duality may be employed in order
to "solve" a multiple objective linear programming problem.
Before going further, for convenience, let us introduce the following
notation: R denotes the set of the real numbers, R_o the set of the
nonnegative real numbers and R_+ the set of the strictly positive real
numbers. Let both G and H be (m x n)-matrices. With regard to matrix
inequalities the following convention will be applied:

G ≧ H, if and only if, g_ij ≧ h_ij for all i = 1,...,m; j = 1,...,n
G ≥ H, if and only if, G ≧ H and G ≠ H
G > H, if and only if, g_ij > h_ij for all i = 1,...,m; j = 1,...,n.

The same rules naturally apply to vector inequalities. The transpose
of a matrix or a vector will be denoted by an upper index T.
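In code, the three matrix relations of this convention might be expressed as follows (an illustrative sketch of our own, not part of the paper; matrices are lists of rows):

```python
def geq(G, H):
    """G >= H componentwise (the weak relation of the convention)."""
    return all(g >= h for gr, hr in zip(G, H) for g, h in zip(gr, hr))

def dominates(G, H):
    """The middle relation: componentwise >= with G != H, i.e.
    strictly larger in at least one entry."""
    return geq(G, H) and G != H

def gt(G, H):
    """G > H: strictly larger in every entry."""
    return all(g > h for gr, hr in zip(G, H) for g, h in zip(gr, hr))
```

The middle relation is the one used below to define efficiency: a feasible D° is efficient when no feasible D' dominates it.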


Gale, Kuhn and Tucker [2] examined the following two general problems
of linear programming - each of which is based on the same given
information: an (m x n)-matrix A, an (m x r)-matrix B and a (k x n)-matrix
C - with D being a (k x r)-matrix of variables and x, y, u, v being
variables in (1) and (2), respectively:

"max" {D | Ax = By, Cx ≧ Dy, x ∈ R^n_o, y ∈ R^r_+}   (1)
"min" {D | A^T u ≧ C^T v, B^T u ≦ D^T v, u ∈ R^m, v ∈ R^k_+}.   (2)

Let (x°,y°,D°) be a feasible solution for (1). Then (x°,y°,D°) is said
to be an efficient or nondominated solution for (1), if and only if,
there is no other feasible solution (x',y',D') for (1) such that
D' ≥ D°. If (x°,y°,D°) is an efficient solution for (1), then D° is
said to be maximal for (1) under the partial ordering by the rules of
matrix inequalities introduced above. Problem (1) may be considered as
the problem of enumerating all maximal D°. An analogous solution
concept applies to problem (2).
Let k > 1, r = 1 and y = 1. Then B becomes an (m x 1)-vector, b, and
D becomes a (k x 1)-vector of variables, d, and the two general matrix
problems of linear programming reduce to the following vector problems:

"max" {d | Ax = b, Cx ≧ d, x ∈ R^n_o}   (3)
"min" {d | A^T u ≧ C^T v, b^T u ≦ d^T v, u ∈ R^m, v ∈ R^k_+}.   (4)

If k = 1, r = 1, y = v = 1, B becomes an (m x 1)-vector, b, C becomes a
(1 x n)-vector, c, and D becomes a scalar-valued variable, p. Then the
two general matrix problems of linear programming reduce to a dual pair
of ordinary linear programming problems:

max {p | Ax = b, cx ≧ p, x ∈ R^n_o}
min {p | A^T u ≧ c^T, b^T u ≦ p, u ∈ R^m}.

The main results of Gale, Kuhn and Tucker which are of interest in this
context are stated in the following theorem:
Theorem 1: Consider the problems (1) and (2).
(i) (1) has an efficient solution, if and only if, (2) has an efficient
solution.
(ii) Let (x°,y°,D°) be an efficient solution for (1). Then there exists
an efficient solution (u*,v*,D*) for (2) such that D° = D*.
An analogous statement holds for (2).
(iii) If (x°,y°,D°) is an efficient solution for (1), then Cx° = D°y°.
An analogous statement holds for (2).
(iv) Let (x°,y°,D°) and (u°,v°,D°) be feasible solutions for (1) and
(2), respectively, with Cx° = D°y° and B^T u° = D°^T v°. Then (x°,y°,D°)
and (u°,v°,D°) are efficient solutions for (1) and (2), respectively.


The following dual pair of vector problems of linear programming
was examined in [7]:

"max" {z = Cx | Ax = b, x ∈ R^n_o}   (5)
"min" {h = Ub | UAw < Cw for no w ∈ R^n_o}.   (6)

In (6), U is a (k x m)-matrix of variables. For k = 1, (5) and (6)
reduce to a dual pair of linear programs [7].
The existence and duality properties of (5) and (6) which are of
interest in this context are summarized in the following theorem:
Theorem 2: Consider the problems (5) and (6).
(i) The following statements are equivalent:
(1) Both (5) and (6) have a feasible solution;
(2) both (5) and (6) have an efficient solution, and
there exists at least one pair (x°,U°) of efficient solutions
such that Cx° = U°b;
(3) the linear program min {b^T u | A^T u − C^T v ≧ 0, v ≧ 1} has an
optimal solution (ū,v̄).
(ii) x° is an efficient solution for (5), if and only if, there exists
a feasible solution U° for (6) such that Cx° = U°b. U° is then
itself an efficient solution for (6).
(iii) U° is an efficient solution for (6), if and only if, there exists
a feasible solution x° for (5) such that Cx° = U°b. x° is
then itself an efficient solution for (5).
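Statement (ii) can be checked numerically on a toy instance of our own (not taken from the paper): maximize Cx = (x_1, x_2) subject to x_1 + x_2 = 1, x ≧ 0, so A = [[1, 1]], b = (1,), C the 2 x 2 identity. Every x° = (a, 1−a) with 0 ≦ a ≦ 1 is efficient, and the candidate dual U° = (a, 1−a)^T, a (2 x 1)-matrix, satisfies Cx° = U°b; its feasibility for (6) is probed on a grid of w ≧ 0:

```python
def duality_check(a, grid=20):
    """For x° = (a, 1-a) and U° = [[a], [1-a]]: verify Cx° = U°b and
    that the system U°Aw < Cw (componentwise strict) has no solution
    on a grid of w >= 0, as dual feasibility in (6) requires."""
    Cx = (a, 1.0 - a)
    Ub = (a * 1.0, (1.0 - a) * 1.0)      # U°b with b = (1,)
    if Cx != Ub:
        return False
    for i in range(grid + 1):
        for j in range(grid + 1):
            w1, w2 = i / grid, j / grid
            # U°A = [[a, a], [1-a, 1-a]], so U°Aw has equal weights.
            UAw = (a * (w1 + w2), (1.0 - a) * (w1 + w2))
            if UAw[0] < w1 and UAw[1] < w2:
                return False                 # dual infeasible
    return True
```

Here infeasibility is in fact impossible for any 0 ≦ a ≦ 1: summing the two strict inequalities gives w_1 + w_2 < w_1 + w_2, a contradiction, so the grid search never finds a violating w.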
As will be illustrated in Section 3.1, the dual vector problem (6)
turns out to be that problem which is simultaneously solved when
solving the vector maximum problem (5) by a multiple objective simplex
method. The relationship between the pair of problems (5) and (6) and
the respective pair of vector problems of Gale, Kuhn and Tucker is
stated in the following theorem:

Theorem 3: Consider the vector problems of linear programming (3) and
(4) as well as (5) and (6).
(i) x° is an efficient solution for (5), if and only if, (x°,d°) with
d° = Cx° is an efficient solution for (3).
(ii) U° is an efficient solution for (6), if and only if, there exists
a v° ∈ R^k_+ such that (u°,v°,d°) with u° = U°^T v° and d° = U°b is an
efficient solution for (4).
Proof. As statement (i) is obvious, we shall only prove (ii). Let U° be
a feasible solution for (6). Then the system U°Aw < Cw, w ∈ R^n_o has no
solution w, and by Motzkin's theorem of the alternative ([9], pp. 28)
the system v^T U°A ≧ v^T C, v ∈ R^k_+ has a solution v°. Let u° = U°^T v° and
d° = U°b. Then (u°,v°,d°) is a feasible solution for (4). Now, let
(u°,v°,d*) be a feasible solution for (4). If b^T u° = d*^T v°, we put
d° = d*; if b^T u° < d*^T v°, we construct d° ≦ d* such that b^T u° = d°^T v°
holds. Obviously, (u°,v°,d°) is a feasible solution for (4). From
u°, v° and d° we can construct a (k x m)-matrix U° the coefficients of which,
u°_lj, are a solution of the m + k linear equations

Σ_{l=1}^{k} v°_l u°_lj = u°_j   (j = 1,...,m),

Σ_{j=1}^{m} b_j u°_lj = d°_l   (l = 1,...,k).

Note that d° = U°b. As v° solves the system v^T U°A ≧ v^T C, v ∈ R^k_+, by
Motzkin's theorem of the alternative the system U°Aw < Cw, w ∈ R^n_o has no
solution w, and U° is a feasible solution for (6).
Now, let U° be an efficient solution for (6) and assume, to the
contrary, that (u°,v°,d°) with u° = U°^T v° and d° = U°b is not an efficient
solution for (4). Then there exists a feasible solution (u',v',d') for
(4) with b^T u' = d'^T v' such that d' ≤ d° and, by the above argument, a
feasible solution U' such that U'b = d' ≤ d° = U°b, which, however, is
in contradiction to the efficiency of U°. Let (u°,v°,d°) be an efficient
solution for (4) and assume, to the contrary, that the respective
feasible solution U° with U°b = d° and U°^T v° = u° is not an efficient
solution for (6). Then there exists a feasible solution U' for (6) and,
by the above argument, a feasible solution (u',v',d') for (4) with
u' = U'^T v', d' = U'b and d' ≤ d°, which, however, implies a contradiction
to the efficiency of (u°,v°,d°). This completes the proof.


Kornbluth [8] discussed the following pair of multiple objective
linear programming problems:

"max" {z = Cx | Ax = By, x ∈ R^n_o, y ∈ R^r_+}   (7)
"min" {g = B^T u | A^T u ≧ C^T v, u ∈ R^m, v ∈ R^k_+}.   (8)

Either problem may be regarded as a homogeneous multiparametric vector
problem of linear programming with r, respectively k, strictly positive
parameters at the right-hand side. Thus, (x°,y°) is said to be an efficient
solution for (7), if and only if, there is no feasible (x',y°)
such that Cx' ≥ Cx° holds, and (u°,v°) is said to be an efficient solution
for (8), if and only if, there is no feasible (u',v°) such that
B^T u' ≤ B^T u°. Kornbluth directs his attention to properly efficient
solutions ([8], p. 602) for (7) and (8). However, for each vector problem
of linear programming each efficient solution is also properly efficient
[~].

The following theorem states the close relation between the problems
(7) and (8) and the pair of matrix problems of linear programming (1)
and (2).
Theorem 4: Consider the problems (1) and (2) as well as (7) and (8).
(i) (x°,y°) is an efficient solution for (7), if and only if, there
exists a (k x r)-matrix D° such that (x°,y°,D°) is an efficient solution
for (1);
(ii) (u°,v°) is an efficient solution for (8), if and only if, there
exists a (k x r)-matrix D° such that (u°,v°,D°) is an efficient solution
for (2).
Proof. Let (x°,y°,D°) be an efficient solution for (1) and assume, to
the contrary, that (x°,y°) is not an efficient solution for (7). Then
there exists a feasible solution (x',y°) such that Cx' ≥ Cx°. Let
i' ∈ {1,...,k} be such that c_{i'}x' > c_{i'}x°, where c_{i'} denotes the i'-th
row vector of C. Then (x',y°,D') with d'_{iq} = d°_{iq} for all i = 1,...,k
and all q = 1,...,r such that (i,q) ≠ (i',q'), and
d'_{i'q'} = d°_{i'q'} + (c_{i'}x' − c_{i'}x°)/y°_{q'} for some index q'
(every y°_q is strictly positive), is, because of Cx' ≧ D'y°, a feasible
solution for (1) which, however, leads to the contradictory inequality D' ≥ D°.
Let (x°,y°) be an efficient solution for (7). Then according to statement
(ii) of Theorem 2 there exists a (k x m)-matrix U° such that the
system U°Aw < Cw, w ∈ R^n_o has no solution w, which implies

U°Ax < Cx for no x ∈ R^n_o,
U°By < Cx for no (x,y) ∈ {(x,y) | Ax = By, x ∈ R^n_o, y ∈ R^r_+},

and by substituting D° = U°B

D°y < Cx for no (x,y) ∈ {(x,y) | Ax = By, x ∈ R^n_o, y ∈ R^r_+}.   (9)

Suppose (x°,y°,D°) is not efficient for (1). Then there exists a
feasible solution (x',y',D') for (1) such that D' ≥ D°. As (x',y',D')
is feasible for (1), Cx' ≧ D'y' ≧ D°y' holds, which, however, contradicts
(9). Hence (x°,y°,D°) is an efficient solution for (1). This
completes the proof, as a similar argument may be applied to prove
statement (ii).
Theorem 4 implies that the existence and duality theory for the pair of problems (1) and (2) applies to the pair of problems (7) and (8).
Moreover, the second part of the proof of Theorem 4 discloses that the set of all efficient solutions for problem (1), and hence for problem (7), is easily determined by a slightly modified version of a multiple objective simplex method [1, 13, 14, 6]. We shall outline this point in Section 3.2.


The ultimate aim with regard to multiple objective decision making
is to support the decision maker in finding a compromise solution which
reconciles the conflicting objectives in accordance with the decision
maker's preference system. We shall now indicate how a duality concept
may provide assistance to the decision maker in his search for an effi-
cient solution in view of which no other feasible solution is preferred.
A more detailed discussion of this point with several examples is found
in [5].
Everyone who is familiar with linear programming knows that an op-
timal simplex tableau yields both an optimal solution for the primal
problem and an optimal solution for the respective dual problem. Likewise, the multiple objective simplex tableau for the vector problem of linear programming (5) yields, in connection with each efficient basic solution x^0 for (5), an efficient basic solution U^0 for (6) such that Cx^0 = U^0b. In order to verify this statement we shall consider the multiple objective simplex tableau for the primal problem (5). Let Ā = (A, I) with I being an (m × m)-identity matrix, and C̄ = (C, 0) with 0 being a (k × m)-zero matrix. F denotes an (m × m)-matrix of basic vectors, C_F denotes the (k × m)-matrix of criteria coefficients corresponding to the basic vectors in F, x_F is the m-dimensional vector of the basic variables, and x_N is the (n - m)-dimensional vector of the nonbasic variables. The initial multiple objective simplex tableau can now be written as
     A    I   |   b
    -C    0   |   0 .

With respect to F this multiple objective simplex tableau can be transformed into

     F^{-1}A            F^{-1}      |   F^{-1}b
     C_F F^{-1}A - C    C_F F^{-1}  |   C_F F^{-1}b .
Let x^0 with x^0_F = F^{-1}b, x^0_N = 0 be a feasible basic solution for the primal vector problem (5) which is also dual feasible, i.e. for U^0 = C_F F^{-1} the system U^0Aw < Cw, w ∈ R^n_○, has no solution w. Then, because of Cx^0 = C_F x^0_F = C_F F^{-1}b = U^0b, either solution is efficient for the respective vector problem of linear programming and the multiple objective simplex tableau can be written as

     F^{-1}A            F^{-1}   |   x^0_F
     C_F F^{-1}A - C    U^0      |   Cx^0 .                       (11)
Let us depart from the multiple objective simplex tableau (11) in order to see that the dual feasibility of a feasible solution for the primal problem (5) immediately implies the efficiency of this solution. From

U^0Aw < Cw for no w ∈ R^n_○

follows immediately

C_F F^{-1}b < Cx for no x ∈ R^n_○ such that Ax = b

and hence

Cx^0 < Cx for no x ∈ R^n_+ such that Ax = b.

Thus a sufficient criterion for a feasible basic solution x^0 for (5) to be efficient is its dual feasibility, i.e. the associated matrix U^0 = C_F F^{-1} has to be feasible for (6).
Theorem 5: Let x^0 be a feasible basic solution for (5). If (C_F F^{-1}A - C)w < 0 for no w ∈ R^n_○, then x^0 is efficient for (5).
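The dual-feasibility criterion of Theorem 5 can also be checked numerically. The sketch below uses a standard auxiliary linear program (an equivalent efficiency test, not the formulation of this paper): a feasible x^0 is efficient for "max" {Cx | Ax = b, x ≥ 0} exactly when max {1'v | Cx - v = Cx^0, Ax = b, x ≥ 0, v ≥ 0} has optimal value 0. All data, and the use of scipy, are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def is_efficient(A, b, C, x0, tol=1e-9):
    """Test efficiency of a feasible x0 for "max" {Cx | Ax = b, x >= 0}
    via the auxiliary LP  max 1'v  s.t.  Cx - v = C x0, Ax = b, x, v >= 0:
    x0 is efficient exactly when the optimal value is 0."""
    m, n = A.shape
    k = C.shape[0]
    # linprog minimizes, so minimize -1'v over the stacked variable (x, v)
    obj = np.concatenate([np.zeros(n), -np.ones(k)])
    A_eq = np.block([[A, np.zeros((m, k))],
                     [C, -np.eye(k)]])
    b_eq = np.concatenate([b, C @ x0])
    res = linprog(obj, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * (n + k))
    # an unbounded auxiliary LP also certifies that x0 is dominated
    return res.status == 0 and -res.fun <= tol
```

The test scalarizes nothing: it simply asks whether any feasible point dominates Cx^0 componentwise.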
Hence the multiple objective simplex method [1,13,6] can be seen as
an approach that seeks at first feasibility for the dual problem while
maintaining feasibility in the primal problem. As soon as feasible
solutions for both problems have been determined, an initial pair
(xo,Uo) of efficient solutions is at hand such that exo = UOb. The

further procedure of the multiple objective simplex method is then di-

rected to finding all feasible solutions for the primal problem while
maintaining feasibility in the dual problem.
Once an efficient basic solution has been identified the decision
maker may be interested in exploring efficient solutions which are
adjacent [5] to that efficient solution already found in order to
gather local information about the interdependence among the k con-
sidered linear objective functions and to decide on the basis of this
information whether another efficient solution is preferred to the
current solution. In the sequel we shall assume that the set of fea-
sible solutions for (5) is a bounded set.
Consider the multiple objective simplex tableau (11) with an effi-
cient basic solution for (5), xO, which is dual feasible for (6), and
the set of all nonbasic variables as a starting set of variables. This
set may be divided into two subsets:
(i) those nonbasic variables which, when introduced into the basis,
lead to an adjacent efficient basic solution - we shall call these
variables efficient variables;
(ii) those nonbasic variables which, when introduced into the basis,
lead to a nonefficient basic solution - we shall call these vari-
ables nonefficient variables.
A classification of the nonbasic variables into the categories of effi-
cient variables, each of which indicates the existence of an adjacent
efficient basic solution, and nonefficient variables is provided by the
following theorem [5].
Theorem 6: Let x^0 be an efficient basic solution for (5) which is dual feasible for (6), and let U^0 = C_F F^{-1}. There exists at least one further efficient basic solution for (5) which is dual feasible for (6) and adjacent to x^0, if and only if, there exists some index i of a nonbasic vector a_i of A such that

(U^0(a_i, a_1, ..., a_{i-1}, a_{i+1}, ..., a_n) - (c_i, c_1, ..., c_{i-1}, c_{i+1}, ..., c_n))w < 0 for no w ∈ R × R^{n-1}_○.   (12)

In order to determine all indices i for which (12) holds, a sequence of linear programs with k constraints may be solved [6, 15].
Applying Theorem 6 to the multiple objective simplex tableau of the current efficient basic solution, the decision maker will identify all nonbasic variables that are efficient variables in the above sense and thus lead to adjacent efficient basic solutions. Obviously, if x^0 is an efficient basic solution for (5) which is dual feasible for (6), and if (12) does not hold for any index i of a nonbasic vector a_i of A, then all nonbasic variables are nonefficient and x^0 is the unique efficient solution for (5).
The classification of the nonbasic variables into efficient and nonefficient variables also implies a classification of the associated trade-off vectors c̄_i = c_i - U^0 a_i into efficient and nonefficient trade-off vectors. Recall that -c̄_i can be read from the multiple objective simplex tableau (11). c̄_i represents the change of the value z^0 = Cx^0 due to increasing the nonbasic variable x_i by one unit. If c̄_i is an efficient trade-off vector, c̄_i offers a direction of movement in the objective space which leads from the value z^0 of the present efficient solution x^0 to a set of values of z with which efficient solutions can be associated.
If the efficient variable x_i associated with c̄_i is made a basic variable, the trade-offs specified by c̄_i can be realized at a level λc̄_i with 0 ≤ λ ≤ λ^0, where λ^0 denotes the value of x_i when x_i becomes a basic variable. However, the set of trade-off vectors associated with the efficient variables x_i does not provide an exhaustive list of all efficient trade-off vectors which can be utilized at x^0. Full information on this point is obtained with the aid of the following theorem [5].
Theorem 7: Let x^0 be an efficient basic solution for (5) which is dual feasible for (6), and let U^0 = C_F F^{-1}. Let {i_1, ..., i_p} be some index set of nonbasic vectors a_{i_1}, ..., a_{i_p} of A such that

(U^0(a_{i_1}, ..., a_{i_p}) - (c_{i_1}, ..., c_{i_p}))w_1 + (U^0(a_{i_{p+1}}, ..., a_{i_n}) - (c_{i_{p+1}}, ..., c_{i_n}))w_2 < 0   (13)

for no (w_1, w_2) ∈ R^p × R^{n-p}_○. Then each trade-off vector c̄ which can be represented in the form

c̄ = Σ_{r=1}^{p} α_{i_r} c̄_{i_r}   (Σ_{r=1}^{p} α_{i_r} = 1; α_{i_r} ≥ 0, r = 1, ..., p)

is an efficient trade-off vector.

A method which allows one to determine all index sets {i_1, ..., i_p} which satisfy (13) is described in [6].
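The quantities appearing in Theorems 6 and 7, the matrix U^0 = C_F F^{-1} and the trade-off vectors c_i - U^0 a_i of the nonbasic columns, can be computed directly from the problem data. A minimal numpy sketch; the dense data and the choice of basis index are hypothetical:

```python
import numpy as np

def tradeoff_vectors(A, C, basic):
    """For a basis F = A[:, basic] of "max" {Cx | Ax = b, x >= 0},
    return U0 = C_F F^{-1} together with the trade-off vector
    c_i - U0 a_i of every nonbasic column i."""
    F = A[:, basic]
    U0 = C[:, basic] @ np.linalg.inv(F)          # U0 = C_F F^{-1}
    nonbasic = [j for j in range(A.shape[1]) if j not in basic]
    return U0, {j: C[:, j] - U0 @ A[:, j] for j in nonbasic}
```

Each returned vector is the per-unit change of all k objective values when the corresponding nonbasic variable enters the basis.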
Once the set of all efficient trade-off vectors has been obtained with respect to the efficient solution x^0, it is up to the decision maker to select those directions in the objective space which are of interest to him. Once these directions have been determined, a series of efficient solutions can be specified with respect to each direction of interest in order to allow for the case that the unknown utility function of the decision maker is nonlinear.


The equivalence - as stated in Theorem 4 - between Kornbluth's homogeneous multiparametric vector problem of linear programming and the matrix problem of linear programming analyzed by Gale, Kuhn and Tucker may justify a brief discussion of the matrix problem of linear programming.
Consider the matrix problem (1), for which the initial multiple objective simplex tableau may be written as


     A    I   |   B
    -C    0   |   0

with B being an (m × r)-matrix and the 0 below B being a (k × r)-zero matrix. With respect to F this multiple objective simplex tableau can be transformed into

     F^{-1}A            F^{-1}      |   F^{-1}B
     C_F F^{-1}A - C    C_F F^{-1}  |   C_F F^{-1}B .             (14)

Let (x^0, y^0, D^0) be a feasible basic solution for (1) with x^0_F = F^{-1}By^0, x^0_N = 0 and D^0 = C_F F^{-1}B. Moreover, let U^0 = C_F F^{-1} and U^0Aw < Cw for no w ∈ R^n_○. Then by the proof of Theorem 4, D^0 is maximal for (1) and

Cx^0 = C_F F^{-1}By^0 = U^0By^0 = D^0y^0.
Theorem 8: Let (x^0, y^0, D^0) be a feasible basic solution for (1). If (C_F F^{-1}A - C)w < 0 for no w ∈ R^n_○, then D^0 = C_F F^{-1}B is maximal for (1).
The matrix problem dual to the matrix problem (1), which can be gathered from the multiple objective simplex tableau (14), reads:

"min" {D | UAw ≥ Cw, UBw ≤ Dw for no w ∈ R^n_○}.   (15)

The relationship between problem (15) and problem (2) is stated in the following theorem.
Theorem 9: Consider the problems (2) and (15). (U^0, D^0) with D^0 = U^0B is an efficient solution for (15), if and only if, there exists a v^0 ∈ R^k_+ such that (u^0, v^0, D^0) with u^0 = U^{0T}v^0 is an efficient solution for (2).
The proof of Theorem 9 can be performed analogously to that of Theorem 3. It is evident that decision-oriented implications of duality similar to those of Section 3.1 apply to matrix problems of linear programming as well.
If we want to determine all D^0 which are maximal for (1), we are first interested in determining an initial D^0 which is maximal for (1), or in making certain that problem (1) has no efficient solution.
Theorem 10: The following statements are equivalent:
(i) Both (1) and (2) have an efficient solution;
(ii) the bilinear program

min {g = uBy | uA - vC ≥ 0, y ≥ 1, v ≥ 1}   (16)

has an optimal solution.

The proof of Theorem 10 follows immediately from statement (ii) of Theorem 2 and Theorem 3. An algorithm by which (16) can be solved is found in [3]. Provided that (16) has an optimal solution (y^0, u^0, v^0), an initial efficient basic solution for (1) which is dual feasible for (2) and (15) can be obtained by solving the linear program

max {v^{0T}Cx | Ax = By^0, x ∈ R^n_+}   (17)

as the multiple objective simplex tableau (14) specifying the initial maximal D^0 is easily constructed from the simplex tableau which specifies the optimal solution x^0 for (17).
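Once (y^0, v^0) from (16) are at hand, (17) is an ordinary scalarized linear program. A sketch using scipy's linprog (hypothetical data; linprog minimizes, so the objective v^{0T}Cx is negated):

```python
import numpy as np
from scipy.optimize import linprog

def solve_scalarized(A, B, C, y0, v0):
    """Sketch of program (17): max v0' C x  s.t.  Ax = B y0, x >= 0.
    The optimal basis of this LP yields the initial maximal D0."""
    res = linprog(-(C.T @ v0),                 # negate: linprog minimizes
                  A_eq=A, b_eq=B @ y0,
                  bounds=[(0, None)] * A.shape[1])
    return res.x
```

From the optimal basis indices of this LP one would then read off F and form D^0 = C_F F^{-1}B as in tableau (14).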
Let D' = C_{F'}(F')^{-1}B and D'' = C_{F''}(F'')^{-1}B be maximal for (1). D' and D'' are said to be adjacent if F' and F'' have m-1 basic vectors in common and there exists some y^0 ∈ R^r_+ such that each (x^0, y^0, D^0) with x^0 = αx' + (1-α)x'', D^0 = αD' + (1-α)D'' and 0 ≤ α ≤ 1 is efficient for (1).

Let J denote the index set of all D^j = C_{F^j}(F^j)^{-1}B which are maximal for (1). Then a multiparametric version of a multiple objective simplex method [1, 13, 6] can be applied to determine all D^j (j ∈ J), as the following theorem obviously holds:
Theorem 11: Let E = {D^j | j ∈ J}, L = {(D^i, D^j) | D^i and D^j are adjacent (i, j ∈ J)} and G = (E, L) the undirected solution graph associated with (1). Then G is finite and connected.
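Theorem 11 is what makes neighbourhood enumeration sound: since G is finite and connected, a graph traversal starting from the one initial maximal solution obtained via (16) and (17) reaches every maximal D^j. A generic breadth-first sketch, where the `neighbors` oracle (an assumption here, not specified by the paper) would generate the adjacent maximal solutions by basis pivots:

```python
from collections import deque

def enumerate_maximal(start, neighbors):
    """Breadth-first enumeration over a finite connected solution graph:
    `neighbors(node)` returns the maximal solutions adjacent to `node`.
    Connectedness guarantees that every node is eventually visited."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in neighbors(node):      # adjacent maximal solutions
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen
```

The traversal visits each node and edge once, so the work is proportional to the size of G plus the cost of the pivoting oracle.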

References

1. Evans, J.P., and R.E. Steuer, A Revised Simplex Method for Linear Multiple Objective Programs, Mathematical Programming, Vol. 5, 54-72 (1973).
2. Gale, D., Kuhn, H.W., and A.W. Tucker, Linear Programming and the Theory of Games, in: T.C. Koopmans (ed.), Activity Analysis of Production and Allocation, 317-329, John Wiley & Sons, New York, 1951.
3. Gallo, G., and A. Ülkücü, Bilinear Programming: An Exact Algorithm, Mathematical Programming, Vol. 12, 173-194 (1977).
4. Isermann, H., Proper Efficiency and the Linear Vector Maximum Problem, Operations Research, Vol. 22, 189-191 (1974).
5. Isermann, H., The Relevance of Duality in Multiple Objective Linear Programming, in: TIMS Studies in the Management Sciences 6, 241-262, North-Holland Publishing Company, New York-Amsterdam, 1977.
6. Isermann, H., The Enumeration of the Set of All Efficient Solutions for a Linear Multiple Objective Program, Operational Research Quarterly, Vol. 28, 711-725 (1977).
7. Isermann, H., On Some Relations between a Dual Pair of Multiple Objective Linear Programs, Zeitschrift für Operations Research (forthcoming).
8. Kornbluth, J.S.H., Duality, Indifference and Sensitivity Analysis in Multiple Objective Linear Programming, Operational Research Quarterly, Vol. 25, 599-614 (1974).
9. Mangasarian, O.L., Nonlinear Programming, McGraw-Hill, New York, 1969.
10. Rödder, W., A Generalized Saddlepoint Theory, European Journal of Operational Research, Vol. 1, 55-59 (1977).
11. Schönfeld, K.P., Effizienz und Dualität in der Aktivitätsanalyse, Doctoral Dissertation, Free University of Berlin, 1964.
12. Schönfeld, K.P., Some Duality Theorems for the Non-Linear Vector Maximum Problem, Unternehmensforschung, Vol. 14, 51-63 (1970).
13. Yu, P.L., and M. Zeleny, The Set of All Nondominated Solutions in Linear Cases and a Multicriteria Simplex Method, Journal of Mathematical Analysis and Applications, Vol. 49, 430-468 (1975).
14. Zeleny, M., Linear Multiobjective Programming, Springer-Verlag, Berlin-New York, 1974.
15. Zionts, S., and J. Wallenius, Identifying Efficient Vectors: Some Theory and Computational Results, Working Paper No. 257, State University of New York at Buffalo, School of Management, November.
Erik Johnsen
The Copenhagen School of Economics
and Business Administration
Copenhagen I Denmark

The present study was initiated as a consequence of critical condi-
tions for many small Danish firms.
The small firm in Denmark averages about 15 employees. It is normal
that the manager is the owner and the key person in the firm.
It is characteristic that the development of the firm and the develop-
ment of the owner/manager is parallel: the start period with the
creative and development-oriented manager, the growth period with
the adaptation-oriented manager and the stagnation period with the
control-oriented manager.
The crisis for the firm is a personal crisis for the owner/manager.
The environment has changed and demands changes in the structure and
function of the firm. But the manager has forgot then how to adapt
and develop, he has been trapped in administrative work for several
Multiobjective management in this situation consists of "help" to
develop the manager himself by establishing a search-learning process
that enables him to focus on the necessary and sufficient set of
objectives that he and his organization should try to attain in order
to survive and expand.
Based upon a general search-learning model some empirical models are
developed in firms. The multiobjective point of view is an important
element herein.

1. The small firm

The typical small firm in Denmark averages about 15 employees. Seldom are we faced with more than about 35 employees, though of course firms with one, two or three employees are also found. But the typical small firm has about 15 employees.

It is characteristic that the manager is the owner (in at least 85 per cent of all these enterprises).
On average, turnover is between one and two million Danish kroner a year.
These figures are based upon the 135,000 firms registered for surplus tax, excluding agriculture and a few other trades. 95 per cent of these 135,000 firms are small in this sense; they account for 30 per cent of total turnover and employ 25 per cent of the work force.
The small firm is a dominant business organization in Denmark. Never-
theless we have not been very much concerned about the managerial
process in this type of organization.

2. The manager/owner/key person

One reason that a man owns his own firm is that he wants to control his own activities. He wants to be himself; he wants to be relatively independent. He also wants to make money and build capital. This is at least what people say when questioned as to why they have their own small firm instead of working for other organizations.
The owner is responsible according to the law. The firm's economy is identical to the owner's personal economy.
Several demands are made of the owner in his role as a manager. One demand is that he should oversee the total system. That is, he must know about purchasing, inventory, production, sales, finance, personnel, trade organizations, relations to public authorities, etc. This functional overview should be stored in his brain.
It is normally also required that he have technical insight into the products, the production processes, and many of the specific functions in the firm.
It is demanded that he be a technician, a lawyer, an economist, and an administrator, in spite of the fact that he normally has no specific knowledge of the normal functions in a firm; he is normally self-educated, often through an educational process based upon experience and upon a formal education in a handicraft.

It is also desirable that the manager has some views upon the future: foresight. According to the stakeholder model he should know what the employees, the customers, the vendors, the sources of finance, the trade organizations, the local public milieu and the local politicians will desire in the future.
The owner/manager is considered to be the key person in the firm, so to say the base of the firm.
If one looks upon the manager from a psychological point of view and uses a psychological model delineating motivational, emotional and cognitive properties, he should not be lacking in any of them. He should presumably be strong in the cognitive properties of knowledge, especially technical knowledge, and his power motivation (the desire to be on his own) and maybe also his achievement motivation should be relatively strong. We have found that a cognitive motive such as creativity seems to be of special importance.

If we look upon the key person as a "group-man", two groups in his near milieu seem to be of importance. One is his family and the other is the important employees in the shop. His family life with wife and children is very much work-oriented, and his relations to key employees are of decisive importance to how the firm can be run and developed.
It is quite clear that the owner/manager normally is aware of his central position, but it is also evident that he is not aware of playing a proper managerial role; he is not conscious of management as such.
In exhibit 2.1 a rough model is made of a specific managerial
behaviour. Along the vertical axis are placed three types of manage-
rial objectives: the desire to control operations under relatively
stable conditions, the desire to adapt the firm's structure and
function to changes in the environment, and the desire to develop the
firm, i.e. influence the environment in such a way that the firm's
ability to survive will increase.
At the horizontal axis we have placed managerial means, i.e. general
methods for problem solving. In general we solve problems by analysis/
synthesis, human interaction, and by creating a search-learning pro-
cess. In this context we will not elaborate upon the arguments for
this classification.

In exhibit 2.1 we have put catchwords to the managerial roles arising from the respective combinations.

                        Managerial objectives

  Development     Philosopher         Environment          Strategist
                                      sensor
  Adaptation      Resource            Politician           Creator of
                  allocator                                milieu
  Control         Administrator       "Advisor",           Seller of
                  (decision maker)    consultant           experimental
                                      (conflict solver)    ideas

                  Analysis/           Interaction          Search-        Managerial means,
                  synthesis                                learning       problem solving

Exhibit 2.1: Managerial roles

It is characteristic of the key person in the small firm that he solves his problems by "action" and not by any conscious problem solving methodology.
He is normally strong in operative matters, which means that he is a good administrator and maybe a proper decision maker.
He is to some extent conscious in tactical matters, helped by his registered accountant.
He is normally weak in strategic matters, and he lacks the ability to hire and use consultants.
Of course this is a stereotyped model of the managerial behaviour of the owner of a small firm, but nevertheless we have experienced something in this direction.
To express it in terms of exhibit 2.1, the key person is strongest in the lower left corner, but he is often also strong as an environment sensor.
This is at least the impression we get when we confront small business owners with this classification.
In conclusion, the central person plays a combination of a leader role, a problem solver role, and a consultant role, but usually he is quite unaware of these aspects of his daily life.
His managerial behaviour is very much controlled by environmental stimuli, his personal experience and his basic practical education.
Lack of consciousness about the managerial role is one main reason why the firm now and then approaches a crisis or a critical level.

3. The typical development of the firm and the role of the key person

The non-conscious management behaviour of the key person is also delineated in the typical development of a firm.
Often the small firm is started by the owner when he is in his late 20s or early 30s. He has a desire to be on his own, he is willing to run a risk, he starts, he is a pioneer, he has an idea of what he is going to make a living from. This is a state of development which in principle requires philosophizing and planning, learning what customers desire from the firm and its products, and strategic planning and management. Usually all this is done in one stroke by the starter: he has no formulation of objectives, he has no strategy, and his strength is identical to his weakness.
If it runs, the firm will grow from one or two persons to several hundred per cent more as far as employees are concerned. The firm is in a situation of adaptation to the environment, especially to the customers' liking of its products, and therefore the central person is faced with a growth process. This growth is again based upon ongoing innovation in products and markets according to spearhead theory. In this adaptation period the manager is using budgets, he produces ideas for product development, he has good customer relations, and he is aware of relations to workers' unions.
The third step is a consolidation period. Now the key person is administering his firm, he is trying to control daily operations, and he is trying to balance inducements and contributions from the stakeholders. He is buried in administration; he is interested in the physical production processes (the logistics in general). He is also aware of the demand for effectiveness, he is aware of the social-emotional relations to his employees, and he is aware that he must motivate his people.
In this state the firm runs the risk of facing a crisis, because the key person has forgotten how to "start" again and he has forgotten the hard process of adaptation. In the lucky cases a crisis will not occur, but the key person will feel problems which, according to exhibit 2.1, we can describe as control problems, adaptation problems, and development problems.
Stated in multiobjective language, the firm represented by the key person will desire to attain three sets of objectives: operational control objectives in stable environments, adaptation objectives according to changes in the requirements from the environments, and development objectives defined as the desire to change the environments in such a way that the firm can function better in its surroundings and is better shaped for its strategic function in society.
In order to describe the various problems we select a model of the firm according to which the problems are described as "internal". It is simply a man-machine-management systems model.

Re control
The problems on the man side of the man-machine-management system are social-emotional relations, awareness of effectiveness and personal motivation. A small firm should be looked upon, and function, as a normal human group, or maybe a few basic groups, and if these do not function according to what we know about group life, it causes problems.
On the machine side of the system we are faced with ongoing adjustments, with good or bad supervisorship by the key person, and with profit questions in relation to products, customers and capital allocation. We are faced with the normal internal information and communication problems, with co-operation with other firms and stakeholders, and with ongoing investment problems.
The key problem on the management side is non-awareness of the managerial role. The key person will often experience his own managerial problems as simple marketing problems.

Re adaptation
As far as adaptation is concerned, the main and overwhelming problem on the man side of the system is the generation shift. The tax laws make it difficult to transfer ownership in a way that does not require a lot of liquid capital.
On the machine side the problem is that state and local authorities pass new laws and regulations based upon what politicians think is useful, primarily for the workers and for people living in the local community. This happens to be expensive for the small firm. Examples are environmental requirements according to a new law on environment protection, and of course changes in the requirements for protection on the job, which also cause demands for structural changes in the firm. Otherwise the problems are caused by technological development and are experienced as "we are not able to catch up with development", which means no conscious product development; new products are based upon sporadic ideas and eventual co-operation with various vendors and industrial customers in their function as sub-contractors.
On the management side the small firm is especially vulnerable in relation to workers' unions, in relation to overcapacity created by new techniques and competition, and because it has no organized information for adaptation purposes. The key person usually talks with colleagues who know as little of what is going on as he does himself.

Re development
Development problems are primarily problems related to the managerial side of the man-machine-management system. It is a question of strategy: what is our business and what should it be, what are our strategic objectives, what are our strategic means, and what is the environment in which we are going to attain our strategic objectives, given our conception of our strategic means. Usually the key person is not at all able to answer these questions, and he is not able on his own to conduct either a qualitative strategic analysis or quantitative strategic planning.
The personal strengths and weaknesses of the key person are of decisive importance for the development. When he started he was strong; as he becomes older we are often faced with the fact that he is not apt to start over again. It is considered too expensive to hire "the right man" as a companion. Therefore a cheaper "mirror" of the owner is eventually hired.
Awareness of the manager role is important for managing development, insofar as the managerial role is defined as goal-directed interaction with other people.
Another difficulty for development is time and capacity, in terms of resources and motivation. Finally, we have found that it can be difficult for a person to make decisions alone and all the time be faced with responsibility for every main move taken by the firm.

These types of problems may, of course, also be found in bigger firms, but, nevertheless, in talking to small business people and working with them, this list happens to appear as a sort of check list of typical small business problems.
The question is now whether the problem solving process in the small firm is characterized by aspects other than what we usually consider proper means according to problem solving theory.

5. Problem solving in the small firm

Management of a firm generates an ongoing sequence of problems which must be solved in order to proceed in a goal oriented direction. According to exhibit 2.1 we have classified general problem solving in terms of analysing/synthesizing activity, human interactive behaviour, and search-learning processes.
Of course the key person in the small firm analyses and interacts all the time, but his problem is that he does not have the skill to interact with all sorts of people, and he does not have the skill to analyse and synthesize in every aspect of his firm's activity. Therefore he needs consultant help.
The main problem for the small business manager, as far as ongoing problem solving is concerned, is that he lacks the ability to use consultants. His experience with external consultancy is bad: they are too expensive, they are too inefficient, they are too wise, they are not able to communicate with him and his people. He has confidence only in his registered accountant, and the accountant's ability as a consultant is, as we know, limited.
Awareness of the consultant role is, therefore, a central aspect of improving problem solving in the small firm.
This also requires another attitude to problem solving, which could best be expressed as converting isolated problems into an ongoing search-learning process.
In exhibit 5.1 we have tried to define a search-learning process in terms of individual search and learning and collective search and learning.

[Diagram: on the individual side, search runs from the environment through the individual to a world conception, and learning is an analysis-synthesis process; on the collective side, search runs through action and groups, and learning links the management model, the consultants' models and the real system to model change and systems change]

Exhibit 5.1: A search-learning process

To put it briefly: individual search results in a sort of world conception, and individual learning is a new way to use analysis and synthesis in a mutual process. Collective search is the use of other people, among them the manager's consultants, in a search process, and collective learning means that the firm is used as an ongoing experiment, including the formal management of the firm, its consultants and its key persons.
We have elaborated on this search-learning model in exhibit 5.2. Based upon exhibit 5.2 we can construct a manager profile in terms of search-learning, as is done in exhibit 5.3.
In a study we have tried to characterize the small-firm manager in these terms, and as a control we have tried to delineate a sort of average profile for managers in big firms.
It is significant for the small-firm manager that his individual search is static and simplified, he learns by intuition, his collective search conserves his way of behaviour, and his collective learning is sporadic: he makes changes when the climate becomes too hot. As compared with this type of search-learning, the big-firm manager is more dynamic.
Our hypothesis is now that the behaviour of management in the small
firm can be made just as vivid by a conscious use of a consultant
function. This means that the small manager must be aware of how he
should be a consultant to his own people and how he should use external
consultants in such a way that they would be able to help him and his
people in getting along.

[Exhibit 5.2 is a table elaborating the four dimensions of the
search-learning model: individual search (static or dynamic;
simplified, usable, or realistic), individual learning (indoctrination,
adaptive, cumulative, or innovative learning), collective search (the
relations between the management group, the consultant group, and outer
groups, ranging from conserving to explorative search), and collective
learning (the relation between the management system and the real
system, characterized by the stimulus for change, model matching, and
promptness of reaction).]

Exhibit 5.2: Search-learning processes

[Exhibit 5.3 contrasts a small firm manager profile with a big firm
manager profile along the same four dimensions: individual search,
individual learning, collective search, and collective learning.]

Exhibit 5.3: Small firm and big firm manager profiles

If we consider management to be an ongoing process in which the
managerial role is combined with the problem solving role and the
consultant role, we see no principal difference between management
problems in the small and the big firm. The manager/key person/owner
in the small firm should simply become aware of these aspects of
behaviour and live them out in his own way in his own milieu.

Improvement of management in small business then simply requires an
educational programme which stresses these three aspects and which is
organized in such a way that the small manager and his firm, so to say,
sit in a school class together with other small firms, a class
conducted by a "teacher" who is able and willing to learn about small
business problems and to improve his own learning process by
participating in the management of the small firms, partly as manager,
partly as problem solver, and partly as consultant.

Preliminary results from sporadic experience with open business

schools look promising.

6. Conclusion on multiobjective management of a small firm

We have defined the managerial role as a goal directed interaction
with other people in terms of operational control, adaptation, and
development.

These three sets of objectives should be kept in mind all the time
and should be attained simultaneously or maybe in a sequence in which
different attention is paid to the various objectives. It could
be, for example, that the development objective requires more atten-
tion in a certain period than for example the adaptation objective.
It is characteristic that every owner/manager formulates his own
objectives when he is exposed to the question as to whether he has
objectives or not and when he is asked to formulate objectives.

Usually these objectives can be described as minimum requirements
and/or maximum tolerances: keeping within a set of "distances",
reaching one specific point or one specific vector, or keeping out of
some area.

We shall not in this context elaborate on the wide variety of concrete
managerial objectives in small firms; they are, so to say, condensed
in the three formulations: operational control, adaptation, and
development.

It is our experience that this brief formulation provides a stimulus

good enough for starting a conscious managerial process in the small
firm and it is also our experience that it is a proper first step
to take when one wants to help a small firm to improve its managerial
process in terms of a search-learning process.
The explicitly formulated set of objectives at the local firm is
necessary in order to learn, as it provides the basis for observations
as to whether one is on the right track, i.e. the track desired, on a
side track, or maybe not on any track at all.
Finally, it is characteristic also for the small firm that the objec-
tive perception is multidimensional and not unidimensional. When it
comes to the point "survival" and "making money and fortune", it is
expressed in a very personal way for the key person/manager/owner
and his sway group: his family.
In conclusion: the roads to goal attainment in the small firm seem
to be sporadic, unsystematic, random, and stereotyped. The key person
and his sway group lack the time to manage, lack knowledge and
consciousness about management, and the key person lacks a sort of
social (emotional) security; the responsibility is a one person thing
and not a group phenomenon.
In developing his own managerial skill a small manager could set up
objectives of overcoming these obstacles, thereby learning how to work
with objectives of operational control, adaptation, and development.


Donald L. Keefer

Gulf Management Sciences Group

Gulf Oil Corporation, Pittsburgh, PA

This paper describes a decision analysis approach to resource

allocation planning problems having uncertainties, multiple competing

objectives, organizational constraints, and continuous decision vari-

ables. Using decision analysis concepts, the resource allocation

planning problem is formulated as a nonlinear programming problem, the

objective function of which is the expectation of a multiattribute

utility function. It is shown that if a set of independence con-

ditions holds, the function can be decomposed into a relatively simple

form. This decomposition, together with the use of appropriate approx-

imations, significantly facilitates data acquisition in practice. The

application of this approach to two industrial planning problems is

described, and the reaction of management is discussed.

*Adapted from the author's Ph.D. dissertation in the Department of

Industrial and Operations Engineering of The University of Michigan,
where it was supervised by Professors Craig W. Kirkwood and Stephen M.
Pollock. Financial support for this work was provided by Whirlpool
Corporation and The Horace H. Rackham School of Graduate Studies.

1. Introduction

This paper presents an analytical approach to resource allocation

planning problems having multiple objectives and briefly describes its

application in two industrial budget allocation problems. A resource

allocation planning problem is an aggregate (i.e., macro or "big-

picture" oriented) planning problem in which scarce resources must be

allocated among competing activities. In such problems, management

often faces multiple competing objectives, especially if the group is

functionally oriented. For instance, the director of an R&D labora-

tory may want to strive for maximum performance in each of several

areas of responsibility such as new product research and technical

support. Frequently, performance measures with respect to some or all

of these objectives have not been quantified. Furthermore, even if

the contribution to corporate profitability is of primary importance,

it may be extremely difficult to express performance in terms of

dollars, since the relationship to corporate profits may be indirect

and confounded by many other factors. Significant uncertainties are

also usually present: i.e., the group's performance as a function of

the allocation decisions is not predictable with certainty. Finally,

there are generally organizational constraints which restrict the set

of feasible allocation decisions. In the case of an R&D director, for

example, political realities within the organization or the nature of

the work force may prohibit a major change in the budget allocation

for basic research -- at least over the planning period in question.

The approach presented here uses multiobjective decision analysis

to deal with the uncertainties and multiple objectives. Management's

relative preferences for multidimensional outcomes are quantified via


a multiattribute utility function, the expectation of which becomes

the objective function to be maximized. This objective function, along

with the organizational constraints, is expressed as a function of the

allocation decision variables, and a nonlinear programming formulation

results. The basic purpose of this paper is to describe a pragmati-

cally oriented approach to such problems based on the above concepts

and to illustrate its applicability in industry.

The next section briefly discusses previous work related to

decision making with uncertainties and multiple objectives. Section 3

presents the analytical approach employed here and the resulting

mathematical model. Section 4 outlines data acquisition and model

construction procedures, and Section 5 discusses characteristics of

solutions to the mathematical programming problem. Section 6 briefly

describes the two industrial applications. This presentation is

relatively compact; a more detailed account of this work is presented

in [12].

2. Related Previous Work

Management decision problems in general often involve uncertain-

ties, multiple competing objectives, and organizational constraints.

The literature from organization theory [3] points this out, as does

the management science literature [1,2]. However, most currently

available quantitative methods are not designed to deal with uncertain-

ties and multiple competing objectives simultaneously. Moreover, most

of the reported methods that do take multiple objectives into account,

with or without accompanying uncertainties, deal with tradeoffs among

these objectives in an ad hoc, approximate manner [2,11,20]. The form

of the functions used to represent the multiple objectives is often

postulated for convenience.

The approach described here proceeds more formally by verifying

conditions on the decision maker's preferences to establish the form

of the multiattribute utility function used in the subsequent analysis.

A number of other recent studies have also proceeded in this manner: e.g.,

references [5,8,16,18,21]. The present work differs from these studies

in focusing on resource allocation planning problems, which generally

have continuous decision variables. This leads to a number of data

acquisition and model construction issues not encountered when only a

relatively small number of discrete alternatives are of interest. In

addition, while most previous applications of multiattribute utility

theory have been in the public sector, this work was motivated

primarily by industrial problems.

3. Analytical Approach

The approach presented here uses concepts, techniques, and results

from decision analysis, particularly from multiattribute utility

theory. These have been described in detail elsewhere both in general

[10,19,25,27] and for specific applications [8,12,16,18]. Consequently,

their discussion here will be brief.

In what follows, we consider only the case in which a single

resource -- e.g., the operating budget -- is to be allocated among

competing activities. Conceptually at least, the approach can also be

used when multiple resources are involved, although data acquisition

and model construction would become more complex.


3.1 Mathematical Programming Formulation

With this approach a set of attributes, or measures of effectiveness,
R = {R1, R2, ..., Rn} is defined to measure the degree to which
the different objectives are met by a specific outcome r =
{r1, r2, ..., rn}. Hence r represents a specific value of R. Each

objective will in general have one or more attributes associated with

it, and the attributes may be quite dissimilar: e.g., dollars of

profit and product quality rating along a "constructed," or "subjective,"

scale. If certain "axioms of rational choice" are accepted (and most

people do find them acceptable as a basis for normative decision

making), then the feasible decision vector, or allocation policy, x

should be chosen that maximizes the expected utility:

E[u(r|x)] = ∫ u(r)f(r|x)dr, (1)

where f is the probability density function over R given a specific

policy x, and u is the multiattribute utility function. Thus, E[u(r|x)]

serves as an objective function that quantifies management's preferences

for outcomes, including those involving uncertainties and tradeoffs

among the attributes.

As stated earlier, there may be restrictions on the values of the

decision variables. One type of restriction consists of upper and

lower bounds on the individual decision variables: i.e.,

a ≤ x ≤ b, (2)

where a and b are vectors of lower and upper bounds, respectively. In

addition, there may be functional constraints involving several of the

decision variables simultaneously:

gk(x) = 0, k = 1, 2, ..., L1, (3)


gk(x) ≥ 0, k = L1 + 1, L1 + 2, ..., L2. (4)

Equations (2), (3), and (4) define the feasible region, and

equation (1) defines the objective function in a nonlinear programming

problem. Thus, the resource allocation planning problem is to find

the policy x* satisfying restrictions (2), (3), and (4) that maximizes

the expected utility given by (1).
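As a minimal numerical sketch of this formulation, consider a hypothetical two-activity budget split under an additive objective (the additive form is introduced in Section 3.2). The expected utility functions, scaling constants, and bounds below are invented for illustration; they are not data from the applications of Section 6:

```python
import math

# Hypothetical two-activity problem: x1 + x2 = 1 plays the role of the
# equality constraint (3), and 0.1 <= x1 <= 0.9 the bounds (2).
u1 = lambda x: 1 - math.exp(-3 * x)   # assumed expected utility from activity 1
u2 = lambda x: math.sqrt(x)           # assumed expected utility from activity 2
k1, k2 = 0.55, 0.45                   # assumed scaling constants (sum to 1)

def expected_utility(x1):
    # Additive objective: E[u] = k1*u1(x1) + k2*u2(x2), with x2 = 1 - x1
    # substituted from the budget constraint.
    return k1 * u1(x1) + k2 * u2(1 - x1)

# Crude one-dimensional grid search over the feasible interval; an actual
# analysis would use a nonlinear programming algorithm instead.
best_x1 = max((i / 1000 for i in range(100, 901)), key=expected_utility)
best = expected_utility(best_x1)
```

Because this toy objective is a sum of concave functions over a convex feasible set, it has a unique maximum; Section 5.1 explains why that need not hold in general.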

3.2 Independence Assumptions

The above formulation can only be put to use if the functions it

contains can actually be obtained. Assessing the multiattribute

utility function u(r) and the conditional joint density function f(r|x)

is greatly simplified if certain independence conditions hold.

In many resource allocation planning problems, some form of con-

ditional probabilistic independence assumption is appropriate. For

instance, performance along individual attributes may be independent of

performance along the other attributes, given a specified resource

allocation x. Furthermore, the marginal density function fi(ri|x) may

depend upon only one, or perhaps several, of the xi's rather than on

all of them. In an R&D environment, for example, performance in the

area of new product research may depend (probabilistically) only upon

the budget allocation for new product research. In fact, such inde-

pendence considerations are often taken into account in defining the

various allocation areas, or activities, in the first place. To express

this independence condition mathematically, let xi denote the set of
decision variables upon which fi(ri|·) is conditionally dependent.

Then the joint density function can be written as


f(r|x) = f1(r1|x1)f2(r2|x2) ... fn(rn|xn). (5)


Operationally, this assumption decomposes the task of assessing the

joint density function into that of assessing n marginal density func-

tions each of which is conditional only on its own subset of the

decision variables.

Assessment of the multiattribute utility function u(r) can also be

greatly simplified if certain independence conditions hold. In dis-

cussing these independence conditions, it is convenient to use the

following notation:

Rij = {R1, R2, ..., Ri-1, Ri+1, ..., Rj-1, Rj+1, ..., Rn}.

Then Ri is utility independent of the remaining attributes if
preferences for risky choices (lotteries) over Ri with the values of
the remaining attributes held fixed do not depend on those fixed
values. The set {Ri, Rj} is preferentially independent of Rij if
preferences for consequences differing only in the values of
Ri and Rj do not depend on the fixed values of Rij. Experience has

shown that these assumptions are often appropriate in practice, pro-

vided that the attributes are chosen judiciously -- e.g., see the
discussion in Keeney and Raiffa [19]. In particular, the following
theorem due to Keeney [17] was found applicable to the industrial problems of

Section 6 and is used in what follows:

Theorem 1. Let R1, R2, ..., Rn be the attributes of a decision problem
with n ≥ 3. If, for some Ri, {Ri, Rj} is preferentially independent
of Rij for all j ≠ i, and Ri is utility independent of the remaining
attributes, then

u(r) = k1u1(r1) + k2u2(r2) + ... + knun(rn) (6a)

or

1 + Ku(r) = [1 + Kk1u1(r1)][1 + Kk2u2(r2)] ... [1 + Kknun(rn)], (6b)

where u and the ui's are utility functions scaled from zero to one,
the ki's are scaling constants with 0 < ki < 1, and K > -1 is a nonzero
scaling constant.

The values of the ki's can be used to determine whether the
additive (6a) or multiplicative (6b) form of the multiattribute utility
function is appropriate and, in the latter case, the value of K. In
theory, if k1 + k2 + ... + kn = 1, then the additive form holds. In
practice, if this sum is close to one, more direct methods of
determining the appropriateness of the additive form may be used [17,19].
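When the ki's do not sum to one, setting every ui = 1 in (6b) shows that K is the nonzero root of 1 + K = (1 + Kk1)(1 + Kk2) ... (1 + Kkn) with K > -1. A small root-finding sketch, with scaling constants invented for illustration:

```python
def solve_K(k, tol=1e-12):
    """Solve 1 + K = (1 + K*k1)(1 + K*k2)...(1 + K*kn) for the nonzero
    root K > -1 of the multiplicative form (6b), by bisection.
    Returns None when sum(k) is essentially 1, in which case the
    additive form (6a) applies and no K is needed."""
    s = sum(k)
    if abs(s - 1) < 1e-9:
        return None

    def f(K):
        p = 1.0
        for ki in k:
            p *= 1 + K * ki
        return 1 + K - p

    # K > 0 when sum(k) < 1; -1 < K < 0 when sum(k) > 1.
    lo, hi = (1e-9, 1e6) if s < 1 else (-1 + 1e-9, -1e-9)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# Assumed scaling constants for a three-attribute illustration:
K = solve_K([0.4, 0.3, 0.2])   # sum = 0.9 < 1, so K > 0
```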

When (6a) or (6b) holds along with (5), the resource allocation
planning problem can be stated as

maximize E[u(r|x)] (7)

subject to (2), (3), and (4),

where E[u(r|x)] is given by either

E[u(r|x)] = k1u1+(x1) + k2u2+(x2) + ... + knun+(xn)

or

1 + KE[u(r|x)] = [1 + Kk1u1+(x1)][1 + Kk2u2+(x2)] ... [1 + Kknun+(xn)],

and

ui+(xi) = ∫ ui(ri)fi(ri|xi)dri, i = 1, 2, ..., n.

The function ui+(xi) defined above will be called the ith one-
dimensional expected utility function, abbreviated by odeuf, in what

follows. These n functions, together with the scaling constants, com-

pletely define the objective function in terms of the decision varia-

bles. Notice that the information required for each odeuf is independ-

ent of the other attributes; consequently, each can be constructed

independently. Data acquisition for, and construction of, the odeufs

is discussed in the next section.


The procedure developed by Keeney can be used to check the validity

of the conditions of Theorem 1 [12,16,17,19]. In the present context,

the important thing to note is that the procedure is operational --

i.e., the required questions can be cast in a form that is meaningful

to management.

4. Data Acquisition and Function Approximation

In theory, each marginal probability distribution fi(ri|xi)

appearing in (5) must be known for each value of xi. Although in some

applications it may be possible to construct a stochastic process

model to generate these distributions or to use available statistical

data, in many others the use of judgmentally assessed distributions

will be appropriate. Here we shall confine our attention to situations

in which the probability data is assessed judgmentally and each of the

subsets xi, i = 1, 2, ..., n, contains only one decision variable, as


is the case in the two budget allocation planning applications in

Section 6. Thus, the number of decision variables will be less than

or equal to the number of attributes. In such cases, some sort of
interpolation scheme is needed to define ui+(xi) for values of xi at
which fi(ri|xi) is not assessed. The nature of the interpolation

scheme, the number of values of xi at which fi(ri|xi) should be

assessed, and the type and amount of probability data to be assessed

at these xi values must be specified. The reader interested in a

general discussion of these issues in the context of resource alloca-

tion planning problems is referred to [12]. A short discussion is

given below.

4.1 Probability Data

In considering the amount of probability data to assess for

fi(ri|xi) at a specific value of xi, there is an obvious tradeoff

between the increased accuracy obtained from additional data and the

time and effort required to obtain it. Perry and Greig [24] suggest

that an essentially distribution-free three-fractile approximation for

the mean that was developed empirically by Pearson and Tukey [23] can

be useful for calculating expected utilities. In the context of

resource allocation planning problems, Keefer [12] shows that this

approximation for E[ui(ri|xi)] is an excellent one even when fi(ri|xi)

is significantly skewed and ui(ri) is quite nonlinear. The approxima-
tion to E[ui(ri|xi)], denoted by φi(xi), can be written as follows:

φi(xi) = .63ui(ri(.50|xi)) + .185[ui(ri(.95|xi)) + ui(ri(.05|xi))], (8)

where ri(α|xi) = Fi^-1(α|xi) represents the α fractile of the cumulative
distribution at xi, Fi(ri|xi). The accuracy of this approximation is

sufficient for any set of data likely to arise in practical resource

allocation planning applications; in many cases, even the utility of

the median provides an adequate approximation.

Thus, it is reasonable to obtain three fractiles per assessed dis-

tribution: namely, the .05, .50, and .95 fractiles. These fractiles

can be elicited from the decision maker (or his designated expert)

using standard decision analysis techniques which incorporate various

consistency checks [19,25,27,28].
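The three-fractile approximation (8) is then a one-line computation. In the sketch below, the conditional utility function and the fractile values are invented for illustration:

```python
import math

def pearson_tukey(u, r05, r50, r95):
    """Approximation (8) to the expected utility: weight .63 on the median
    and .185 on each of the .05 and .95 fractiles (the Pearson-Tukey
    three-point approximation applied on the utility scale)."""
    return 0.63 * u(r50) + 0.185 * (u(r05) + u(r95))

# Illustrative concave utility over an attribute ranging on [0, 10],
# scaled so that u(0) = 0 and u(10) = 1:
u = lambda r: (1 - math.exp(-r / 4)) / (1 - math.exp(-10 / 4))

# Invented .05, .50, .95 fractiles for one value of the decision variable:
phi = pearson_tukey(u, r05=2.0, r50=5.0, r95=9.0)
```

Note that for a linear utility function the formula reduces to the Pearson-Tukey estimate of the mean itself.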

The question remains as to how many distributions should be

assessed for each attribute, at what values of xi they should be

assessed, and what sort of interpolation should be used for the fractiles

at values of xi where assessments are not made. After examining a

class of polynomial interpolation formulas and various possible

spacings for the assessments, Keefer [12] suggests using quadratic

interpolation through fractiles assessed at each of the two extremes

of xi and at its mid-range. Upon adopting this recommendation, there

are nine fractiles to be assessed for each attribute: the .05, .50,

and .95 fractiles at ai, (ai + bi)/2, and bi.
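Under that recommendation, each assessed fractile can be carried to intermediate budget levels by Lagrange quadratic interpolation through the three assessment points (the interpolation formula (9) of Section 4.3); the fractile values below are invented for illustration:

```python
def quad_fractile(x, a, b, r_a, r_mid, r_b):
    """Quadratic (Lagrange) interpolation of an assessed fractile:
    r_a, r_mid, r_b are the fractile values assessed at x = a,
    x = (a + b)/2, and x = b, with h running from 0 to 1 across [a, b]."""
    h = (x - a) / (b - a)
    return ((1 - 3*h + 2*h*h) * r_a
            + (4*h - 4*h*h) * r_mid
            + (-h + 2*h*h) * r_b)

# Invented .50 fractiles of performance assessed at budget levels
# a = 1.0, (a + b)/2 = 2.0, and b = 3.0:
median_at_1_5 = quad_fractile(1.5, 1.0, 3.0, r_a=4.0, r_mid=5.5, r_b=6.2)
```

The weights reproduce the assessed values exactly at h = 0, 1/2, and 1, and are exact for any fractile that varies quadratically with the budget level.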

4.2 Utility Data

Given that the assumptions of Theorem 1 hold, the required pre-

ference information is contained in the conditional utility functions

ui(ri) and the scaling constants k i . The conditional utility functions

ui(ri) can be assessed using the standard lottery technique together

with appropriate consistency checks [19,25,27]. Depending upon the

characteristics of the curves obtained, it may be convenient to approx-

imate them with mathematical functions. However, this is not necessary

if the approximation procedure suggested in Section 4.3 is used.

The n scaling constants can also be assessed using methods des-

cribed in the literature [16,17,19]. Since the questions required to

do so are a bit more complex than those required to obtain the ui(ri)'s,

consistency checks [19] are particularly recommended here.

4.3 Odeuf Construction

Given that the fractiles of Fi(ri|xi) have been assessed at xi = ai,
xi = (ai + bi)/2, and xi = bi for quadratic interpolation and that
ui(ri) has been obtained, it is straightforward to construct the
one-dimensional expected utility function ui+(xi). The expression for
the α fractile becomes

ri(α|xi) = [1 - 3h + 2h²]ri(α|ai) + [4h - 4h²]ri(α|(ai + bi)/2)
           + [-h + 2h²]ri(α|bi), (9)

where 0 ≤ h(xi) = (xi - ai)/(bi - ai) ≤ 1.

Equation (9) can be used to obtain ri(.95|xi), ri(.50|xi), and
ri(.05|xi) for any xi such that ai ≤ xi ≤ bi, and these results can be
used in equation (8) to obtain φi(xi) as an approximation of ui+(xi).
The value of each odeuf ui+(xi) can be obtained in this fashion for each
point x of interest in solving the optimization problem (7).

While the above method of obtaining ui+(xi) is not difficult to

implement, it does require an analytical expression for ui(ri) in

order to be used conveniently. The successive segmentation procedure

suggested in reference [12] avoids this requirement by interpolating

over the expected utilities (or approximations thereto such as φi(xi))

rather than over the probability fractiles. In this procedure, ex-

pected utilities are obtained from equation (8) at ai, (ai + bi)/2,

and bi prior to any interpolation or optimization, and equation (9) is

then used to interpolate over these values of φi(xi) rather than over

the fractiles in order to approximate ui+(xi). Since this procedure is

an approximation to the more straightforward method, the deviation

between the two at intermediate test points -- e.g., at xi = (3ai + bi)/4

and at xi = (ai + 3bi)/4 -- should be checked. If the two are not in

satisfactory agreement, the original intervals can be divided into two
segments, and

the procedures can be compared again over each of these two segments,

with the interpolated probability fractiles at the midpoints of each

new segment treated as if they were assessed data. Very few if any of

these segmentations are usually required to obtain a convenient

expression for ui+(xi), since the conditions under which the two inter-

polation schemes agree are seldom violated by a great deal in practice.

Of the ten attributes in the two applications of Section 6, for

example, one subdivision of the original range was necessary for two

of the attributes, while no subdivisions at all were required for the

other eight. Obviously, many other types of interpolation schemes can

be devised for approximating the odeufs in specific situations.
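The agreement check behind the segmentation procedure can be sketched directly: interpolate the fractiles and then apply (8), or apply (8) at the assessment points and then interpolate the resulting expected utilities, and compare the two at an intermediate test point. All data below (the conditional utility function and the assessed fractiles) are invented for illustration:

```python
import math

def pt(u, f05, f50, f95):
    # Three-fractile approximation (8) to the expected utility.
    return 0.63 * u(f50) + 0.185 * (u(f05) + u(f95))

def quad(h, v_a, v_mid, v_b):
    # Quadratic interpolation (9) through values at h = 0, 1/2, and 1.
    return (1 - 3*h + 2*h*h) * v_a + (4*h - 4*h*h) * v_mid + (-h + 2*h*h) * v_b

u = lambda r: 1 - math.exp(-r / 4)   # assumed conditional utility function
# Invented (.05, .50, .95) fractiles assessed at h = 0, 1/2, and 1:
fr = {0.0: (1.0, 3.0, 6.0), 0.5: (2.0, 5.0, 8.0), 1.0: (3.0, 6.0, 10.0)}

h = 0.25  # intermediate test point, i.e. x = (3a + b)/4
# Method 1: interpolate each fractile, then apply (8) at the test point.
eu_fractiles = pt(u, *(quad(h, fr[0.0][j], fr[0.5][j], fr[1.0][j]) for j in range(3)))
# Method 2: apply (8) at the assessment points, then interpolate those values.
eu_direct = quad(h, *(pt(u, *fr[key]) for key in (0.0, 0.5, 1.0)))
deviation = abs(eu_fractiles - eu_direct)   # small -> no segmentation needed
```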

5. Solution Characteristics

A variety of nonlinear programming algorithms [7,9] are available

for solving the optimization problem (7). The optimization required

for the applications in Section 6 was readily accomplished using an

exterior penalty function method similar to that described in

references [6] and [14]. Each of these applications had relatively

simple feasible regions defined by the bounds on the decision variables

(2) and a single equality constraint (3). More complicated constraint

structures could, of course, make the solution of (7) more difficult.

5.1 Local Optima

Since the formulation (7) does not in general result in a convex

programming problem, there may be multiple local optima. With the

multiplicative form (6b) of the utility function, multiple local

optima may exist even if each of the odeufs ui+(xi) is concave and if

the constraints form a convex region; thus it is difficult to deter-

mine beforehand when the multiplicative formulation will have a

unique maximum. In practice, such local optima can usually be found

by initiating the optimization procedure from a number of different


starting points. While the existence of local optima is a nuisance

for the analyst, it is not necessarily a detrimental feature in appli-

cations: it may merely imply that several very attractive policies

(or policy directions) can be identified rather than just one. From a

policy implementation standpoint this provides greater flexibility,

which can be a definite asset in attempting to get a consensus within

an organization.

5.2 Sensitivity to Structural Assumptions

It is interesting to note the implications of assuming simpler

forms for the objective function in (7) than are actually appropriate

in a given application. For instance, if the constraints are linear

as in the two applications in Section 6, a linear programming formula-

tion results from assuming (i) the multiattribute utility function is

additive as in (6a), (ii) each conditional utility function ui(ri) is

linear, and (iii) the fractiles of each of the marginal probability

distributions Fi(ri|xi) vary linearly with xi. A linear programming

formulation does eliminate the need to worry about local optima,

except in the case where there are an infinite number of local optima

-- see, for example, [22]. However, this formulation also requires

that the optimum (if unique) lie at an extreme point (vertex) of the

feasible region. But in many cases the feasible region surrounds the

existing policy, which is therefore not an extreme point. Thus the

linear programming formulation may exclude management's existing policy

from the set of possible model solutions by its very structure, regard-

less of the data assessed subsequently -- certainly not a good selling

point for a model.


If only assumption (i) from above (additive utility function) is

made, optimal solutions are no longer limited to extreme points. More-

over, the problem may still be easier to analyze and solve than if the

utility function has the multiplicative form (6b). For instance, if

each of the odeufs is concave, then the objective function in (7) is

concave, and a unique maximum exists if the feasible region is convex.

However, the preference structure of the additive form cannot accommo-

date interactions among levels of the attributes; i.e., it is multi-

attribute risk neutral in Richard's terminology [26]. This implies

indifference between lotteries having the same marginal probability

distributions regardless of any differences between their joint

distributions. With the multiplicative form, the attributes can sub-

stitute for each other (K < 0) or complement each other (K > 0) -- see

Keeney and Raiffa [19]. Experience indicates that these sorts of

interactions are often present to a substantial degree in utility func-

tions elicited from actual decision makers.

Using simple two-dimensional problems, Keefer [12] has studied the

sensitivity of the optimal solution of (7) to the input data as well

as its sensitivity to assuming an additive or linear additive objective

function when the multiplicative form is actually correct. His basic

conclusion is that the sensitivity to data changes of reasonable magni-

tude is low to moderate, but that changes in the structural assumptions

can lead to dramatic errors in some cases. This conclusion is of

particular interest since sensitivity analyses have typically been done

on parameters within models and not as is suggested by this result on

structural assumptions inherent in the models. Thus the form of the

objective function in problems of this type may be more critical than


some authors suggest it is in other contexts -- see, for example,

Edwards [4].

6. Industrial Applications

This section presents very brief descriptions of the application

of this methodology to two annual budget allocation planning problems

within a major corporation -- one within a product engineering group

and the other within an R&D division. The two studies are described

more completely in [15] and [13], respectively, while [12] provides

much more detailed descriptions of both studies, including the basic
data.


6.1 Product Engineering Study

This product engineering department has engineering design respon-

sibility for several major product lines involving the same general

product type. Engineering effort is concentrated in the following

basic areas:

(1) Cost improvement: reducing the cost of the product.

(2) Quality: preventing and responding to field

incidence of product failure.

(3) New features and models: developing new features

for existing product models, periodically revising

the major model lines, and responding to requests

for special limited edition models.

The annual planning problem is to allocate the operating budget among

the three areas in order to do "as well as possible" in each. In

doing so, competing objectives, uncertainties concerning the actual

performance that would result from various allocations, and


organizational pressures from other groups in the corporation are im-

portant considerations. Due to these complexities, the allocation had

always been made by informal intuitive means.

The analysis used the methodology presented earlier. The depart-

ment director served as the decision maker and provided the necessary

judgmental information. The decision variables were the fractions of

the departmental operating budget to be allocated to the three areas

during the planning year. The only constraint was that these fractions

sum to one. Upper and lower bounds were provided for each of the

decision variables.
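The feasible region just described (three budget fractions, each within bounds, summing to one) is easy to explore numerically. The sketch below is purely illustrative: the text reports neither the director's bounds nor his assessed utility function, so the numbers and the concave placeholder objective are hypothetical stand-ins; only the structure of the constraints comes from the study.

```python
import math
import random

# Hypothetical bounds on the three budget fractions; the study's actual
# bounds and the director's assessed multiattribute utility are not given
# in the text, so everything numerical here is an illustrative assumption.
LOWER = {"cost": 0.20, "quality": 0.20, "features": 0.20}
UPPER = {"cost": 0.60, "quality": 0.50, "features": 0.60}

def feasible(x):
    """Fractions must respect the bounds and sum to one."""
    return (abs(sum(x.values()) - 1.0) < 1e-9
            and all(LOWER[k] <= v <= UPPER[k] for k, v in x.items()))

def utility(x):
    # Placeholder objective with diminishing returns in each area.
    return sum(math.sqrt(v) for v in x.values())

# Crude random search over the feasible slice of the simplex.
rng = random.Random(0)
best, best_u = None, float("-inf")
for _ in range(50_000):
    a = rng.uniform(LOWER["cost"], UPPER["cost"])
    b = rng.uniform(LOWER["quality"], UPPER["quality"])
    x = {"cost": a, "quality": b, "features": 1.0 - a - b}
    if feasible(x) and utility(x) > best_u:
        best, best_u = x, utility(x)
```

With this concave stand-in objective the search settles near an even split; the actual study instead maximized a formally assessed multiattribute utility function over the same kind of region.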

Four attributes were defined for this problem -- two in the cost
improvement area and one each in the quality and new features and

models area. The cost improvement attributes were expressed in terms

of dollars/unit and were designed to measure the current (same year)

and future impacts, respectively, of current budget allocations to the

cost improvement area. The other two attributes were "constructed,"

or "subjective," scales which ranged between 0 and 4 numerically. It

is worth noting that while some analysts understandably question the

usefulness of such scales in practice, in this instance they were

meaningful to the director and seemed to serve their purpose adequately.

Moreover, it is questionable whether the cost improvement scales,

which are necessarily based on somewhat arbitrary assumptions, are

really substantially less "soft" than the constructed scales.

The initial analysis was conducted with data for a "typical or

normal" year and indicated a significant shift in allocation policy

from what had been used in such years. The director regarded this

result as an important contribution to his planning insight. At his


request, an analysis was done using revised data that more nearly

represented his current (atypical) business environment. These results

tended to support his current allocation policy, which reinforced his

intuition that a relatively recent shift in policy had been made in

the proper direction.

In short, the director was well satisfied with the analysis. He felt

it had captured the essence of the problem; the assumptions were

reasonable and the data sufficiently accurate to be useful. It was

at least partially as a result of the success of this analysis that the

Research and Engineering planning study described below was requested

within the same corporation.

6.2 Research and Engineering Division Study

The Research and Engineering (R&E) Division primarily works to-

wards developing new or improved products and manufacturing processes

for the parent corporation. A limited amount of more fundamental

research is also done, and technical consulting is provided. For 1976,

the activities of R&E were classified into the following six "missions":

(1) current product R&D, (2) manufacturing process R&D, (3) new

business opportunities, (4) technological research, (5) continuing

(technical) support, and (6) other (good corporate citizenship and

people-oriented goals).

R&E is essentially funded by corporate money, and R&E management

has the basic responsibility for allocating scarce resources among

these missions. The "annual plan" for R&E budget allocation is used by

departmental management within R&E as a guide in determining their mix

of projects and thus plays an important role in resource allocation.

Development of this plan is a dynamic, iterative process involving a

number of management and staff personnel who work together to arrive

at a consensus. The lack of "hard" performance measures, together with

the difficulties associated with handling uncertainties and tradeoffs

among the objectives of the various missions, had always precluded

explicit quantitative consideration of output (performance) variables

and their relationships to the budget allocations and to each other.

Two directors representing the R&E management staff served as

decision makers in the analysis of how the 1976 operating budget of

R&E should be allocated among the six missions. The primary purpose

of the study was to evaluate the usefulness of the proposed methodology

in R&E planning activities, since the 1976 planning activity was

already well underway. In practice, no difficulty was created by

having two decision makers, since there was no significant unresolved

disagreement between them. All assumptions were verified and all data

were obtained in joint meetings requiring a total of about ten hours

for the entire study.

As was the case in the product engineering study, the decision

variables were the budget fractions to be allocated to the six missions,

or areas of responsibility. Again, the constraint that the sum of

these fractions must equal one, together with the upper and lower

bounds on the budget fractions, defined the feasible region. Con-

structed (subjective) scales were carefully defined in discussions

with the two directors to measure performance in each mission; thus,

a total of six attributes were defined -- one for each mission.

The results of the analysis again were of substantial interest to

management. The directors, of course, were interested in the


macroscopic aspects of the solution, not in fine-tuning adjustments.

While they were not particularly surprised that a shift from the

nominal allocation policy was indicated, the magnitude of the shift

and the stability of certain of its features as established by a

sensitivity analysis provided new insight.

Upon completion of the study, both of the directors were con-

vinced that the model developed could contribute to the R&E planning

activity. They felt it could be helpful in achieving a consensus by

stimulating communication and by facilitating rational analysis on the

part of those involved in R&E planning. Essentially, they felt the

model captured enough of the problem for its solution to be an impor-

tant input to the R&E planning process. They felt comfortable with its

basic assumptions and its data requirements. In short, the directors

viewed the model as the sort of decision aid for R&E planning they had

been seeking for some time.

7. Concluding Remarks

The work reported here indicates both the desirability and the

practicality of applying this type of multiobjective decision analysis

methodology to resource allocation planning problems in industry.

While the technical level required for the analyst in using multi-

attribute utility concepts, nonlinear programming, etc., is fairly high,

the types of questions to which decision makers must respond are not

overly complex and in many cases are worth asking even if no formal

analysis is to be done. Of course, this type of approach is not ideally

suited to all such problems, but it does seem to provide a viable

approach for a class of problems that in practice heretofore has been


dealt with largely by intuitive means.

References

[1] Baker, N. and Freeland, J., "Recent Advances in R&D Benefit
Measurement and Project Selection Methods," Management Science,
Vol. 21 (1975), pp. 1164-1175.
[2] Cochrane, J. L. and Zeleny, M., editors, Multiple Criteria Deci-
sion Making, University of South Carolina Press, Columbia, South
Carolina, 1973.
[3] Cyert, R. M. and March, J. G., A Behavioral Theory of the Firm,
Prentice-Hall, Englewood Cliffs, New Jersey, 1963.
[4] Edwards, W., "How to Use Multiattribute Utility Measurement for
Social Decisionmaking," IEEE Transactions on Systems, Man, and
Cybernetics, Vol. SMC-7 (1977), pp. 326-340.
[5] Giauque, W. C. and Peebles, T. C., "Application of Multidimen-
sional Utility Theory in Determining Optimal Test-Treatment
Strategies for Streptococcal Sore Throat and Rheumatic Fever,"
Operations Research, Vol. 24 (1976), pp. 933-950.
[6] Gottfried, B. S., Bruggink, P. R., and Harwood, E. R., "Chemical
Process Optimization Using Penalty Functions," Industrial &
Engineering Chemistry Process Design & Development, Vol. 9
( 1970), pp. 581-588.
[7] Gottfried, B. S. and Weisman, J., Introduction to Optimization
Theory, Prentice-Hall, Englewood Cliffs, New Jersey, 1973.
[8] Hax, A. C. and Wiig, K. M., "The Use of Decision Analysis in
Capital Investment Problems," Sloan Management Review, Vol. 17
(Winter, 1976), pp. 19-48.
[9] Himmelblau, D. M., Applied Nonlinear Programming, McGraw-Hill,
New York, 1972.
[10] Howard, R. A., "The Foundations of Decision Analysis," IEEE
Transactions on Systems Science and Cybernetics, Vol. SSC-4
(1968), pp. 211-219.
[11] Huber, G. P., "Multi-Attribute Utility Models: A Review of Field
and Field-Like Studies," Management Science, Vol. 20 (1974),
pp. 1393-1402.
[12] Keefer, D. L., "A Decision Analysis Approach to Resource Alloca-
tion Planning Problems with Multiple Objectives," Ph.D. Disser-
tation, Department of Industrial and Operations Engineering, The
University of Michigan, 1976. Available from University Micro-
films, Ann Arbor, Michigan.
[13] Keefer, D. L., "Allocation Planning for R&D with Uncertainty and
Multiple Competing Objectives," to be published in IEEE Trans-
actions on Engineering Management.
[14] Keefer, D. L. and Gottfried, B. S., "Differential Constraint
Scaling in Penalty Function Optimization," AIIE Transactions,
Vol. II (1970), pp. 281-289.

[15] Keefer, D. L. and Kirkwood, C. W., "A Multiobjective Decision

Analysis: Budget Planning for Product Engineering," to be pub-
lished in Operational Research Quarterly.
[16] Keeney, R. L., "A Decision Analysis with Multiple Objectives: The
Mexico City Airport," The Bell Journal of Economics and Manage-
ment Science, Vol. 4 (1973), pp. 101-117.
[17] Keeney, R. L., "Multiplicative Utility Functions," Operations
Research, Vol. 22 (1974), pp. 22-34.
[18] Keeney, R. L. and Nair, K., "Evaluating Potential Nuclear Power
Plant Sites in the Pacific Northwest Using Decision Analysis,"
Professional Paper PP-76-1, International Institute for Applied
Systems Analysis, Laxenburg, Austria, 1976.
[19] Keeney, R. L. and Raiffa, H., Decisions with Multiple Objectives,
Wiley, New York, 1976.
[20] Kirkwood, C. W., "Superiority Conditions in Decision Problems
with Multiple Objectives," IEEE Transactions on Systems, Man, and
Cybernetics, Vol. SMC-7 (1977), pp. 542-544.
[21] Krischer, J. P., "Utility Structure of a Medical Decision-Making
Problem," Operations Research, Vol. 24 (1976), pp. 951-972.
[22] Murty, K. G., Linear and Combinatorial Programming, Wiley, New
York, 1976.
[23] Pearson, E. S. and Tukey, J. W., "Approximate Means and Standard
Deviations Based on Distances Between Percentage Points of
Frequency Curves," Biometrika, Vol. 52 (1965), pp. 533-546.
[24] Perry, C. and Greig, I. D., "Estimating the Mean and Variance of
Subjective Distributions in PERT and Decision Analysis," Manage-
ment Science, Vol. 21 (1975), pp. 1477-1480.
[25] Raiffa, H., Decision Analysis, Addison-Wesley, Reading, Mass., 1968.
[26] Richard, S. F., "Multivariate Risk Aversion, Utility Independence
and Separable Utility Functions," Management Science, Vol. 22
(1975), pp. 12-21.
[27] Schlaifer, R., Analysis of Decisions under Uncertainty, McGraw-
Hill, New York, 1969.
[28] Spetzler, C. S. and Stael von Holstein, C.-A. S., "Probability
Encoding in Decision Analysis," Management Science, Vol. 22
(1975), pp. 340-358.

Ralph L. Keeney
Woodward-Clyde Consultants
San Francisco, California

Gary L. Lilien
Massachusetts Institute of Technology
Cambridge, Massachusetts


A model and procedure are proposed to help design and position

products which are characterized by a high level of consistency between
product preference and purchase behavior. The procedure is based on
utility theoretic concepts for assessing preference and inferring
probable behavior. A numerical example is included.

Firms continually face positioning and design issues related to

their products. The product design issue is essentially: "What
physical and psychological characteristics would I like my (new or
revamped) product to have either to maximize profit, market share, or
more generally, the firm's expected utility?" This product design
question is relevant to both new and existing products. The characteristics that can be controlled include price, several dimensions of
use or quality, packaging, flavor (perhaps), etc. The firm generally
has explicit control over physical quantities (price, taste, objective
performance) and implicit control, through message and communications
design, over the psychological quantities (a young-swinging beer, e.g.).
How should products be designed and how can those designs be
changed during a product's life cycle to meet a firm's objectives?
There have been two modes of analysis to help answer these questions.
One approach is that of the psychometricians. In particular, making
use of similarity data, Stefflre [13] developed "perceptual maps" of
market structures. He suggested introducing brands in areas of the
map which were relatively vacant. Other work has followed; a summary
of the multidimensional scaling literature is available in Green and
Carmone [4] and Green and Rao [5].

Another approach to the design question is provided by marketing

model builders. Many attempts have been made to establish functional
forms which relate product attributes, such as price, to output meas-
ures such as market share. Kotler [10] provides a review of much of
the literature.

A more recent, promising approach is one taken by Urban [14].

Urban uses psychometric techniques to map the space of consumers'
attitudes toward existing brands as well as an "ideal" brand. He
postulates that the farther a brand is from the ideal point, the lower
its market share should be. The procedure, which models the trial and
repeat-purchase processes separately, assumes that a brand's long run
market share is a parameterized quadratic function of its distance
from the ideal point. That function is then calibrated using actual market share data, and it has fit a number of new brands quite well. But, due to data
requirements, the model's use would seem to be limited to frequently
purchased products. Virtually all work in these areas has been limited
to frequently purchased products, mainly due to the greater avail-
ability of data. Thus, the problem of analytically positioning consumer durable products and most industrial products has been largely neglected.

An important difference between consumers purchasing packaged

foods and companies purchasing fabricated materials is that such
industrial purchasing behavior conforms more closely to prescribed
criteria (see Webster and Wind [15]). Thus one may "prefer" Krispy
Krakers to "Whole Oats" and purchase Whole Oats anyway due to "variety
seeking," availability, or other random behavior. The same incon-
sistency is not likely to occur in the purchase of hydrochloric acid,
where price and delivery terms are preeminent, or in the selection
by parents of a medical treatment for a baby's congenital defect.

The methodology suggested is designed to be used for precisely

those situations in which little data (of a repeat-purchase variety)
are likely to be available, but where the customers (individuals or
firms) purchase consistently with stated or inferred preferences.
The approach is based on utility theory (see Raiffa [11] for a dis-
cussion of the basic concepts of utility theory). It assumes customers
have a von Neumann-Morgenstern utility function defined over the pro-
duct variables -- that is, customers are expected utility maximizers
and "act as they should." Utility theory has been used in the past
mainly in a normative or prescriptive sense -- telling decision makers
what they should do in given circumstances. The market situations

considered here are, by definition, those in which individuals do what

they should.
Hauser and Urban [6] have developed a structure for models of
choice between finite alternatives. Their structure of the analytical
process of choice includes: (1) observation of behavior and measure-
ment; (2) reduction and abstraction -- reducing the number of product
dimensions to a few, "independent" ones and labelling them; (3) com-
paction, developing brand preference measures; (4) probability of
choice, relating preference to purchase behavior, and (5) aggregation,
transforming probability of purchase measures to market share measures.
The methods developed here suggest augmenting the observation
step, by measuring attitudes in face-to-face interviews, through
methods of direct utility function assessment, as in [1], [8], [9]. The
compaction operation then uses utility theory, and the assumption of a
von Neumann-Morgenstern utility function obviates the probability-of-
choice step. Aggregation is performed by taking explicit account of
consumer heterogeneity throughout the procedure.
This procedure would be useful in industrial purchasing situations where the purchasing agent, or a suitable surrogate with both explicit purchase criteria and bargaining power, can be isolated. The
purchase of graded goods (raw materials, fabricated materials, and
supplies) would be a common example.
The paper is organized as follows: Section 1 presents the formal
structure of the model and introduces notation. Section 2 develops
a general framework of analysis which is applied to a simple, hypo-
thetical example in Section 3.


1. Model Structure and Notation

Consider a well-defined product class with N firms F_1, ..., F_N in a single, specific market. Let m_i be the market share of F_i, and assume that each firm makes a single product.

Let the set of attributes X_1, X_2, ... completely characterize a product, where X_1 could be price, X_2 could be reliability ratings, etc. These attributes would be obtained by factor analysis of a series of well-defined product ratings, or perhaps by non-metric scaling procedures given a set of similarity judgments. Both methods have proven effective, though neither has established "superiority" (Green [3]). The output of these procedures, then, would be a reduced set of product characteristics X_1, ..., X_J. A specific level of X_j is x_j, so a product is completely described by x = (x_1, ..., x_J). The product of firm F_i will be denoted by x^i = (x^i_1, ..., x^i_J). A no-product purchase x^0 = (x^0_1, ..., x^0_J) could be included for completeness.
Customers will be designated by C_1, ..., C_K. It will be assumed that each customer C_k has a von Neumann-Morgenstern utility function u_k(x|λ), where C_k's utility function is specified by the set of parameters λ = (λ_1, ..., λ_R). Assuming each customer buys a product, utility theory suggests he should buy the product of the firm F_i for which his utility is maximized.
Since viewing the problem from the firm's point of view will require the same methodology regardless of the specific firm, let us take the viewpoint of firm F_1. Firm F_1 has certain objectives, which could include maximizing market share, maximizing profit, and so on. We postulate that the objective function of F_1 is also specified by a von Neumann-Morgenstern utility function v_1 over market share m_1, profit, and/or other variables.

There are uncertainties here for both firms and customers. The firm wants to know utility functions for all customers in the market. This information about customer heterogeneity will be expressed in the form of a probability distribution P_λ(λ) over the parameters λ describing a randomly selected customer's utility function. Thus, customers' utility functions are likely to differ, so λ does not take on a single value but, rather, is described by a probability distribution. A firm will not have perfect knowledge of the "true" P_λ(λ) and will, in general, attempt to estimate the distribution, entailing some error. Thus, we might consider parameterizing the distribution to give P_{λ|θ}(λ|θ), where θ = (θ_1, ..., θ_T) indicates the uncertainties of F_1 about the true distribution. We quantify that uncertainty by the probability distribution P_θ(θ).
Consumers in general will differ in their knowledge of, or attitudes about, the characteristics x^i of firm F_i's product. From F_1's perspective, this heterogeneity can be described by the parameterized probability distributions P_{x^i}(x^i|φ^i), i = 1, ..., N, where φ^i indicates a set of parameters for firm F_i. Again, F_1 may be unable to estimate these distributions without error, so uncertainty will exist, which we quantify as P_φ(φ), where φ = (φ^1, φ^2, ..., φ^N).
To summarize here, our model contains (from F_1's view):

1. An objective function v_1, which is a utility function, known with certainty.

2. A distribution of utility functions u_k(x|λ), which vary across the heterogeneous customer population, quantified by P_{λ|θ}(λ|θ). The probability distribution P_θ(θ) quantifies this uncertainty.

3. A set of distributions of product perceptions P_{x^i}(x^i|φ^i), i = 1, ..., N, also varying across the heterogeneous customer population. The probability distribution P_φ(φ) quantifies this uncertainty.


2. Framework of Analysis

In this section, we first consider the decision an individual con-

sumer must make and how his decisions are inputs to the decision-making
processes of the firm. Then we focus on how firms can use the model
for product positioning decisions.

2.1 Consumer Decisions

The consumer must decide which, if any, of the products in the market to buy, given that he will buy at most one. Thus we explicitly
consider the case of a consumer not purchasing any product. Another
possibility would have been to define consumers as those who will buy
one product and then formally include uncertainty about the number of
consumers in the market.

Our model does not explicitly include individual consumer un-

certainty about product characteristics. Rather, F_1 is uncertain both about the set of consumer utility functions and the set of consumer product perceptions. Explicit inclusion of consumer uncertainty would needlessly complicate the problem. We will assume that C_k has a utility function u_k(x) and that his choices are not to buy a product and receive x^0, or to buy the product of F_i and receive x^i, i = 1, 2, ..., N. He will choose the option x* giving him the highest utility, where x* is defined by

(1)  u_k(x*) = max u_k(x^i) over i = 0, 1, ..., N.

In the case of uncertainty, the consumer should choose the product of the firm F_i which maximizes his expected utility. If P^k_{x^i}(x) represents the judgment of consumer C_k about firm F_i's product, the option x* should be chosen such that

E[u_k(x*)] = max E[u_k(x^i)] over i = 0, 1, ..., N,

where E[u_k(x^i)] is the expected utility of the product of firm F_i (where F_0 designates the no-product option) for individual C_k.

2.2 Firm Decisions Under Certainty

Under certainty, firm F_1 should simply maximize its utility v_1. Here we assume the distribution of utility functions is known and that customers do not vary in their perceptions of brand characteristics (that is, λ and the x^i, i = 1, 2, ..., N, are known). The condition of certainty could be used as a first cut, as less information is required here to reach a product design decision. Firm F_1 has a product with characteristics x^1, and the population utility functions are represented by u_k(x|λ), where P_λ(λ) represents the population heterogeneity. From (1) it follows that a consumer with parameters λ will choose firm F_1's product if

(2)  u(x^1|λ) > u(x^i|λ),  i = 0, 2, 3, ..., N,

where we neglect ties among equally desirable alternatives. The proportion of consumers m_1 who choose x^1 is

(3)  m_1 = ∫_{Λ_1} P_λ(λ) dλ,

where Λ_1 is the set of λ's such that (2) holds. Neglecting ties, Λ_0, Λ_1, ..., Λ_N are mutually exclusive and collectively exhaustive, so Σ_i m_i = 1 as required.
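Equations (1)-(3) can be checked numerically. The sketch below is a hypothetical instance, not the paper's data: it assumes a single scalar attribute, the utility form u(x, λ) = λx - x² used in the Section 3 example, λ uniform on [0, 10], and made-up product positions, with x^0 = 0 as the no-purchase option; market shares are the fractions of sampled λ's choosing each option.

```python
import random

# Illustrative positions; i = 0 is the no-purchase option x0.
products = {0: 0.0, 1: 2.0, 2: 3.0, 3: 5.0}

def u(x, lam):
    # Assumed utility form (the one used in the Section 3 example).
    return lam * x - x ** 2

def choice(lam):
    """Eq. (1): each consumer picks the option with highest utility."""
    return max(products, key=lambda i: u(products[i], lam))

# Eq. (3) by Monte Carlo over the lam-population.
rng = random.Random(1)
draws = [choice(rng.uniform(0.0, 10.0)) for _ in range(100_000)]
shares = {i: draws.count(i) / len(draws) for i in products}
```

Under these assumptions the preference crossovers fall at λ = 2, 5, and 8, so the shares approach 0.2, 0.3, 0.3, and 0.2, and they sum to one as required.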

Suppose F_1 is considering changing its product position from x^1 to some new position. Then a new m_1 can be determined exactly as m_1 was, using (2) and (3).

Suppose a new firm with utility function v is trying to enter a volume-inelastic market with a maximum-profit product x̃; that is, say

v = m · G · (s - C(x̃)) - d(x̃),

where s is the unit product selling price, C(x̃) is the unit cost of the product x̃, G is the market volume, m is its market share, and d(x̃) is the fixed development cost (plant, R&D, etc.) associated with x̃. Then v is the profit associated with the new product. There will likely be a set Q of alternate product positions, generically denoted by x̃, that the new firm could attain. Given the existing products in the market, for each possible x̃ there is a set Λ(x̃) (perhaps null) defined by

(4)  Λ(x̃) ≡ {λ : u(x̃|λ) > u(x^i|λ), i = 0, 1, ..., N}.

The best decision for this firm is to choose x̃ in Q to maximize

(5)  v(x̃) = [∫_{Λ(x̃)} P_λ(λ) dλ] · [G · (s - C(x̃))] - d(x̃),

where the first term in brackets in (5) represents the market share of product x̃ and v(x̃) is the profit associated with the product. Under "nice" conditions it may be possible simply to differentiate v(x̃) with respect to x̃ to determine the product position that maximizes profit.
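The entrant's problem (4)-(5) can be sketched the same way. Everything numerical here is hypothetical: the same assumed utility form u(x, λ) = λx - x² with λ ~ U(0, 10), illustrative incumbent positions, and invented cost, volume, and price figures; the bracketed share integral in (5) is estimated by Monte Carlo.

```python
import random

INCUMBENTS = [0.0, 2.0, 3.0, 5.0]   # includes the no-purchase option x0 = 0
G = 1_000_000                        # assumed market volume
S = 10.0                             # assumed unit selling price

def share(x_new, n=20_000, seed=2):
    """Estimate of the bracketed integral in (5): P(lam in Lambda(x_new))."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(n):
        lam = rng.uniform(0.0, 10.0)
        u = lambda x: lam * x - x ** 2
        if u(x_new) > max(u(x) for x in INCUMBENTS):   # eq. (4)
            wins += 1
    return wins / n

def profit(x_new):
    c = 0.5 * x_new          # assumed unit cost C(x)
    d = 100_000 * x_new      # assumed development cost d(x)
    return share(x_new) * G * (S - c) - d              # eq. (5)

# Compare a few candidate positions in Q.
candidates = [0.5, 1.0, 1.5, 2.5, 4.0]
best = max(candidates, key=profit)
```

Under these made-up figures, positions between the no-purchase point and the cheapest incumbent capture the same share window, so the lower-cost candidates dominate; with real cost and preference data the trade-off in (5) would be assessed the same way.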

2.3 Firm Decisions Under Uncertainty

The problems under uncertainty are parallel to, but more complicated than, those under certainty. The two sources of uncertainty are the firm's imperfect knowledge about preference (or utility) heterogeneity, characterized by θ, and its imperfect knowledge about perceptual heterogeneity, characterized by φ^i, i = 0, 1, ..., N.

We will assume here, for simplicity, that perceptual heterogeneity and utility heterogeneity are independent within individuals. Fix the utility heterogeneity parameters at θ and select a customer, with utility function u(x|λ), at random. That customer's selection will be from the set of product characteristic distributions P_{x^i}(x^i|φ^i), i = 0, 1, ..., N. The product purchased should be the x* such that

E[u(x*|λ,φ)] = max E[u(x^i|λ,φ^i)] over i = 0, 1, ..., N,

where E[u(x^i|λ,φ^i)] indicates the expected utility of x^i, using u(x|λ), given λ and φ^i. That is,

(6)  E[u(x^i|λ,φ^i)] = ∫ u(x^i|λ) P_{x^i}(x^i|φ^i) dx^i.

As before, define Λ_i(φ) as the set of λ for which x* = x^i, where Λ is now dependent on φ. The expected market share of firm F_i, given that θ and the φ^i, i = 1, 2, ..., N, are known, is

m_i(θ,φ) = ∫_{Λ_i(φ)} P_{λ|θ}(λ|θ) dλ.

If F_1's utility function v_1 has market share as its argument, it should choose x^1 to maximize its expected utility,

E[v_1] = ∫∫ v_1(m_1(θ,φ)) P_θ(θ) P_φ(φ) dθ dφ,

where P_θ and P_φ describe the uncertainty in F_1's measures of consumers' preferences and perceptions, respectively.


3. An Example

A number of authors (see Rao and Shakun [12], Gabor and Granger [2], Kamen and Toman [7]) have suggested that, in certain product classes which offer nearly indistinguishable products (such as gasoline, packaged soaps, etc.), price, as an indicator of quality (and perhaps value), is the most important, if not the sole, determinant of purchase behavior. Without delving into this subject, we offer the following simple example in which a single product characteristic (say, price) distinguishes market products. Thus, the product characteristics are described by the single attribute X (price), and λ, θ, and the φ^i, for all i, are univariate. The general problem is tractable with the aid of a computer, perhaps through simulation if necessary, but a more complex example here would obscure the basic ideas of the method.
Let the set of utility functions of the consumers be

(10)  u_k(x|λ) = λx - x²,  k = 1, ..., K,

where x is the price of the product, and suppose the "true" distribution of λ among consumers is quantified by

(11)  P_λ(λ) = 1/10,  0 ≤ λ ≤ 10.

There are three firms F_1, F_2, and F_3, and the consumers are heterogeneous in their perceptions of the prices x^1, x^2, and x^3 as follows:

(12)  P_{x^i}(x^i|φ^i) = (1/√(2π)) exp(-(x^i - φ^i)²/2),  i = 1, 2, 3.

That is, the P_{x^i} are normal distributions with means φ^1, φ^2, and φ^3, respectively, and unit variances (see Gabor and Granger [2] for empirical justification using x^i = log price).

3.1 Consumer Decisions

Given the uncertainty encoded by (12), the expected utility of product x^i to consumer C_k is

(13)  E[u_k(x^i|λ)] = ∫ u_k(x|λ) P_x(x|φ^i) dx
                    = ∫ (λx - x²) (1/√(2π)) exp(-(x - φ^i)²/2) dx
                    = λφ^i - ((φ^i)² + 1),

where clearly φ^i is the mean value of x^i, or the mean perceived price; the last step uses E[x] = φ^i and E[x²] = (φ^i)² + 1 for a unit-variance normal. Let us assume that

(14)  φ^1 = 2, φ^2 = 3, and φ^3 = 5.

Now we want to find the values of λ for which x^1, x^2, and x^3 are preferred. From (13), it follows that x^1 is preferred to x^2 on average if and only if

λφ^1 - ((φ^1)² + 1) > λφ^2 - ((φ^2)² + 1),

which holds if and only if λ < φ^1 + φ^2. Similarly, x^1 is preferred to x^3 if and only if λ < φ^1 + φ^3, and x^2 is preferred to x^3 if and only if λ < φ^2 + φ^3. From this, since φ^1 < φ^2 < φ^3, we have

(15)  x^1 preferred iff λ < φ^1 + φ^2 = 5,
      x^2 preferred iff φ^1 + φ^2 < λ < φ^2 + φ^3,
      x^3 preferred iff λ ≥ φ^2 + φ^3 = 8.

The market share of firm F_1 should be

(16)  m_1 = ∫_0^5 P_λ(λ) dλ = ∫_0^5 (1/10) dλ = 1/2.

In an analogous fashion, m_2 = 3/10 and m_3 = 2/10.
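The shares in (16) are easy to check numerically using only the example's own ingredients: the closed form of (13), E[u_k(x^i|λ)] = λφ^i - ((φ^i)² + 1), the φ's of (14), and λ uniform on [0, 10].

```python
import random

PHI = {1: 2.0, 2: 3.0, 3: 5.0}   # mean perceived prices from (14)

def expected_u(phi, lam):
    # Closed form of (13): E[x] = phi and E[x**2] = phi**2 + 1 for a
    # unit-variance normal perception.
    return lam * phi - (phi ** 2 + 1.0)

def preferred(lam):
    return max(PHI, key=lambda i: expected_u(PHI[i], lam))

rng = random.Random(3)
draws = [preferred(rng.uniform(0.0, 10.0)) for _ in range(200_000)]
shares = {i: draws.count(i) / len(draws) for i in PHI}
```

The estimated shares converge to m_1 = 1/2, m_2 = 3/10, and m_3 = 2/10, matching (16).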

3.2 Firm Decisions Under Certainty

Let us take a look at one special problem here. Suppose a new firm F_4 were to introduce a new product to compete with x^1, x^2, and x^3. If the consumer distribution of x^4 is to be characterized by φ^4 and

(17)  P_{x^4}(x^4|φ^4) = (1/√(2π)) exp(-(x^4 - φ^4)²/2),

what is the best value of φ^4 (mean perceived price) to maximize m_4, the market share of firm F_4? (We are assuming firm F_4 accepts the other information in the problem, e.g., the P_{x^i}(x^i|φ^i).) From analysis similar to that leading to (15), it follows that if φ^4 < φ^1, then x^4 is preferred iff λ < φ^4 + φ^1, so m_4 = (φ^4 + φ^1)/10; if φ^1 < φ^4 < φ^2, then x^4 is preferred iff φ^1 + φ^4 < λ < φ^4 + φ^2, so m_4 = (φ^2 - φ^1)/10; if φ^2 < φ^4 < φ^3, then m_4 = (φ^3 - φ^2)/10; and if φ^4 > φ^3, then m_4 = 0, since x^4 would only be preferred if λ > φ^4 + φ^3 > 10, while λ ≤ 10. Thus, it is clear that under certain knowledge on the part of the firm, the optimal φ^4 subject to (17) is φ^4 + ε = φ^1 = 2 for some small ε. Then m_4 = (φ^1 + φ^4)/10 = (4 - ε)/10.
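The case analysis above amounts to a piecewise share function for the entrant, which makes the conclusion (price just under φ^1) easy to see; the sketch below only restates the example's φ = (2, 3, 5) and λ ~ U(0, 10).

```python
def m4(phi4):
    """Entrant's share as a function of its mean perceived price phi4."""
    if phi4 < 2:            # below every incumbent: wins lam < phi4 + 2
        return (phi4 + 2) / 10
    elif phi4 < 3:          # between phi1 and phi2: window width phi2 - phi1
        return (3 - 2) / 10
    elif phi4 < 5:          # between phi2 and phi3: window width phi3 - phi2
        return (5 - 3) / 10
    else:                   # above phi3: would need lam > phi4 + 5 > 10
        return 0.0

# The share rises toward 4/10 as phi4 approaches 2 from below.
```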
3.3 Firm Decisions Under Uncertainty

Return now to the three firms with three products described by (12), and suppose the class of consumer utility functions is given by (10). Suppose that firm F_1, whose point of view we will take, feels

(19)  P_{λ|θ}(λ|θ) = 1/θ,  0 ≤ λ ≤ θ,

but, due to the lack of available data, the firm's market research team feels there is some uncertainty about the true θ, which is characterized by

(20)  P_θ(θ) = 1/3,  9 ≤ θ ≤ 12.

Suppose that x^2 is the "standard product" in the market that everyone knows well, and φ^2 = 3. Our own product x^1 and the competitor's x^3 are newer, so they are subject to more consumer variability. So let φ^1 be uniformly distributed from 1 to 3 and let φ^3 be uniformly distributed from 4 to 6. If these perceptions are independent, we have

(21)  P_φ(φ^1, φ^3) = 1/4,  1 ≤ φ^1 ≤ 3, 4 ≤ φ^3 ≤ 6.

It is easy to see that, as before, φ^1 < φ^2 < φ^3, so, analogous to (15),

(22)  x^1 preferred iff λ < φ^1 + φ^2,
      x^2 preferred iff φ^1 + φ^2 < λ < φ^2 + φ^3,
      x^3 preferred iff λ ≥ φ^2 + φ^3.

Hence, to calculate the probability distribution of m_1 for F_1, we just calculate the probability that λ < φ^1 + φ^2. Note that the distribution of φ^1 + φ^2 is uniform from four to six, so if α = φ^1 + φ^2,

(23)  P_α(α) = 1/2,  4 ≤ α ≤ 6.

Refer to Figure 1, which shows the possible combinations of α and θ. From (20) and (23), it is clear that p(α,θ) = 1/6 on this rectangle.

Figure 1: the rectangle of possible (θ, α) combinations, 9 ≤ θ ≤ 12, 4 ≤ α ≤ 6.

Note that m_1 = α/θ, so at worst m_1 = α_min/θ_max = 1/3 and at best m_1 = α_max/θ_min = 2/3. Hence, for 1/3 < m < 2/3, the cumulative probability P(m_1 ≤ m) is obtained by integrating p(α,θ) over the appropriate regions of Figure 1; differentiating yields

(24)  P_{m_1}(m) = 0                 for m ≤ 1/3,
                   12 - 4/(3m²)      for 1/3 < m ≤ 4/9,
                   21/4              for 4/9 < m ≤ 1/2,
                   3/m² - 27/4       for 1/2 < m ≤ 2/3,
                   0                 for m > 2/3.

This probability distribution is shown in Figure 2. The expected market share is given by

(25)  E[m_1] = ∫_9^12 ∫_4^6 (α/θ)(1/6) dα dθ = (10/6) ln(4/3) ≈ 0.48.
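A Monte Carlo check of (23)-(25): draw α = φ^1 + φ^2 uniform on [4, 6] and θ uniform on [9, 12] independently, and average m_1 = α/θ; the double integral separates as (1/6)(∫_4^6 α dα)(∫_9^12 dθ/θ) = (10/6) ln(4/3) ≈ 0.48.

```python
import math
import random

# alpha = phi1 + phi2 ~ U(4, 6) and theta ~ U(9, 12), independent; m1 = alpha/theta.
rng = random.Random(4)
samples = [rng.uniform(4.0, 6.0) / rng.uniform(9.0, 12.0)
           for _ in range(400_000)]

em1 = sum(samples) / len(samples)               # Monte Carlo estimate of E[m1]
analytic = (10.0 / 6.0) * math.log(4.0 / 3.0)   # exact value of the integral
lo, hi = min(samples), max(samples)             # all samples lie in (1/3, 2/3)
```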

If firm F_1's preferences are quantified with the utility function v_1 over various market share levels, then one can simply use the probability distribution P_{m_1}(m) and v_1(m) to calculate the expected utility. Given a choice among options whose impacts are quantified by probability distributions over market share, firm F_1 should calculate the respective expected utilities and choose the option associated with the highest.

Figure 2: the probability distribution P_{m_1}(m) of (24), plotted against m = market share for m between 1/3 and 2/3.


4. Concluding Remarks

For a firm to utilize such a model, it would need to assess its utility function v and select parametric models for the population utility functions, their heterogeneity of preference P_{λ|θ}(λ|θ), and the perceptions P_{x^i}(x^i|φ^i), i = 1, 2, ..., N, of the products. A market analysis team can model these features, utilizing limited interviews with customers. Then the parameters θ and φ^i, i = 1, 2, ..., N, could be quantified in a consumer survey. The relative frequencies of responses could be smoothed to model the parameter distributions P_θ(θ) and P_φ(φ).

As presented, the procedure is in a conceptual, proposal-for-use state. The work of Hauser and Urban [6], using some related concepts, has shown promising results. It is, however, too early to assess the applicability of this methodology.


1. Fishburn, P. C. "Methods of Estimating Additive Utilities," Management Science, Volume 13 (1967), pp. 435-454.

2. Gabor, A. and C. W. J. Granger. "Price as an Indicator of Quality: Report on an Enquiry," Economica, Volume 33, No. 129 (February 1966).

3. Green, P. E. "Marketing Applications of MDS: Assessment and Outlook," Journal of Marketing, Volume 39, No. 1 (January 1975), pp. 24-31.

4. Green, P. E. and Frank J. Carmone. Multidimensional Scaling and Related Techniques in Marketing Analysis (Boston, MA: Allyn and Bacon, Inc., 1970).

5. Green, P. E. and Vithala R. Rao. Applied Multidimensional Scaling (New York: Holt, Rinehart and Winston, Inc., 1972).

6. Hauser, John R. and Glen L. Urban. "A Normative Methodology for Modeling Consumer Response to Innovation," Operations Research, Volume 25 (July-August 1977), pp. 579-619.

7. Kamen, Joseph M. and Robert J. Toman. "Psychophysics of Prices," Journal of Marketing Research, Volume VII (February 1970), pp. 27-35.

8. Keeney, R. L. "Multiplicative Utility Functions," Operations Research, Volume 22 (January-February 1974), pp. 22-34.

9. Keeney, R. L. and Raiffa, H. Decisions with Multiple Objectives (New York: John Wiley, 1976).

10. Kotler, Philip. Marketing Decision Making: A Model Building Approach (New York: Holt, Rinehart and Winston, 1971).

11. Raiffa, H. Decision Analysis (Reading, MA: Addison-Wesley, 1968).

12. Rao, Ambar G. and Melvin Shakun. "A Quasi Game Theory Approach to Pricing," Management Science, Volume 18, No. 5 (January 1972), Part II, pp. P-110-123.

13. Stefflre, Volney. "Market Structure Studies: New Products for Old Markets and New Markets (Foreign) for Old Products," in Frank Bass, Charles King and Edgar Pessemier, eds., Applications of the Sciences in Marketing Management (New York: John Wiley, 1968), pp. 251-268.

14. Urban, Glen L. "PERCEPTOR: A Model for Product Positioning," Management Science, Volume 21, No. 8 (April 1975), pp. 858-871.

15. Webster, Frederick E., Jr. and Yoram Wind. Organizational Buying Behavior (Englewood Cliffs, NJ: Prentice-Hall, 1972).

Craig W. Kirkwood
Woodward-Clyde Consultants
San Francisco, California 94111

Public sector decision makers are often concerned about the preferences of various groups that will be affected by their decisions. This paper discusses the use of utility theory for analyzing such situations. A conflict between Pareto optimality and certain equity considerations is identified. Also, a number of practical difficulties in applications work are discussed, and areas for further research are indicated.

Often decisions made by public sector decision makers have different impacts on various groups within society. For example, some of
the controversy over proposed federal energy policy involves this
difficulty. Taxes on large automobiles might have greater impact on
large families than small; also, they may lead to decreased automobile
construction which would hurt the economies of parts of the country
where this industry is centered. Similarly, natural gas price deregu-
lation might transfer income from gas consuming regions to those that
produce it.
Howard [5] has proposed the use of decision analysis in public
sector decision problems. He comments that this approach allows the
systematic, logical analysis of both uncertainties and preferences. In
this paper we examine the incorporation of the preferences of different
groups into a decision analysis for public sector decision making. We
show that there is no way to do this which is guaranteed to be both
efficient (in the sense that no other solution will make some group
better off without making another worse off) and equitable (i.e.,
considers the different impacts on various groups). In addition, a
number of practical difficulties that must be considered when incor-
porating the preferences of different groups into a decision analysis
are discussed. Some areas for further research are indicated.

Decision analysis provides a mathematical approach to the analysis
of decision problems [4, 13]. The method can be summarized as follows: the set of alternative decisions {a_1, a_2, ..., a_n} available is described, as well as the set of possible consequences {c_1, c_2, ..., c_m} of these alternatives. The uncertainties about which consequence will result from each alternative are summarized in probability distributions P_1(c), P_2(c), ..., P_n(c), where P_k(c) is the probability of consequence c given that a_k is selected. The decision maker's preferences for the various consequences are summarized in a von Neumann-Morgenstern utility function U(c). If the axioms of decision theory [12], which most people find compelling as prescriptive rules for decision making, are to be obeyed, then the alternative a_j which maximizes the expected utility

E_j[U(c)] = Σ_c U(c) P_j(c)   (1)

should be selected.
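The rule in (1) is easy to mechanize. The following sketch is purely illustrative; the alternatives, consequences, and square-root utility are hypothetical, not taken from the paper:

```python
def best_alternative(distributions, utility):
    """Return (name, expected utility) of the maximizing alternative a_j."""
    scores = {
        name: sum(p * utility(c) for c, p in dist.items())
        for name, dist in distributions.items()
    }
    return max(scores.items(), key=lambda kv: kv[1])

# Hypothetical consequences (in dollars) and their probabilities P_j(c).
alts = {
    "a1": {100: 0.5, 0: 0.5},  # risky alternative
    "a2": {45: 1.0},           # sure thing
}
u = lambda c: c ** 0.5         # a concave, i.e. risk-averse, utility

name, eu = best_alternative(alts, u)
print(name, round(eu, 3))  # a2 6.708 -- the sure thing wins under risk aversion
```

With a risk-neutral utility u(c) = c the risky alternative would win instead (expected value 50 versus 45), which is exactly the behavior the utility function U(c) is meant to capture.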
Decision analysis has been successfully applied to numerous deci-
sion problems. (See [2,6,7] for examples.) However, most of these
applications have not explicitly considered the situation where a pub-
lic sector decision maker is faced with groups that have differing
judgments about uncertainties and conflicting preferences. In such
situations it is not clear whose probabilities Pk(c) and utilities U(c)
should be used in the analysis. In this paper we will concentrate on
the determination of U(c).
It seems that in a democratic governmental system the "social"
utility function that is to be used for the decision analysis should be
constructed taking into account the differing preferences that exist.
Howard [5] suggests that
[some] body would be concerned with characterizing the
preferences of society. This body would be heavily
influenced by the desires of all citizens. Voting pro-
cedures could be developed for determining citizen
preferences. However, the values of society should
change slowly: major value changes would require the
same care currently devoted to a constitutional amendment.

Howard does not formulate this approach mathematically; however, a representative government approach implies that the social utility function should be a function of the utility functions of the society's citizens. One mathematical formulation of this is

U(c) = u[u_1(c), u_2(c), ..., u_N(c)]   (2)

where u_k, k = 1, 2, ..., N, is the utility function of the kth citizen in the society and u is an unspecified function.

In practice it may not be feasible to actually measure the u_k's directly using standard decision analysis techniques -- the time and resources required would be too great. Howard suggests that citizen preferences might be inferred by having votes on important social questions such as the value of life or of health. Kirkwood [11] suggests that the decision maker might consider the u_k's to be uncertain quantities and use decision analysis techniques to find probability distributions for them. These distributions could then be updated as additional data were obtained about the preferences of the various groups.

We shall show in the next two sections that, regardless of what

approach is taken to finding the preferences, as long as U is a func-
tion of the individual preferences uk' k = 1, 2, ... , N, as indicated in
(2) then the resulting social utility function will have characteris-
tics that may be undesirable in some circumstances.

A central concept in many studies of social decision making has
been Pareto optimality [3]. An alternative is Pareto optimal if there
does not exist another alternative that is at least as acceptable to
all society members and definitely preferred by some. The Pareto
Optimality Criterion specifies that in any social decision problem a
Pareto optimal alternative should be selected.
This criterion seems reasonable since if a Pareto optimal alterna-
tive is not selected then there is another choice which will be more
preferred by at least one person and not less by anyone.
There seems to be no reason not to select the alternative that makes
someone better off. However, imposing this condition restricts the
form of the social utility function as the following theorem shows.

Theorem 1. The only continuous social utility function U(c) = u[u_1(c), u_2(c), ..., u_N(c)] which results in decisions that obey the Pareto Optimality Criterion is

U(c) = Σ_k λ_k u_k(c)   (3)

where λ_k, k = 1, 2, ..., N, are arbitrary constants greater than zero.

Proof. Because of the cardinality property of von Neumann-Morgenstern utility functions [14, pp. 627-28] we can assume without loss of generality that each u_k is scaled such that individual k's least preferred consequence has u_k = 0 and his most preferred consequence has u_k = 1. Similarly, it can also be assumed without loss of generality that U is scaled so that U = 0 if every individual receives his least preferred consequence simultaneously.
The proof proceeds by considering certain specific alternatives
and showing that applying the Pareto Optimality Criterion to these
forces (3) to hold. First, let a_1(u_1, u_2, ..., u_N) be an alternative which has a probability 1/N of yielding a consequence which simultaneously has a utility u_1 for individual 1, u_2 for individual 2, etc., and a probability (N-1)/N of yielding a consequence with a utility of 0 for each individual. Let a_2(u_1, u_2, ..., u_N) be an alternative with a probability 1/N of yielding each of N different consequences, where the kth consequence has utility u_k for individual k and utilities of 0 for all the other individuals.

We will show that the social utility function must be indifferent between a_1 and a_2 for all u_1, u_2, ..., u_N. Suppose this were not true. Then there would exist at least one u_1, u_2, ..., u_N such that a_1 and a_2 are not equally preferred. Suppose a_1 is preferred to a_2. Then, since U is a continuous function, there exists an ε > 0 such that a_1(u_1-ε, u_2-ε, ..., u_N-ε) is preferred to a_2(u_1, u_2, ..., u_N). But this violates the Pareto Optimality Criterion, since every individual prefers a_2(u_1, u_2, ..., u_N) to a_1(u_1-ε, u_2-ε, ..., u_N-ε). Thus, if a_1 and a_2 are not equally preferred, a_2 must be preferred to a_1. But, since U is continuous, this would mean there exists an ε > 0 such that a_1(u_1+ε, u_2+ε, ..., u_N+ε) is less preferred than a_2(u_1, u_2, ..., u_N). But, by the same reasoning as above, this violates the Pareto Optimality Criterion. Thus a_1 is equally preferred to a_2 for all u_1, u_2, ..., u_N.
The expected utilities of the two alternatives are

E[U|a_1] = (1/N) u(u_1, u_2, ..., u_N)   (4)

and

E[U|a_2] = (1/N) Σ_{k=1}^{N} u(0, 0, ..., 0, u_k, 0, ..., 0).   (5)

Since they are equally preferred, their expected utilities must be equal, and (4) and (5) can be combined to yield

u(u_1, u_2, ..., u_N) = Σ_{k=1}^{N} u(0, 0, ..., 0, u_k, 0, ..., 0).   (6)


Now let a_3(u_k) be an alternative with a probability u_k of yielding a consequence with a utility of 1 for individual k and utilities of 0 for all other individuals, and a probability 1-u_k of yielding a consequence with utilities of 0 for all individuals. Finally, let a_4(u_k) be an alternative which is certain to yield a consequence with utility u_k for individual k and utilities of 0 for all other individuals. By exactly the same reasoning that was applied to alternatives a_1 and a_2, the Pareto Optimality Criterion requires that a_3(u_k) and a_4(u_k) be equally preferred for all values of u_k and for all k.

The expected utilities of these alternatives are

E[U|a_3] = u_k u(0, 0, ..., 0, 1, 0, ..., 0)   (7)

(where the kth entry in u is the only non-zero one) and

E[U|a_4] = u(0, 0, ..., 0, u_k, 0, ..., 0).   (8)

Since these must be equal, equations (7) and (8) can be combined to yield

u(0, 0, ..., 0, u_k, 0, ..., 0) = u_k u(0, 0, ..., 0, 1, 0, ..., 0).   (9)

Substituting from (9) into (6) yields (3) if λ_k = u(0, 0, ..., 0, 1, 0, ..., 0), with the 1 in the kth position. It is easy to show that the λ_k's must be greater than zero. Hence, the theorem is proved.

This result seems convenient since the Pareto Optimality Criterion

is very compelling and the resulting social utility function has a par-
ticularly simple form. However, as the next section shows, there are
undesirable properties of this utility function.


The discussion of equity is most easily introduced with a simple example. Suppose a public sector decision maker for a two-person society has a decision problem with two alternatives:

Alternative I: there is a 50-50 chance of either u_1 = 1 and u_2 = 0, or u_1 = 0 and u_2 = 1;

Alternative II: there is a 50-50 chance of either u_1 = 1 and u_2 = 1, or u_1 = 0 and u_2 = 0.

It is easy to show that each individual is indifferent between Alternatives I and II. Thus both alternatives are Pareto optimal. In a situation like this, where the individual members of the society do not care which alternative is selected, it seems that the decision might reasonably be made with some concern for equity. In particular, the equity of the various possible final consequences might be taken into account.

In this example, if Alternative II is selected either both individuals receive their more preferred consequence or both receive their less preferred. Regardless of what happens, the results are equal for both people. With Alternative I, however, there will always be an inequity in the consequence received -- one individual will receive his more preferred consequence while the other will receive his less preferred one. The egalitarian theories that underlie many modern democratic government operations might argue that such inequities are undesirable, and, hence, that Alternative II should be preferred.
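The point can be checked numerically. In the sketch below, the additive form (3) (with λ_1 = λ_2 = 1/2) cannot separate the two alternatives, while a min-based social utility — one illustrative equity-sensitive choice, not one proposed in the paper — strictly prefers Alternative II:

```python
# The two-person example above: each alternative is a lottery over
# joint utility vectors (u1, u2).
alt_I = [((1, 0), 0.5), ((0, 1), 0.5)]
alt_II = [((1, 1), 0.5), ((0, 0), 0.5)]

def expected(social_u, lottery):
    return sum(p * social_u(u1, u2) for (u1, u2), p in lottery)

additive = lambda u1, u2: 0.5 * u1 + 0.5 * u2  # form (3) with lambda_k = 1/2
min_based = lambda u1, u2: min(u1, u2)         # an equity-sensitive choice

# Any additive U of form (3) is indifferent between I and II ...
print(expected(additive, alt_I), expected(additive, alt_II))    # 0.5 0.5
# ... while the min-based U strictly prefers the equitable Alternative II.
print(expected(min_based, alt_I), expected(min_based, alt_II))  # 0.0 0.5
```

This is Theorems 1 and 2 in miniature: the equity-sensitive function can distinguish the alternatives precisely because it is not of the additive form that Pareto optimality forces.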

If the two individuals were not indifferent between the alterna-

tives then it would be more complicated to consider the equity issue.
However, in this paper we shall not need to consider that case. Fur-
thermore, we will not even need to say which of the two alternatives
above should be preferred but only that the social utility function
should make it possible to choose between them. This is formalized in
the Equity Criterion: even though each member of a society is indif-
ferent between two alternatives, it is possible that the social utility
function is not indifferent between them.

Notice that the Criterion does not force the social decision-making process to have a preference between any two alternatives which are equally preferred by each member of the society; it only states
that a preference is possible. However, the following theorem shows
that even this weak condition is not compatible with the Pareto Opti-
mality Criterion.

Theorem 2. There does not exist a continuous social utility function U(c) = u[u_1(c), u_2(c), ..., u_N(c)] which results in decisions that obey both the Pareto Optimality and Equity Criteria.

Proof. Suppose every individual in the society is indifferent between two alternatives a_1 and a_2. If the Pareto Optimality Criterion holds, then by Theorem 1 equation (3) holds, and the expected social utilities of the two alternatives are

E[U|a_1] = Σ_k λ_k E[u_k|a_1]   (10)

and

E[U|a_2] = Σ_k λ_k E[u_k|a_2].   (11)

However, since all individuals are indifferent between a_1 and a_2, the expected social utilities given by (10) and (11) are equal. Since this will be true for any a_1 and a_2 that all individuals are indifferent between, the Equity Criterion is violated.
The results in this section and the last are somewhat discourag-
ing -- the conflict between Pareto optimality and equity is not very
appealing. However, past research into aggregation of individual
preferences has often resulted in similar conclusions that it is impos-
sible to meet various different desirable criteria simultaneously.
(See, for example, Arrow's classic work [1].)

Suppose we are willing to use the weighted additive social utility function (3), or decide not to worry about Pareto optimality and choose to use some other function [9]. Then there are still difficulties remaining in trying to incorporate the preferences of various groups into a decision analysis. These are considered in the remainder of this paper.


Suppose the decision maker imposes the Pareto Optimality Criterion so that (3) holds, and suppose further that the individual utility functions u_k, k = 1, 2, ..., N, have been assessed. Then to complete the assessment of the group utility function it is still necessary to (i) decide on the scales for each of the u_k's and (ii) determine the scaling constants λ_k, k = 1, 2, ..., N.
Deciding on the scales for the u_k's is necessary since von Neumann-Morgenstern utility functions are cardinal measurement scales.
That is, as is true with temperature scales, a utility scale is only
determined to within a positive linear transformation for any utility
function. Thus, two constants need to be determined to completely
specify the scale of each function. In a social decision analysis the
setting of these constants is essentially carrying out an interpersonal
comparison of preferences. For example, if the consequences of inter-
est in a particular decision problem are monetary sums, each individual
in the society may have different preferences for specified monetary
sums. Setting the scales on each Uk will specify what the different
individuals' preferences are on a common measuring scale -- the utility

scale. Unfortunately, unlike the situation with temperature scales,

there is no external standard to which different peoples' utility
functions can be compared to set the scaling between them. Each per-
son's preferences are locked up in his own head and language is a
rather imperfect way of communicating what the preferences are.
There seems to be no way to completely resolve the problem of
interpersonal comparison of preferences. Reference [11] has suggested
that it might be handled in applications by having the decision maker
decide on consequences which he judges to be equally preferable for the
various society members. He would then set the utilities of these
consequences equal for the different individuals. For example, if
monetary sums were of interest he might decide that for a particular
decision problem each individual had the same preference for his own
current annual income. Thus, if x_k is the annual income of individual k, the u_k's would be scaled so that u_1(x_1) = u_k(x_k), k = 2, 3, ..., N.
As noted before, there are only two unspecified constants in each u_k, so it is only necessary to make the interpersonal preference comparison twice in order to completely determine the scales of the different u_k's. This method seems to be usable in practice; however, further research would be useful to simplify the work needed.
Determination of the scaling constants λ_k, k = 1, 2, ..., N, is complicated by the fact that the various values of the utility functions u_k, k = 1, 2, ..., N, have no direct physical meaning. Thus, the standard techniques used in multiattribute utility analysis to determine scaling constants [10] are difficult to apply, since they involve finding indifference relations between different points in the (u_1, u_2, ..., u_N) space. This is difficult to do if there is no direct physical meaning for the different u_k values. For example, if there are two individuals in the society, how do you decide whether (u_1 = .5, u_2 = .3) is preferred to (u_1 = .4, u_2 = .4) when it is not easy to see what the different u_k values mean?

One method is to use only the set of consequences that was used to determine the scales of the u_k's (as discussed above) when finding the λ_k's. The points in this set have well specified physical meanings; however, when they are used, the decision maker must assess his preferences among fairly complicated situations that involve uncertainty. (See Kirkwood [11] for details.) This is sometimes difficult, and further research to find easier ways of determining the λ_k's would be useful.


In the last section we assumed that the individual utility functions u_k, k = 1, 2, ..., N, were known (except for the interpersonal scaling problem). However, in practice it may be difficult to determine the utility functions. Utility assessment can be somewhat time
consuming, and if the utilities of a fairly large number of individuals
are of interest then the time and resources may not be available to
assess all the utility functions. Reference [11] has suggested that the u_k's might be treated as uncertain quantities if they could not be directly assessed, and the decision maker could assess probability distributions over them as he would for any uncertain quantity in a decision analysis. However, this is complicated by the fact that each u_k is a function. Rather than having only an uncertain numerical
quantity, which is the usual situation in decision analysis, we must
deal with an uncertain function. More formally, we are interested in a
random process rather than a random variable.


This paper has discussed the use of utility theory in social

decision analysis. It was shown that a number of theoretical and prac-
tical problems remain before utility theory can be used on a routine
basis. However, the approach appears promising and some applications
work has been carried out [8, 11].


1. Arrow, K. J., Social Choice and Individual Values, 2nd edition, Wiley, New York, 1963.

2. Hax, A. C. and Wiig, K. M., "The Use of Decision Analysis in Capital Investment Problems," Sloan Management Review, (Winter, 1976).

3. Henderson, J. M. and Quandt, R. E., Microeconomic Theory, McGraw-Hill, New York, 1958.

4. Howard, R. A., "The Foundations of Decision Analysis," IEEE Transactions on Systems Science and Cybernetics, Vol. SSC-4 (1968), pp. 211-219.

5. Howard, R. A., "Social Decision Analysis," Proceedings of the IEEE, Vol. 63 (1975), pp. 359-371.

6. Howard, R. A., Matheson, J. E. and North, D. W., "The Decision to Seed Hurricanes," Science, Vol. 176 (1972), pp. 1191-1202.

7. Keeney, R. L., "A Decision Analysis with Multiple Objectives: the Mexico City Airport," Bell Journal of Economics and Management Science, Vol. 4 (1973), pp. 101-117.

8. Keeney, R. L., "A Utility Function for Examining Policy Affecting Salmon on the Skeena River," Journal of the Fisheries Research Board of Canada, Vol. 34 (1977), pp. 49-63.

9. Keeney, R. L. and Kirkwood, C. W., "Group Decision Making Using Cardinal Social Welfare Functions," Management Science, Vol. 22 (1975), pp. 430-437.

10. Keeney, R. L. and Raiffa, H., Decisions with Multiple Objectives, Wiley, New York, 1976.

11. Kirkwood, C. W., "Decision Analysis Incorporating Preferences of Groups," Technical Report No. 74, Operations Research Center, Massachusetts Institute of Technology, Cambridge, Massachusetts, June, 1972.

12. Pratt, J. W., Raiffa, H. and Schlaifer, R., "The Foundations of Decision under Uncertainty: An Elementary Exposition," Journal of the American Statistical Association, Vol. 59 (1964), pp. 353-375.

13. Raiffa, H., Decision Analysis, Addison-Wesley, Reading, Massachusetts, 1968.

14. von Neumann, J. and Morgenstern, O., Theory of Games and Economic Behavior, 3rd edition, Princeton University Press, Princeton, New Jersey, 1953.

J. S. H. Kornbluth
Jerusalem School of Business Administration, The Hebrew University


In this paper we present an interactive method for ranking items subject to an (initially) unspecified linear utility function. The method is economical in the number of paired judgements that must be made by the decision maker, and leads to the identification of the desired ranking and the space of weights for the corresponding linear utility functions which would lead to this ranking. In simulation tests on random data it is shown that the number of comparisons that must be made at each stage tends to be less than n+1, where n is the number of criteria being considered.
In recent years much attention has been given to problems of multicriteria optimization, and in particular to the problems of finding efficient (undominated) solutions to the multi-objective linear programming problem (MOLP). See for example: Roy (1971), Philip (1972), Evans and Steuer (1973), and the comprehensive bibliographies in Cochrane and Zeleny (1973) and Zionts and Wallenius (1976a). The general aim of these approaches is to find the set of all the efficient solutions, which will include the decision maker's "preferred solution". Zeleny (1971) extends the analytical framework of MOLP by using the concepts of 'entropy' and the 'displaced ideal' to help in the search for the preferred feasible solution.

The general assumption in multiple objective analysis is the existence of an initially unspecified linear utility function, and one of the results of the analysis of the ultimate choice by the decision maker (DM) of a particular efficient solution is that we can estimate the weights that are appropriate for his linear utility function. In general these weights are not unique, but lie in a space bounded by linear constraints. See Kornbluth (1974).

The problem of assessing the weights of a decision maker in an unconstrained case has been given much attention in the psychometric literature. For example, Srinivasan and Shocker (1973a, 1973b) give an LP based method for the calculation of the ideal point and weights for a decision maker. The starting point for their analysis is a set of paired comparisons (on a forced choice basis). Ideally, for m items one would like to have m(m-1)/2 pairwise judgements, although the method can be applied where some judgements are missing. The LP is used to find the ideal point and weights which minimize the violations of the rank orders of the initial judgements.
In this paper we will develop an interactive method for ranking a given set of items with multiple attributes or scores which does not require the DM to make explicit assessments of marginal tradeoffs between scores. In so doing we will be able to estimate the space of weights which correspond to the DM's attitudes as revealed in the interactive analysis. We will also present some simulation results which suggest that the DM need only consider a small proportion of the total number of paired comparisons; thus the method offers a considerable saving in the amount of input required from the DM. Finally, we will suggest possible applications of the method and areas for further research. The initial exposition is made without proofs. Where necessary these are presented in the Appendix.


We assume that there are m items (projects, stimuli, etc.), each with n characteristics (scores, criteria, attributes, etc.), given by the vectors x = {x_ki; k = 1...m, i = 1...n}. We assume that the decision maker DM has (as yet unspecified) weights λ = {λ_i, i = 1...n} such that item p is preferred to item q if

λ . (x_p - x_q) > 0,  λ ∈ Λ = {λ | Σ_i λ_i = 1, λ_i > 0}   (1)

We will use the sign ≻ to designate preference; thus p ≻ q implies that item p appears before item q in the preference order.

Let Ω be a permutation of the numbers 1 to m, representing the present order of the m items. We assume that the set of items has already been arranged in an order Ω that is feasible (consistent) for some linear utility function, i.e., for some λ ∈ Λ we have:

λ . (x_Ω(j) - x_Ω(j+1)) ≥ 0,  j = 1...m-1   (2)

where Ω(j) is the item in the j'th position of the order Ω. The initial order Ω could correspond to some simple ranking by one of the n criteria, using a second criterion to break any ties, and so on. Finally, we assume that for all orders, any ties can be broken by slight perturbation. (See Appendix.)
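Condition (2) is straightforward to test for a candidate weight vector. The helper below is a sketch (0-based indices; the data are the seven-item example used later in the paper, and the particular weight vectors tried are arbitrary):

```python
def order_is_feasible(order, X, lam, tol=1e-12):
    """Check condition (2): lam . (x_Omega(j) - x_Omega(j+1)) >= 0
    for every adjacent pair of the order."""
    for a, b in zip(order, order[1:]):
        if sum(l * (xa - xb) for l, xa, xb in zip(lam, X[a], X[b])) < -tol:
            return False
    return True

# Seven-item example data (items 1..7 -> indices 0..6):
X = [(8, 1, 4), (6, 4, 2), (5, 8, 0), (4, 0, 6), (3, 2, 6), (2, 7, 1), (2, 4, 3)]

print(order_is_feasible(list(range(7)), X, (0.50, 0.25, 0.25)))  # True
print(order_is_feasible(list(range(7)), X, (1/3, 1/3, 1/3)))     # False
```

Note that the natural order 1, 2, ..., 7 is feasible for some λ but not for equal weights: with λ = (1/3, 1/3, 1/3), item 3 scores above item 2, so feasibility of an order always means "for some λ ∈ Λ", as in (2).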

Let Λ_Ω = {λ | λ ∈ Λ, λ satisfies (2)}. If Ω is the ordering preferred by the DM (or accepted by him as "correct"), he can be "indifferent" to the use of any λ ∈ Λ_Ω as the weights for a linear utility measure, since any such λ will produce the order Ω. See Kornbluth (1974). Conversely, if λ ∈ Λ_Ω is acceptable to the DM as the appropriate set of weights for such a utility function, he should accept the order Ω as his preferred ordering. The purpose of this paper is to present a method whereby the DM can progressively change Ω and approach his desired order Ω* and the associated weight space Λ_Ω* ⊆ Λ.

We note that the set of all the feasible orderings {Ω} induces a partition of Λ, i.e.:

Λ = ∪_Ω Λ_Ω

and that in defining Λ_Ω it is only necessary to consider constraints formed by adjacent pairs of elements in Ω. The remainder are trivially satisfied. Each set Λ_Ω is determined by the set of linear constraints (2) -- and in particular by the tight constraints, for which equality can hold in (2).

Assume that the items k, ℓ are adjacent in the current order Ω, and that

λ . (x_k - x_ℓ) ≥ 0   (3)

is a tight constraint. The operation of switching the order of k, ℓ in Ω to ℓ, k in Ω' say is equivalent to moving from the space Λ_Ω to an adjacent space Λ_Ω', across the boundary determined by (3). If Ω is a feasible order then Ω' will also be feasible (see Appendix). The method suggested for analysing the DM's preferences and for determining Ω* is as follows:

i. identify an initial feasible ordering (say a lexicographic ordering, or equal weights on all criteria);

ii. identify the binding constraints of Λ_Ω;

iii. present the DM with the list of pairs in Ω which determine the boundary of Λ_Ω (and their associated characteristics);

iv. stop if this order is accepted by the DM, otherwise

v. ask the DM to switch one pair from the set presented in iii. and go to step ii.

Using this method, the amount of material that the DM need review at each stage is kept to a minimum. He need only inspect the binding constraints and the associated pairs of elements. He need not review pairs of elements whose position in the ordering is determined automatically by other (binding) pairs. The pairs that are automatically ordered correspond to the slack constraints of Λ_Ω, and will play no part in the present stage of the analysis.

It should also be noted that the method maintains feasibility throughout the analysis. Starting from an initially feasible order we move via feasible orders until the desired end-point is reached; thus we avoid any paired judgements which might cause violations in the order.

The Identification of Λ_Ω:

Given the feasible order Ω, let A_Ω be the matrix whose rows A_Ω(i) are given by x_Ω(i) - x_Ω(i+1). The closure of Λ_Ω is the set of λ such that:

A_Ω . λ ≥ 0   (4.1)

Σ_i λ_i = 1   (4.2)

λ_i ≥ 0   (4.3)

The boundary of Λ_Ω is determined by those rows of (4.1) which are tight (active) constraints for some value of λ. For such rows the value of min A_Ω(i) . λ subject to (4.1) - (4.3) is zero. Since we expect that m will in general be greater than n, it is more convenient to consider the dual problem:

max p

μ . A_Ω + 1 . p ≤ c

μ ≥ 0, p unconstrained   (5)

where 1 is a column with unit entries and c is a row of A_Ω, say A_Ω(i).

By LP duality, if (5) has a feasible solution for which p* = 0, then the row A_Ω(i) represents a binding constraint for Λ_Ω and the associated pair (Ω(i), Ω(i+1)) represents a binding pair. Furthermore, a strictly positive value for any variable μ_k at an optimum of (5) implies that the row constraint A_Ω(k) . λ ≥ 0 is binding for Λ_Ω. This fact can be used to reduce the number of dual problems that need to be solved in order to determine the set of binding pairs of elements in Ω. Once a column k has been identified as being basic at an optimal solution we need not solve (5) using c = A_Ω(k). Assuming that m >> n, the dual problems will have many more columns than rows and their solution will, in general, be relatively simple. (For alternative approaches to the problem of identifying efficient vectors see Zionts and Wallenius (1976b).)

Consider a list of seven items, each with three characteristics (m = 7, n = 3).

item   characteristics
1      8 1 4
2      6 4 2
3      5 8 0
4      4 0 6
5      3 2 6
6      2 7 1
7      2 4 3

Ω_a = (1,2,3,4,5,6,7) is a feasible order. The associated A matrix, with rows x_Ω(i) - x_Ω(i+1), is:

        ( 2 -3  2 )
        ( 1 -4  2 )
A_Ωa =  ( 1  8 -6 )
        ( 1 -2  0 )
        ( 1 -5  5 )
        ( 0  3 -2 )
In order to identify the space Aa we solve the series of problems:

    max p

     2μ1 +  μ2 +  μ3 +  μ4 +  μ5       + p ≤ c1
    -3μ1 - 4μ2 + 8μ3 - 2μ4 - 5μ5 + 3μ6 + p ≤ c2
     2μ1 + 2μ2 - 6μ3       + 5μ5 - 2μ6 + p ≤ c3

    μ_i ≥ 0, i = 1...6, p unconstrained

where c = (c1, c2, c3) runs over the rows of A_a in turn.
For c = (2,-3,2) the solution is p* > 0. μ2 and μ4 are basic (and strictly positive),
implying that the pairs (2,3), (4,5) are binding. The pair (1,2) is not binding; it
is a redundant constraint. Since (2,3) is already identified as a binding pair we
need not solve for c = (1,-4,2).

For c = (1,8,-6) the solution is p* > 0. μ4 is again basic, etc. The binding
constraints are:

    (2,3), (4,5), (5,6), (6,7)
Given these four preferences, 2≻3, 4≻5, 5≻6, 6≻7, the remaining orderings 1≻2 and
3≻4 are automatically determined. In Fig. 1 we give a representation of Λ_a drawn
onto the triangle Λ = {λ | Σλ_i = 1, λ_i ≥ 0}. Each constraint is marked with the
corresponding binding pair; each element of the pair is marked on the side of the
constraint on which it is preferred to the other. Thus for the constraint (6,7)
item 6 is preferred to item 7 in any area to the left of the constraint; item 7 is
preferred to item 6 in any area to the right. The area Λ_a is bounded by the four
constraints (2,3), (4,5), (5,6), (6,7) and is marked 'a' on the diagram.

Figure 1: Λ spaces for some feasible orders (only tight constraints included)

Let us suppose that the DM is unsatisfied with the relative positions of items
2 and 3, and suggests that item 3 should be ranked above item 2. The change of
order from Ω_a = (1,2,3,4,5,6,7) to Ω_e = (1,3,2,4,5,6,7) implies a move across the
constraint (2,3) to the space Λ_e. For the order Ω_e, the binding preferences are
(3,2), (4,5), (5,6).


One of the major problems in any analysis of preference or utility is the number
of comparisons that must be made by the DM in order to be able to calculate
appropriate weights. For methods based on paired comparisons of m items, m(m-1)/2
comparisons are ideally required. In sorting methods using simple linear algorithms
such as the standard exchange method ("Bubble Sort") the user can expect a maximum
of m²/2 comparisons (exchanges) and an average of m²/4. (For details of computer
sorting methods see Knuth (1973) or Lorin (1975).) In the method presented above we
assume that there is an initial feasible ordering of all the items. At each stage
the DM is presented with the binding pairs, whose present paired order determines
the complete order. A switch of one such pair creates another feasible order, and
the DM is presented with the next set of binding pairs. However, it remains to be
seen whether the effect of increasing m and n increases the number of paired
judgements beyond those that the DM can be reasonably expected to make.
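The exchange-sort bound quoted above is easy to demonstrate: a reverse-ordered list of m items forces m(m-1)/2 comparisons and the same number of exchanges, close to the m²/2 figure. A minimal sketch:

```python
def bubble_sort_stats(a):
    # Standard exchange sort; returns (sorted list, comparisons, exchanges).
    a = list(a)
    comparisons = exchanges = 0
    for i in range(len(a) - 1):
        for j in range(len(a) - 1 - i):
            comparisons += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                exchanges += 1
    return a, comparisons, exchanges

m = 10
_, comps, exch = bubble_sort_stats(list(range(m, 0, -1)))  # worst case: reversed
print(comps, exch)   # both m(m-1)/2 = 45
```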
The simulations were carried out for m = (3,4,5,6,8,10,12,14,16,20) and for
n = (3,4,5,6). In each case the basic data was generated using random numbers
between 0 and 1. The matrix A was formed by appropriate subtractions. In Fig. 2 and
Table 1 we present the estimate of the number of feasible orders for m = 5,6,7,8,9
and the proportion of feasible orders to the total possible number of orders (m!),
for the case where n = 3.

          mean no. of        proportion of
          feasible orders    feasible orders
    m          (k)               (k/m!)

    5         19.2              0.16
    6         40.0              0.043
    7         43.3              0.0086
    8         90.7              0.0023
    9        259.5              0.00072

Table 1: Number and Proportion of Feasible Orders of m Items
         with 3 Characteristics
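For small m these counts can be reproduced in spirit by brute force: test every permutation for feasibility with one LP each. The sketch below (NumPy/SciPy assumed; the data sets are illustrative, not the paper's random draws) also shows the extreme case of a strict componentwise domination chain, which admits exactly one feasible order.

```python
import numpy as np
from itertools import permutations
from scipy.optimize import linprog

def is_feasible_order(items, order):
    # An order is feasible iff some lam >= 0 with sum(lam) = 1 satisfies
    # (x_order[i] - x_order[i+1]).lam >= 0 for every adjacent pair,
    # i.e. conditions (4.1)-(4.3).
    X = np.asarray(items, dtype=float)
    A = np.array([X[order[i]] - X[order[i + 1]] for i in range(len(order) - 1)])
    n = X.shape[1]
    res = linprog(np.zeros(n), A_ub=-A, b_ub=np.zeros(len(A)),
                  A_eq=np.ones((1, n)), b_eq=[1.0], bounds=[(0, None)] * n)
    return res.status == 0          # status 0: a feasible lam exists

def count_feasible(items):
    return sum(is_feasible_order(items, p)
               for p in permutations(range(len(items))))

# A strict domination chain admits exactly one feasible order ...
print(count_feasible([[3, 3, 3], [2, 2, 2], [1, 1, 1]]))    # -> 1
# ... while the first five items of the example admit several of the 120.
five = [[8, 1, 4], [6, 4, 2], [5, 8, 0], [4, 0, 6], [3, 2, 6]]
print(count_feasible(five))
```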

Figure 2: Estimate of the number N and proportion P of feasible orders


Since the proportion of feasible orders is very small, the analysis of Λ_Ω was
carried out as follows:
1) Starting from a feasible (lexicographic) order Ω_1, the tight constraints
of Λ_1 were evaluated.
2) A random choice was used to move to an adjacent space. The new space was
analyzed and a further random choice was made.
At each stage we noted the number of binding constraints for each order and
the number of LP's that needed to be solved in order to identify the space. (The
number of binding pairs corresponds with the number of hyperplanes which bound the
space Λ_Ω.)
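One step of this random walk can be sketched as follows (NumPy and SciPy assumed; the data and seed are invented for illustration). Each step identifies the binding adjacent pairs of the current feasible order by LP and switches one of them at random; by Lemma 2 of the Appendix the result is again a feasible order, which the sketch re-checks.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)

def diff_rows(X, order):
    # Rows x_order(i) - x_order(i+1) of the matrix A for this order.
    return np.array([X[order[i]] - X[order[i + 1]] for i in range(len(order) - 1)])

def lp_min(A, c):
    # min c.lam over {lam | A lam >= 0, sum(lam) = 1, lam >= 0}; None if empty.
    res = linprog(c, A_ub=-A, b_ub=np.zeros(len(A)),
                  A_eq=np.ones((1, A.shape[1])), b_eq=[1.0],
                  bounds=[(0, None)] * A.shape[1])
    return res.fun if res.status == 0 else None

def walk_step(X, order):
    # Binding pairs are the rows whose minimum over the space is zero;
    # switching one of them moves to an adjacent feasible order.
    A = diff_rows(X, order)
    binding = [i for i in range(len(A))
               if (v := lp_min(A, A[i])) is not None and v < 1e-6]
    if not binding:                     # domination order: nothing to switch
        return list(order)
    i = int(rng.choice(binding))
    new = list(order)
    new[i], new[i + 1] = new[i + 1], new[i]
    return new

# Random data as in the simulations (m = 6, n = 3).  Sorting by the first
# characteristic gives a feasible start: lam = (1, 0, 0) satisfies (4.1)-(4.3).
X = rng.random((6, 3))
order = sorted(range(6), key=lambda i: -X[i][0])
for _ in range(5):
    order = walk_step(X, order)
    assert lp_min(diff_rows(X, order), np.zeros(3)) is not None  # still feasible
print(order)
```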
The results are presented in Figs. 3 and 4 and Tables 2 and 3. As can be
seen from Fig. 3, the average number of binding constraints for Λ_Ω tails off below
the value n + 1 as m approaches 16 to 20. Even for n = 6 and m = 20 we would
expect the DM to have to peruse only 7 pairs of items and to switch one pair at each
iteration. This feature is very advantageous since such data can easily be
presented on the screen of a computer terminal.

    m \ n     3      4      5      6

     3      1.61   1.72   1.87   1.99
     4      2.21   2.49   2.72   2.78
     5      2.58   3.10   3.40   3.63
     6      2.86   3.36   4.00   4.31
     8      3.19   4.19   4.83   5.22
    10      3.36   4.35   5.34   6.08
    12      3.41   4.70   5.63   6.19
    14      3.45   4.76   5.58   6.77
    16      3.51   4.91   5.82   6.75
    20      3.41   4.94   5.79   7.07

Table 2: No. of binding constraints for each Λ space

Figure 3: Average number of binding constraints for the space Λ_Ω


    m \ n     3      4      5      6

     3      1.87   1.93   1.96   1.99
     4      2.62   2.78   2.88   2.86
     5      3.31   3.41   3.50   3.68
     6      3.80   4.11   4.01   4.17
     8      5.29   5.04   4.93   4.99
    10      6.69   6.40   6.15   6.04
    12      8.50   7.82   7.34   6.89
    14     10.36   9.36   8.98   8.72
    16     12.09  11.17  10.65  10.09
    20     16.17  14.25  14.23  13.60

Table 3: No. of LP's required for each Λ space

For m ≥ 6 the number of LP's that must be solved at each stage seems to vary
linearly with m, the number required for n = 3 being 1 or 2 greater than that
required for n = 6. The increased size of these LPs for increasing m and n does
lengthen the time needed for each iteration but this is still very small.
The operational inference that can be drawn from these results is that if a
DM wishes to rank a large number of items, it should be sufficient to select 20-30
representative items for the interactive analysis in order to identify the
approximate Λ space. Once this is identified, the remaining items can be ranked
accordingly.



Figure 4: Average number of LP's for each Λ_Ω



The method for interactive ranking presented in this paper can be used in any
situation where either the rank order or the associated utility weights (or both)
are of interest. In the ordering of R&D projects, investment opportunities,
personnel, etc. the DM may be interested in both the order and the weights
associated with his policy making decision*. The weights might be needed in order
to construct a bonus system based on work performance which will be compatible
with a particular multi-criterion ranking; they might be needed to construct an
incentive index for Government-sponsored R&D projects based on the contribution to
national objectives and on an agreed multi-criterion ranking of selected projects,
etc.
The method can also be used as an aid in the analysis of decision making, finding
appropriate weights (or weight intervals) for further multi-criterion optimisation
(see e.g. Steuer (1975)), etc. The method provides an alternative approach to
Delphi and other interrogative methods for evaluating the preferences of a
decision maker or decision making group.


In this paper we have presented an interactive method for the ranking of items
with multiple attributes, and for estimating the weights of the associated linear
utility function. Simulation results suggest that in cases where the number of
attributes is 6 or less, a maximum of 20-30 items need be considered in order to
obtain a reasonable estimate of the appropriate weight space.

* Where the ultimate aim is the allocation of scarce resources, ranking alone
will not provide the optimal allocation. Other methods of allocation (such as
linear or mixed-integer programming) will have to be used. The ranking stage can
be used to great advantage in limiting the range of allowable weights for any
second stage allocation.


J.L. Cochrane and M. Zeleny (eds.), Multiple Criteria Decision Making, University
of South Carolina Press, Columbia, South Carolina, 1973.

J.P. Evans, "Connectedness of the Efficient Extreme Points in Linear Multiple
Objective Problems", Graduate School of Business Administration and
Operations Research and Systems Analysis Curriculum, University of North
Carolina, Chapel Hill, (July 1972).

J.P. Evans and R.E. Steuer, "A Revised Simplex Algorithm for Linear Multiple
Objective Programs", Mathematical Programming, Vol. 5, No. 1, pp. 54-72,
(1973).

D.E. Knuth, The Art of Computer Programming, Vol. 3, Sorting and Searching,
Addison Wesley Publishing Co., (1973).

J.S.H. Kornbluth, "Duality, Indifference and Sensitivity Analysis in Multiple
Objective Linear Programming", Operations Research Quarterly, Vol. 25,
pp. 599-614, (1974).

H. Lorin, Sorting and Sort Systems, Addison Wesley Publishing Co., (1975).

J. Philip, "Algorithms for the Vector Maximization Problem", Mathematical
Programming, Vol. 2, pp. 207-229, (1972).

B. Roy, "Problems and Methods with Multiple Objective Functions", Mathematical
Programming, Vol. 1, pp. 239-266, (1971).

V. Srinivasan and A.D. Shocker, "Linear Programming Techniques for
Multidimensional Analysis of Preference", Psychometrika, Vol. 38, No. 3,
pp. 337-369, (September 1973).

V. Srinivasan and A.D. Shocker, "Estimating the Weights for Multiple Attributes
in a Composite Criterion Using Pairwise Judgements", Psychometrika, Vol.
38, No. 4, pp. 473-493, (December 1973).

R.E. Steuer, "Interval Criterion Weights Programming", University of Kentucky,
(1974).

M. Zeleny, "A Concept of Compromise Solutions and the Method of the Displaced
Ideal", Computers and Operations Research, Vol. 1, pp. 479-496, (1974).

S. Zionts and J. Wallenius, "An Interactive Programming Method for Solving the
Multiple Criteria Problem", Management Science, Vol. 22, No. 6, pp.
652-663, (February 1976).

S. Zionts and J. Wallenius, "Identifying Efficient Vectors: Some Theory and
Computational Results", Working Paper No. 257, School of Management,
State University of New York at Buffalo, (November 1976).



The relationship between Ω and Λ_Ω is not necessarily unique. If any of the
weights λ_i are zero, or if according to a particular series of weights there is a
"tie" in a particular order, the space Λ_Ω can relate to more than one order. If
such ties do occur, we will assume that it is possible to perturb the weights
slightly so as to eliminate the tied position. We will also assume that λ_i > 0
for all i, or that for a case where λ_i = 0, the same order is preserved as if λ_i
has the value ε (arbitrarily small, but positive). This is implicitly assumed in
the proofs that follow.

In order to complete the mathematical exposition we need to prove that, given
a set of m items, the set of feasible orders is a connected set and that the
adjacent elements in this set differ by the relative positions of just two
adjacent items. Whilst the conditions for a feasible order given in (4.1) - (4.3)
are similar to those of an efficient solution in multiple objective linear
programming, we will prove the connectedness directly. For a comparison the
reader is referred to J.P. Evans (1972).
Notations and Definitions

1) A_Ω(j) = x_Ω(j) - x_Ω(j+1) are rows of the matrix A_Ω for an order Ω, and
j = 1...m-1.
2) A_jk = x_j - x_k for any j, k (not necessarily adjacent).
3) Ω will be defined as a domination order if for all elements j, k ∈ Ω,
j > k ⟹ x_j - x_k ≥ 0.
(Ω is a domination order implies that all the elements of A_Ω are greater than
or equal to zero and that (4.1) - (4.3) are satisfied for all λ ≥ 0.)
4) Ω will be an infeasible order if any row of A_Ω has only strictly negative
elements. In this case there is no λ satisfying (4.1) - (4.3).
Lemma 1: Ω is a feasible order if
1) either Ω is a domination order, or
2) ∃ at least one adjacent pair (j,k) s.t. the constraint A_jk·λ ≥ 0 is an
active constraint in (4.1) - (4.3).
Proof: If Ω is feasible and is not a domination order, (4.1) - (4.3) are not
satisfied by all λ, λ ≥ 0, Σλ_i = 1. Furthermore, there exists at least one
row of A_Ω with both positive and negative elements. One such row must be
an active constraint.

Lemma 2: Ω = (..., k, t, m, n, ...) feasible and non-trivial, (t,m) corresponds
to an active constraint implies that
1) Ω' = (..., k, m, t, n, ...) is also feasible and (m,t) corresponds
to an active constraint, and
2) Λ_Ω and Λ_Ω' have a common boundary.
Proof: Let Λ_Ω = {λ | A_Ω·λ ≥ 0, Σλ_i = 1, λ_i ≥ 0}.
Λ_Ω is unchanged if we add the constraints A_ij·λ ≥ 0, (i,j) = (k,m), (t,n),
since these are redundant for the order Ω.
If the constraint (t,m) is active, the LP

    min z_tm = A_tm·λ
    λ ∈ Λ_Ω                                       (A.1)
    A_ij·λ ≥ 0, (i,j) = (k,m), (t,n)

has an optimal solution λ* s.t. z*_tm = A_tm·λ* = 0.
Clearly λ* is also feasible and optimal for

    max A_tm·λ  and  min A_tm·λ
    s.t. A_tm·λ = 0
         A_ij·λ ≥ 0, (i,j) ∈ {(k,t), (m,n), (k,m), (t,n)}
                            ∪ {(Ω(p), Ω(p+1)), p = 1...m-1, Ω(p) ≠ t}
         Σλ_i = 1
         λ_i ≥ 0                                  (A.2)

Replacing max A_tm·λ with min -A_tm·λ; A_tm·λ = 0 with -A_tm·λ ≥ 0; and noting
that A_mt = x_m - x_t = -A_tm, we see that λ* is feasible and optimal for

    min A_mt·λ
    A_mt·λ ≥ 0
    A_ij·λ ≥ 0, (i,j) ∈ {(k,t), (m,n), (k,m), (t,n)}
                       ∪ {(Ω(p), Ω(p+1)), p = 1...m-1, Ω(p) ≠ t}     (A.3)

If Ω' = (..., k, m, t, n, ...), the constraints of (A.3) are
simply Λ_Ω' = {λ | A_Ω'·λ ≥ 0, Σλ_i = 1, λ_i ≥ 0} with the addition of the (now)
redundant constraints (k,t) and (m,n).
To complete the proof we note that:
i) λ* feasible for (A.3) ⟹ Ω' is a feasible order.
ii) A_mt·λ* = 0 ⟹ (m,t) corresponds to an active constraint.
iii) the constraint (t,m) or (m,t) is the common boundary of Λ_Ω and Λ_Ω'.
Theorem: The set of feasible orders is a connected set whose adjacent elements
differ by the relative position of just two items.
Proof: Let Λ = {λ | Σλ_i = 1, λ_i ≥ 0}.

1) If the only feasible order is the domination order (or equivalent ties),
the set contains just one element.
2) If not, there exists a feasible order Ω_1. By Lemma 1 there exists at least
one pair (j,k) corresponding to a tight constraint in A_Ω1(i)·λ ≥ 0. By
induction it can be shown that either the sets {Λ_Ω} identified thus far
form the desired partition of Λ, or there exists a pair (j,k) of some
feasible order which can be switched to give a new feasible order. The
process continues until the set Λ is covered completely.
Sang M. Lee
The University of Nebraska-Lincoln

Goal programming has received a great deal of attention during
the past several years as a management decision making tool for
problems that involve multiple conflicting objectives. One area of goal
programming which requires further development is the integer solution
methodology. The purpose of this paper is the development and
demonstration of interactive integer goal programming methods for multiple
objective problems through a real-world application example. The
interactive approach allows not only a simple way to derive an integer
solution to the problem but it also provides a complete sensitivity
analysis of the model.

In today's complex organizational environment, the decision maker
is regarded as one who attempts to achieve a set of objectives to the
fullest possible extent in an environment of conflicting interests,
incomplete information, and limited resources [31]. The soundness of
decision making is measured by the degree of organizational objectives
achieved by the decision. Therefore, recognition of organizational
objectives provides the foundation for decision making. Decisions are
also constrained by environmental factors such as government regula-
tions, welfare of the public, and long-run effects of the decision on
environmental conditions (i.e., pollution, quality of life, use of
non-renewable resources, etc.). In order to determine the best course of
action, therefore, a comprehensive analysis of multiple and often
conflicting organizational objectives and environmental factors must be
undertaken. Indeed, the most difficult problem in decision analysis
is the treatment of multiple conflicting objectives [30].

One of the most promising techniques for multiple objective deci-

sion analysis is goal programming. Goal programming is a powerful
tool which draws upon the highly developed and tested technique of
linear programming, but provides a simultaneous solution to a complex
system of competing objectives. Goal programming can handle decision
problems having a single goal with multiple subgoals, as well as cases
having multiple goals and subgoals [19]. The concept was originally
introduced by Charnes and Cooper [3], [4], [5], [7], and the technique
has been further developed by Ijiri [16] and Lee [19], [20], and
others. Applications of goal programming to real-world decision prob-
lems are just beginning to be explored. Example applications include
advertising media planning [9], manpower planning [8], production
planning, [19], [23], academic planning [21], financial analysis [22],
economic policy analysis [19], transportation logistics [24], [25],
marketing strategy planning [27], environmental protection [6], and
health care planning [19].
Many researchers have studied the theory, solution algorithms,
and applications of goal programming during the past several years.
However, these studies generally accepted the conventional divisi-
bility requirement of linear programs. The conventional integer lin-
ear programming algorithm can be applied to a problem if there exists
no conflict among the multiple objectives sought in the model. However,
if the objectives are in conflict and the goal programming model re-
quires preemptive priority weights to analyze the problem, it requires
an integer goal programming procedure. This paper presents integer
goal programming algorithms based on the cutting plane (all-integer
case), branch and bound (all- and mixed-integer cases), and implicit
enumeration (zero-one case) methods. These algorithms are effective
in analyzing any goal programming problem regardless of the conflict
and commensurability conditions of the objectives.
The true value of any operations research tool is measured by its

applicability to real-world problems. The applicability of techniques

usually requires computer-based analysis. Thus, this paper presents
an interactive mode which facilitates interaction between the decision
maker and the model via a computer terminal. The interactive
mode makes it possible to obtain not only an integer solution to a
given problem but it also allows a complete sensitivity analysis
(changes in the priority structure for the goals, levels of goals, and
technological coefficients) for the model.


In many practical decision problems with multiple conflicting
objectives, the decision variables make sense only if they assume non-
fractional or discrete values [11]. The decision variables in this
situation might be people, crews composed of various personnel and
equipments, assembly lines, indivisible investment alternatives, con-
struction projects, or pieces of equipment.
In this section, we shall discuss three integer goal programming
algorithms: the cutting plane method for all-integer goal programming
problems; the branch and bound method for all-integer goal programming
problems; and the implicit enumeration method for zero-one goal pro-
gramming problems. (Some parts of this section are based on a paper,
S. M. Lee and R. Morris [26]. Several solution examples of integer
goal programming methods can be found in the referenced paper.)
The Cutting Plane Method
The cutting plane method of integer goal programming is adapted
from Gomory's [12], [13], [14], [15] methodology for the general
integer linear program. The basic solution procedure can be summa-
rized as follows:
Step 1: Solve the model by the ordinary modified simplex method of
goal programming.
Step 2: Examine the optimal solution. If all the basic variables
have integer values, the integer optimal solution is derived

- Stop. If one or more basic variables have fractional or

continuous values, go to step 3.
Step 3: Develop a cutting plane and find a new optimal solution by
the modified simplex method. Go to step 2.
The primary difference between the Gomory approach and the cut-
ting plane goal programming method lies in the treatment of the multi-
dimensional priority weights. The cutting plane can be developed
easily by following the steps described below. In the optimal simplex
tableau of ordinary goal programming, choose the row vector where the
basic variable has the largest fractional value. If we denote this
basic variable by X_i (for convenience this variable shall represent
all decision and deviational variables) and the nonbasic variables by
X_j (again this includes all decision and deviational variables), the
equation for the vector is

    X_i = b*_i - Σ_{j=1}^{n} a*_ij X_j,   (i = 1,...,m)          (1)

where a*_ij = coefficients of the ith row and jth column in the final
simplex tableau, and b*_i = rhs value of X_i in the final simplex tableau.
If we denote the nonnegative fractional part of b*_i by f(b*_i) and the
nonnegative fractional part of a*_ij by f(a*_ij), the equation can be
rearranged as

    Σ_{j=1}^{n} f(a*_ij) X_j = f(b*_i) + (integer),   (i = 1,...,m)          (2)

Then, since all the variables must take nonnegative integer values, we can
formulate the cutting plane constraint as

    Σ_{j=1}^{n} f(a*_ij) X_j ≥ f(b*_i)                            (3)

By adding deviational variables, we can derive the cutting plane equation as

    Σ_{j=1}^{n} f(a*_ij) X_j + d⁻ - d⁺ = f(b*_i)                  (4)

In order to satisfy the inequality condition of (3), the negative
deviation (d⁻) should be minimized. Since the rhs value, f(b*_i), is
positive, the solution will be feasible. Thus, there is no need for
the dual simplex method. After assigning the super priority
P0 (P0 >>> Pk) to the negative deviation, the regular modified simplex
method can be applied to derive a new solution.
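The fractional parts in (2)-(4) are mechanical to compute. The sketch below builds a cut from one hypothetical source row (the row and its coefficients are invented for illustration, not taken from any example in the paper):

```python
import math

def gomory_cut(row_coeffs, rhs):
    # Build the fractional cut  sum_j f(a*_ij) X_j >= f(b*_i)  from a
    # source row of the final tableau, as in equations (2)-(3).
    frac = lambda v: v - math.floor(v)   # nonnegative fractional part
    return [frac(a) for a in row_coeffs], frac(rhs)

# Hypothetical source row  X_B = 3.75 - (0.25 X_1 - 1.50 X_2):
coeffs, rhs = gomory_cut([0.25, -1.50], 3.75)
print(coeffs, rhs)   # -> [0.25, 0.5] 0.75
```

The goal-programming twist described above is then to append d⁻ and d⁺ to this inequality and minimize d⁻ at the super priority P0.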
The Branch and Bound Method
Integer programming problems quite frequently have upper and/or
lower bounds for their decision variables. Since the bounded integer
programming problem has a finite number of feasible solutions, an
enumeration procedure for searching an optimum solution is a sensible
approach. The branch and bound algorithm, first suggested by Land and
Doig [17], [18], can also be adapted to an integer goal programming problem.
The basic idea of the branch and bound method of goal programming
can be summarized as follows:
Step 1: Solve the model by the ordinary modified simplex method of
goal programming.
Step 2: Examine the optimal solution. If the basic variables that
have integer requirements are integer valued, the integer
optimal solution is obtained - stop. If one or more basic
variables do not satisfy integer requirements, go to step 3.
Step 3: The set of feasible solutions is branched into subsets (sub-
problems). The purpose of branching is to eliminate contin-
uous solutions that do not satisfy the integer requirements
of the problem. The branching is achieved by introducing
mutually exclusive constraints that are necessary to satisfy
integer requirements while making sure no feasible integer
solution is excluded.

Step 4: For each subset, the optimal value of the objective function
(degree of goal attainment, Uk) is determined as the lower
bound. The optimal Uk of a feasible solution which satisfies
integer requirements becomes the upper bound. Those subsets
having lower bounds that exceed the current upper bound must
be excluded from further analysis. A feasible solution
having Uk which is as good as or better than the lower bound
for any subset is to be found. If there exists such a solu-
tion it is optimal - stop. If such a solution does not
exist, a subset with the best lower bound is selected and we return
to step 3.
As in the cutting plane method, the added constraint will be sat-
isfied by assigning the super priority factor Po to the minimization
of the appropriate deviational variable.
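Steps 3 and 4 can be illustrated on an ordinary single-objective integer LP: the goal-programming priority structure is omitted, and SciPy's `linprog` stands in for the modified simplex method. The instance below is a hypothetical textbook-style problem, not one from the paper:

```python
import math
import numpy as np
from scipy.optimize import linprog

def branch_and_bound(c, A_ub, b_ub):
    # Maximise c.x s.t. A_ub x <= b_ub, x >= 0 integer, by LP-based
    # branch and bound: branch on a fractional variable (step 3) and
    # prune subsets whose LP bound cannot beat the incumbent (step 4).
    best = [-math.inf, None]
    def solve(bounds):
        res = linprog(-np.asarray(c), A_ub=A_ub, b_ub=b_ub, bounds=bounds)
        if res.status != 0 or -res.fun <= best[0]:   # infeasible or bounded out
            return
        x = res.x
        frac = [abs(v - round(v)) for v in x]
        j = int(np.argmax(frac))
        if frac[j] < 1e-6:                           # integer: new incumbent
            best[0], best[1] = -res.fun, np.round(x)
            return
        lo, hi = bounds[j]
        solve(bounds[:j] + [(lo, math.floor(x[j]))] + bounds[j+1:])  # x_j <= floor
        solve(bounds[:j] + [(math.ceil(x[j]), hi)] + bounds[j+1:])   # x_j >= ceil
    solve([(0, None)] * len(c))
    return best[0], best[1]

# max 5x + 4y  s.t.  6x + 4y <= 24,  x + 2y <= 6  (hypothetical instance):
val, x = branch_and_bound([5, 4], [[6, 4], [1, 2]], [24, 6])
print(val, x)   # optimum 20 at x = (4, 0); LP relaxation gave 21 at (3, 1.5)
```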
Implicit Enumeration Method
The implicit enumeration method is developed to solve the goal
programming problem which requires either zero or one values for the
decision variables. The method is basically a combination of Balas'
[1], [2] additive algorithm and Glover's [12] backtracking procedure.
The solution combinations are evaluated by introducing one decision
variable at a time. When no further variable can be added to improve
the solution beyond the current optimal solution, then a backtracking
technique is instituted to evaluate other combinations in a systematic
fashion. The optimal zero-one solution is the upper bound solution
when all possible combinations are evaluated.
The basic solution procedures can be summarized as follows:
Step 1: Set all variables as free variables (they are implicitly
equal to zero and not in the solution).
Step 2: Evaluate all free variables and select the one which will
improve the solution to the greatest extent when a value of 1
is assigned to this variable. Enter it into the solution.

Go to step 4. If no such variable exists go to step 3.

Step 3: The solution is "fathomed." The backtracking procedure is to
be initiated. The last variable added to the solution with a
value of 1 is made 0 but it is kept in the solution set (not
a free variable).
Step 4: Evaluate the solution. If all solution variables have values
of zero, the entire enumeration procedure is completed. Stop
and find the best solution set (in terms of goal attainment).
Otherwise, go to Step 2.
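The enumerate-and-backtrack idea of steps 1-4 can be sketched on a hypothetical 0-1 selection problem. This is a plain depth-first search with an optimistic bound, not Balas' full additive algorithm, and it maximises a single objective rather than a preemptive goal structure:

```python
def implicit_enumeration(values, weights, capacity):
    # Best 0-1 selection maximising total value under a capacity limit,
    # by depth-first enumeration with backtracking.
    n = len(values)
    best = [0, []]
    def search(i, value, weight, chosen):
        if weight > capacity:
            return                      # infeasible partial solution
        if value > best[0]:
            best[0], best[1] = value, chosen[:]   # new incumbent
        if i == n or value + sum(values[i:]) <= best[0]:
            return                      # "fathomed": bound cannot beat incumbent
        search(i + 1, value + values[i], weight + weights[i], chosen + [i])  # x_i = 1
        search(i + 1, value, weight, chosen)                                 # x_i = 0
    search(0, 0, 0, [])
    return best[0], best[1]

print(implicit_enumeration([6, 10, 12], [1, 2, 3], 5))   # -> (22, [1, 2])
```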


One of the primary characteristics of the application of goal
programming is the concept of an ordinal solution based on preemptive
priority weights which are assigned to multiple conflicting objectives.
It is necessary to analyze the trade-offs among the objectives when
their priority structure is changed due to a changing decision envi-
ronment. The best approach to analyze such a problem appears to be an
interactive mode where the decision maker and the goal programming
model interact via a computer terminal. The same interactive approach
can be equally effective for changing goal levels (b_i) and technological
coefficients (a_ij), addition or deletion of constraints, and
addition or deletion of decision variables. Thus, the interactive
approach can provide an instant analysis of the effect of changes in
model parameters as well as a complete sensitivity analysis of the
optimal solution. The interactive goal programming approach provides
a systematic process in which the decision maker seeks the most satis-
factory solution. The process allows the decision maker to reformu-
late the model and systematically compare the solutions in terms of
their achievement of multiple objectives.
The interactive system is installed on an IBM 370-158 and consists
of a control program (CP) and the conversational monitor system (CMS).
The system allows the user to create and edit files (programs,

problems, etc.) and to execute programs conversationally. The inter-

active goal programming is based on the modified simplex method of Lee
[19, Chapter 6]. Consequently, once the preliminary logon procedure
is complete, the input format is exactly the same as the regular goal
programming input required by Lee's program.
The interactive approach of goal programming developed by Lee
[19] for the sensitivity analysis purpose can be applied to integer
goal programming. The interactive integer goal programming requires
only slight changes in the program which can be easily accommodated by
the file editing procedure. Thus, the interactive integer approach
can be used not only for deriving an integer solution but it can also
be utilized for sensitivity analysis of the integer problem. All three
integer programming methods discussed in section 2 can be augmented
on the system.


The three integer goal programming algorithms presented in section
2 can be applied to a wide variety of management decision problems.
In this section an interesting real-world problem will be presented as
an application example. Because of the space limitation, only the
formulation and solution of one simplified model will be presented.
Nebraska State Patrol Allocation Problem
One of the most difficult tasks of state highway administrators
is allocation of manpower; i.e., determination of the most effective
level of operational manpower for patrol tasks. The primary function
of a state's highway traffic law enforcement group is to patrol the
state's road system. In this capacity the troopers enforce traffic
laws, act as a deterrent to traffic violators, and assist motorists.
Except where interstate freeways extend through urban areas, patrols
are limited to roadways outside city and township boundaries.
The Nebraska State Patrol currently maintains a force of 264

uniformed patrolmen, divided among six troops or divisions within the

state. Although the force has the authority to exercise and enforce
all the laws of the state, they enter into city jurisdiction only when
called upon by local authorities. Their primary function is to patrol
all unincorporated roadways. Each workday is subdivided into a daytime
and evening shift. Daytime troopers work a ten-hour shift which may
begin at either 6:00 AM or 7:00 AM, and conclude at 4:00 PM or 5:00 PM
respectively. Evening enforcement also consists of staggered ten-hour
shifts: 4:00 PM to 2:00 AM or 5:00 PM to 3:00 AM. The overlapping
shifts are designed so as to provide extra coverage during the
evening rush hours of 4:00 to 5:00 PM. A single patrolman from each
troop will handle a 3rd shift for all the road segments in the
region during the hours from 3:00 AM to 6:00 AM. From 6:00 AM to
1:00 PM this patrolman joins the regular day shift activities, as
shown in Figure 1. Each patrolman works a 5 day, 50 hour week.
Within each division patrolmen are assigned to specific road
segments. Assignments are made on a single patrolman per car basis.
The number of cars assigned to each road segment and the size of the
patrol area are specified by the troop administrators. The assignments
will be based on the time of day and on the "danger" level for each
road segment. An accident prediction formula, based upon accident
frequency and traffic density, will be used to determine the danger level.
The model in this study will be limited to the allocation of
patrolmen to the road segments within a given region in Headquarters
Troop area of the Nebraska State Patrol based in Lincoln, Nebraska.
Figure 2, a detailed map of Headquarters Troop, delineates the 6 road
segments which this model will consider. This region has been chosen
because of its relative traffic density and accident level in
comparison with the more rural sections of the Headquarters Troop area.
Twenty-two men are currently available for assignment to this region.
Figure 1: The seven shift segments X1-X7, with boundaries at
3:00, 6:00, 7:00, 1:00, 4:00, 5:00, 2:00, and 3:00

Figure 2: Headquarters Troop (with major road segments)


This model will explore both the optimal placement of these 22 men,
as well as recommendations for the implementation of a Selective
Traffic Enforcement program consisting of a maximum of three troopers
which can be utilized in addition to the existing Troop officers.
Model Variables
X_ij = the number of patrolmen assigned to road segment "i"
during shift "j", where j = 1,2,...,7; i = 1,2,...,6.
Specific variable assignments related to the six road segments within
the given region of Headquarters Troop are as follows:
i = 1:  U.S. 77 from Lincoln to Wahoo (27 miles)
i = 2:  U.S. 77 from Lincoln to Beatrice and Nebraska 33 to
        Crete (52 miles)
i = 3:  Nebraska 2 from Lincoln to Syracuse (30 miles)
i = 4:  U.S. 6 from Lincoln to Ashland (27 miles)
i = 5:  U.S. 34 from Lincoln to Seward and Nebraska 79 from
        Lincoln to Highway 92 (50 miles)
i = 6:  U.S. 34 from Lincoln to Union corner (33 miles)
These road segments represent the greatest nonfreeway traffic
flows within the region. Allocation to a specific road segment, how-
ever, is not confined to the single roadway but includes secondary
roads connected to and in the vicinity of the major road segment.
Typically, a patrolman will cover a portion of a major road segment
and a portion of secondary roads. Traffic flows on the secondary
roads are assumed to be a function of flows on the major road segment.
Although there are seemingly three major shifts to consider, these
shifts can be divided into seven different shift segments in order to
account for the overlapping of shift times. For each road segment "i"
the shift segments are defined as follows:

Xi1: 3:00 AM to 6:00 AM
Xi2: 6:00 AM to 7:00 AM
Xi3: 7:00 AM to 1:00 PM
Xi4: 1:00 PM to 4:00 PM
Xi5: 4:00 PM to 5:00 PM
Xi6: 5:00 PM to 2:00 AM
Xi7: 2:00 AM to 3:00 AM

Model Constraints
The following model constraints can be formulated.
1. Total Patrol Constraint
Since 22 patrolmen are assigned to the given region of Head-
quarters Troop, and given that 5/7 of the force can be assumed to be
working any given day (considering days off and vacation time), there
are 15 full-time patrolmen that can be allocated each day. From
Figure 2 it is apparent that all available troopers will be on duty
during the third or sixth overlapping shift segments, but not both.
Thus the total patrol allocation constraint each day is expressed as:
Σ(i=1..6) Xi3 + Σ(i=1..6) Xi6 + d1⁻ - d1⁺ = 15
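In the goal constraint above, the deviational variables d1⁻ and d1⁺ measure under- and overachievement of the 15-trooper goal. Given any trial schedule, the deviations are determined as in this small Python sketch (the function name is ours):

```python
def goal_deviations(total_on_duty, goal=15):
    """Return (d_minus, d_plus) for a goal constraint with the given target.

    d_minus is the underachievement and d_plus the overachievement, so
    total_on_duty + d_minus - d_plus == goal always holds.
    """
    d_minus = max(0, goal - total_on_duty)
    d_plus = max(0, total_on_duty - goal)
    return d_minus, d_plus

# Example: only 13 troopers scheduled across shift segments 3 and 6.
print(goal_deviations(13))  # (2, 0): two troopers short of the goal
```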
Overtime cannot be considered as a possible alternative to
increased law enforcement coverage in the Nebraska State Patrol be-
cause of policy restrictions. State troopers work a 50-hour week and
any extension of these hours would not increase trooper effectiveness
on the roadways. Most probably, effectiveness would deteriorate due to
trooper fatigue and lowered morale.
2. Enforcement Constraint
In order to consider the implementation of a Selective Traffic
Enforcement Program consisting of a maximum of three troopers, it is
necessary to develop a modified manpower goal constraint to determine
the optimal allocation of these additional troopers. The Selective
Traffic Enforcement Program is thus expressed as the positive deviation
from the total patrol constraint:

d1⁺ + d2⁻ - d2⁺ = 3
3. Shift Requirements
Because one of the patrol's duties is to aid and assist motorists,
a minimal number of patrolmen should be assigned to each shift. Each

of the major portions of day and evening shifts is allocated a minimum

of 6 troopers. It is realized that there may not always be enough
patrolmen for each segment; however, this assures as much motorist
assistance as the size of the force allows. The patrol requirements
are as follows:

Time Period      Required Troopers, Nj (all road segments)

03:00-06:00      N1 = 1
06:00-07:00      N2 = 3
07:00-13:00      N3 ≥ 7
13:00-16:00      N4 = N3 - 1
16:00-17:00      N5 = (N3 + N6) - (N2 + N7)
17:00-02:00      N6 ≥ 6
02:00-03:00      N7 = 3
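The derived entries in the table follow from which report times are still on duty in each segment: N4 loses only the single 3:00 AM reporter, and N5 consists of the 7:00 AM and 4:00 PM reporters. A quick numeric check in Python, using the minimum goal levels N3 = 7 and N6 = 6 as assumed inputs:

```python
# Fixed requirements from the table; N3 and N6 set at their minimum goals.
N1, N2, N7 = 1, 3, 3
N3, N6 = 7, 6

# N4 (1:00-4:00 PM) = N3 minus the lone 3:00 AM reporter, who has gone home.
N4 = N3 - 1
# N5 (4:00-5:00 PM) = troopers who reported at 7:00 AM or 4:00 PM.
N5 = (N3 + N6) - (N2 + N7)

print(N4, N5)  # 6 7
```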

The patrolmen are assigned to report for duty at 3:00 AM, 6:00 AM,
7:00 AM, 4:00 PM, or 5:00 PM. Once a trooper reports in, however, he
stays on duty continuously for a 10-hour shift. Constraints to meet
these shift requirements and ensure assignment of 10-hour shifts are
as follows:
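The five report times tile the seven shift segments into exact 10-hour spans, which is what makes the shift-requirement identities work. A small Python sketch verifying this (segment boundaries are taken from the shift table, expressed in 24-hour time with post-midnight hours written as 26 and 27):

```python
# Shift-segment boundaries in 24-hour time; 26 and 27 stand for 2:00 AM
# and 3:00 AM of the next day. Segment j spans BOUNDARIES[j-1]..BOUNDARIES[j].
BOUNDARIES = [3, 6, 7, 13, 16, 17, 26, 27]
REPORT_TIMES = [3, 6, 7, 16, 17]  # 3:00 AM, 6:00 AM, 7:00 AM, 4:00 PM, 5:00 PM

def covered_segments(report_hour, shift_length=10):
    """Shift segments j (1..7) worked in full by a trooper reporting at report_hour."""
    end = report_hour + shift_length
    return [j for j in range(1, 8)
            if BOUNDARIES[j - 1] >= report_hour and BOUNDARIES[j] <= end]

for r in REPORT_TIMES:
    print(r, covered_segments(r))
# A 3:00 AM reporter works segments 1, 2, 3 (3 + 1 + 6 = 10 hours);
# a 5:00 PM reporter works segments 6, 7 (9 + 1 = 10 hours), and so on.
```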
Σ(i=1..6) Xi1 + d3⁻ - d3⁺ = 1

Σ(i=1..6) Xi4 - Σ(i=1..6) Xi3 + d6⁻ - d6⁺ = -1

Σ(i=1..6) Xi5 - Σ(i=1..6) Xi3 - Σ(i=1..6) Xi6 + d7⁻ - d7⁺ = -6

Σ(i=1..6) Xi6 + d8⁻ - d8⁺ = 6

Σ(i=1..6) Xi7 + d9⁻ - d9⁺ = 3

4. Minimum Patrolmen Per Road Segment Per Shift

Each road segment must be monitored by a minimum number of
patrolmen. Even though the overall deterrent level may not be maxi-
mized, a minimum number of patrolmen must be assigned for each road
segment to handle emergencies and provide motorist assistance. The
minimum allocations are: zero patrolmen per road segment for shifts
1 and 2, one patrolman per road segment for shifts 3, 4, 5, and 6, and
zero patrolmen per road segment for shift 7. This results in 24
additional constraints. These are expressed as follows:

Xi3 ≥ 1   (i = 1, ... ,6)

Xi4 ≥ 1   (i = 1, ... ,6)

Xi5 ≥ 1   (i = 1, ... ,6)

Xi6 ≥ 1   (i = 1, ... ,6)

5. Maximum Patrolmen Per Road Segment

As a result of such factors as the size of the road segment areas,
local public requests for patrolmen and the like, a maximum number of
patrolmen per road segment must be established. These are as follows:
Xij ≤ 1   (i = 1, ... ,6), (j = 1,2,7)

Xij ≤ 2   (i = 1, ... ,6), (j = 3,4,5,6)

These 42 constraints limit the assignment of patrolmen to one per
segment during shifts 1, 2, and 7, and to two per segment during the
remaining shifts.
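Constraint sets 4 and 5 together give simple per-variable bounds. A minimal Python feasibility check (the function and dictionary names are ours, not part of the original formulation):

```python
# Lower and upper patrolmen bounds per road segment for each shift segment,
# as stated in constraint sets 4 and 5 above.
LOWER = {j: (1 if j in (3, 4, 5, 6) else 0) for j in range(1, 8)}
UPPER = {j: (2 if j in (3, 4, 5, 6) else 1) for j in range(1, 8)}

def within_bounds(X):
    """X maps (i, j) -> patrolmen; check all 42 lower/upper segment bounds."""
    return all(LOWER[j] <= X[(i, j)] <= UPPER[j]
               for i in range(1, 7) for j in range(1, 8))

# Example: one patrolman on every segment during shifts 3-6, none otherwise.
X = {(i, j): (1 if j in (3, 4, 5, 6) else 0)
     for i in range(1, 7) for j in range(1, 8)}
print(within_bounds(X))  # True
```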
6. Traffic Density
This constraint reflects the traffic flow in each region per
shift (expressed in millions of vehicle miles). The constraint is