THEORETICAL NOTE
Seven features which in practice seem to differentiate belief systems from knowledge systems are discussed. These are: nonconsensuality, "existence beliefs," alternative worlds, evaluative components, episodic material, unboundedness,
and variable credences. Each of these features gives rise to challenging repre-
sentation problems. Progress on any of these problems within artificial intelli-
gence would be helpful in the study of knowledge systems as well as belief
systems, inasmuch as the distinction between the two types of systems is not
absolute.
The use of the term "belief system" can be highly confusing. Psychologists,
political scientists and anthropologists tend to use the term in rather different
senses. It would be fruitless to try to settle once and for all what is really meant
*This paper is a slightly revised version of a symposium talk delivered at the First Annual
Meeting of the Cognitive Science Society.
transparent feature of belief systems, and thus there are cases where it is not clear
whether something is a belief system or a knowledge system.
Consider so-called "cultural belief systems" (D'Andrade, 1972). If every
normal member of a particular culture believes in witches, then as far as they are
concerned, it is not a belief system; it is a knowledge system. They know about
witches. But the anthropologist who studies this culture is aware of many witch-
less cultures, and thus uses the label "belief system" without flinching. The
other side of this same coin is that scientific knowledge can be viewed as mere
belief by those outside the community of shared presuppositions about science as
a way of knowing.
For cognitive science, the point of this little discussion is that nonconsen-
suality should somehow be exploited if belief systems are to be interesting in
their own right as opposed to knowledge systems. In an AI belief system program
in particular, this could be done either by (a) representing the awareness of
alternatives, or (b) embodying different belief systems by different data bases.
Carbonell's (1978) POLITICS program does both of these things to some extent.
the goal state. In the terminology of Newell (1969) and others, this is an "ill-structured problem."
4. Belief systems rely heavily on evaluative and affective components. There are two aspects to this, one "cognitive," the other "motivational." First, a belief
system typically has large categories of concepts defined in one way or another as
themselves "good" or "bad," or as leading to good or bad. For the antinuclear
activist, nuclear power is bad, pollution is bad, callous industry and lax regula-
tion are bad, materialism and waste are bad, natural alternative energy sources
are good, conservation is good, concern about hazards to human life is good, and
so on. These polarities, which exert a strong organizing influence on other
concepts within the system, may have a very dense network of connections rare
in ordinary knowledge systems.
From a formal point of view, however, the concepts of "good" and "bad"
might for all intents and purposes be treated as cold cognitive categories just like
any other categories of a knowledge system. Inferences about goodness and
badness might be governed by rules such as the "balance principle" (Heider,
1946, 1958; Abelson & Rosenberg, 1958). Without going into all the details, an
illustration of a rule following from the balance principle is: If z is good (bad),
and y harms z, then y is bad (good).
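To make the rule concrete, the following is a minimal Python sketch of the balance-principle inference just stated; the function name, the valence labels, and the "helps"/"harms" relation vocabulary are illustrative assumptions of mine, not anything from the paper.

```python
# A minimal sketch (not from the paper) of the balance-principle inference rule
# just stated: if z is good (bad) and y harms z, then y is bad (good).
# Helping preserves valence; harming inverts it.

FLIP = {"good": "bad", "bad": "good"}

def infer_valence(valence_of_z: str, relation: str) -> str:
    """Infer the valence of y from the valence of z and the relation y bears to z."""
    if relation == "harms":
        return FLIP[valence_of_z]
    if relation == "helps":
        return valence_of_z
    raise ValueError(f"unknown relation: {relation}")

# Example in the spirit of the antinuclear activist's system:
# pollution harms human life (good), so pollution is inferred to be bad.
print(infer_valence("good", "harms"))  # -> 'bad'
```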
An interesting recent artificial intelligence program which refers a lot to good and bad entities, but seems rather more like a knowledge system than a belief system, is VEGE, unreported work at Yale by Michael Dyer. VEGE gives
advice on organic vegetable gardening much as a newspaper columnist answer-
ing readers' questions might. The balance principle is implicitly used throughout:
anything which encourages pests or other entities harming vegetables is to be
avoided, anything which discourages those entities is to be welcomed, and so on.
Thus the formal logic of chains of helping and harming (or enabling and disen-
abling) relationships can proceed perfectly reasonably in a knowledge system
having none of the features (1), (2), (3) above characterizing belief systems. Of
course if the chains of relationship are nonconsensual, seeming to follow
"psycho-logic" (Abelson & Rosenberg, 1958) rather than logic (e.g., "Only
bloodshed will lead to peace"), then we are back on the ground of prototypic
belief systems again.
When the good and bad entities for the system have motivational force
rather than simply categorical status, unique consequences for belief systems are
even more likely to emerge. By motivational force I mean that when affectively
toned entities are activated, the processes of the system are altered. Thus a
system that found some input exciting would process it more deeply, or if fearful
would avoid it, and so on. A specialized instance of this possibility occurs in the
conversational behavior of Colby's (1975) paranoid PARRY program, which
changes rather drastically as a function of its levels of fear, anger, and anxiety.
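Colby's actual PARRY implementation is of course much richer; the following is only a toy sketch, under assumptions of my own (the trigger words, numeric affect levels, and canned replies are invented for illustration), of what it means for processing to be altered when affectively toned entities are activated.

```python
# An illustrative sketch (my own assumptions, not Colby's PARRY code) of processing
# that changes as a function of affect levels: the same kind of input is handled
# differently once fear or anger has been aroused.

class AffectiveResponder:
    def __init__(self):
        self.fear = 0.0
        self.anger = 0.0

    def hear(self, utterance: str) -> str:
        text = utterance.lower()
        # Hypothetical trigger topics raise the affect levels.
        if "police" in text:
            self.fear += 0.4
        if "crazy" in text:
            self.anger += 0.5
        # Processing is altered by the current affective state.
        if self.fear > 0.5:
            return "I'd rather not discuss that."
        if self.anger > 0.4:
            return "You have no business saying that."
        return "Go on."

r = AffectiveResponder()
print(r.hear("Tell me about your work."))           # neutral state -> "Go on."
print(r.hear("Do the police know where you are?"))  # fear rises, still below threshold
print(r.hear("The police say you sound crazy."))    # fear now high -> avoidant reply
```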
5. Belief systems are likely to include a substantial amount of episodic material
from either personal experience or (for cultural belief systems) from folklore or
Belief systems have not been very popular objects of study in AI. Partly this is
because the backgrounds of most people in computer science have led them to
prefer more formal problems arising from mathematics or linguistics, and partly
because belief systems present substantial modeling difficulties. The coming of
age of cognitive science has introduced to AI a much wider range of disciplines,
anthropology for example, and we can expect more motivation in the future for
studying belief systems. But the difficulties will not so easily go away, and it is
well to consider what they are.
Each of the seven distinguishing features of belief systems creates theoreti-
cal or practical difficulties. Most of these difficulties, fortunately, produce in-
teresting opportunities for needed new developments in cognitive science. We
consider each of the seven features in turn.
Nonconsensuality poses a problem because it is inefficient to design a huge
system for a single target case. It is neither very easy nor very convincing to try
to validate a model that simulates an individual case. There is always the suspi-
cion that the model was fitted ad hoc, and there is always the sense of disap-
pointment that the model is not generalizable. The trick, therefore, is to encom-
pass individual variations within a common systemic framework, rather than
having to start all over again each time a new individual is to be modeled.
One excellent strategy for accomplishing this has been provided by Car-
bonell (1978). In his POLITICS program for representing American foreign policy
views, virtually all of the conceptual elements are shared among belief systems
of different shades. What differs between, say, left- and right-wing views is the
network of perceived goal priorities for the major international actors. Thus the
left-winger sees the Russians as very interested in preserving world peace and not
much willing to push too hard to gain control of more territory, whereas the
right-winger perceives the importance of these two goals to the Russians in the
opposite relative order. This simple cognitive device turns out to produce major
differences in systemic interpretation of real-world events, because the system
relies very heavily on attributions about plans and goals. How far this device can
be pushed to handle nonconsensuality in general remains to be seen.
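A rough sketch of this device is given below; the goal names, the actor label, and the data layout are illustrative assumptions of mine, not Carbonell's actual representation.

```python
# An illustrative sketch (assumed layout, not Carbonell's actual POLITICS code) of
# ideological variation carried entirely by perceived goal priorities: the
# conceptual content is shared, only the attributed priority ordering differs.

# Goal priorities attributed to an actor by each belief system (earlier = higher).
LEFT_WING  = {"Russians": ["preserve world peace", "gain control of territory"]}
RIGHT_WING = {"Russians": ["gain control of territory", "preserve world peace"]}

def plausibility(actor: str, goal: str, beliefs: dict) -> str:
    """Judge how plausible it is that the actor is acting in pursuit of this goal."""
    rank = beliefs[actor].index(goal)
    return "a likely motive" if rank == 0 else "an unlikely motive; seek a hidden plan"

# The same event, e.g. a Russian arms-limitation proposal, is interpreted differently:
print(plausibility("Russians", "preserve world peace", LEFT_WING))   # a likely motive
print(plausibility("Russians", "preserve world peace", RIGHT_WING))  # an unlikely motive
```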
Existence beliefs as special cases of nonconsensuality are more trou-
blesome. If what exists for one believer does not exist for the other, and these
privileged constructs are highly central, then the common content core between
the different systems declines drastically. Any general framework between sys-
tems would then have to rely on similar processes operating on disparate con-
tents. While this is certainly conceivable, it is not totally satisfying.
Furthermore, it is possible that systems with quite different existence
beliefs might use different processing modes. A magician, a statistician, and a
believer in ESP might have very little in common in the ways they reasoned
about an apparent instance of mind-reading. To model the ways separately, each
in its own terms, might be instructive and fun, but would not do much to cure the
ad hoc situation characterizing the modeling of a single nonconsensual belief
system. Thus I see differences in existence beliefs as one of the most vexing of
the characteristic features of belief systems.
There has been a little bit of interest in AI in alternative worlds problems.
The way this usually arises is that mental representations must often contain
models of other mental representations. In the simplest sort of case, knowing
whether someone else knows something may be a crucial piece of knowledge for
results when these presumptions are blatantly violated). There are more complex
cases such as those that arise in the semantic analysis of concepts such as
"promise" or "request," etc. If I hear you agree to my request to do X, then
among other things (Bruce, 1975), I may believe that you believe that I believe
that you can do X. Here there is a representation of a representation of a repre-
sentation.
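Such nested representations are easy to write down schematically; the following minimal sketch uses a notation of my own invention, not Bruce's, to show a representation of a representation of a representation.

```python
# A minimal sketch (my own notation, not Bruce's 1975 representation) of embedded
# beliefs such as "I believe that you believe that I believe that you can do X."

from dataclasses import dataclass
from typing import Union

@dataclass
class Believes:
    agent: str
    content: Union["Believes", str]   # a belief may embed another belief

# A representation of a representation of a representation:
nested = Believes("I", Believes("you", Believes("I", "you can do X")))

def embedding_depth(b) -> int:
    """How many levels of belief are stacked above the base proposition."""
    return 1 + embedding_depth(b.content) if isinstance(b, Believes) else 0

print(embedding_depth(nested))  # -> 3
```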
In as yet unpublished work, Yorick Wilks has undertaken a systematic
analysis of the rules by which embedded representations might operate without
totally boggling the system containing them. He considers examples such as
Interlocutor telling System, "I don't think Harry likes you; but don't tell him I told you." Comprehension of this requires System to cognize Harry's representa-
tion of Interlocutor's representation of Harry's representation of System. (Or
something of the sort.) Fortunately for most belief system work, the alternative
worlds problem is not as complicated as this. For single beliefs one might worry
about how to deal with embedded conceptualizations such as "I believe that you
believe that I believe . . . etc.," but I do not know of anyone who is concerned
with modeling a system of beliefs about a system of beliefs about a system of
beliefs. Real-world belief systems are not often tortured by sinuous regress. As a
believer I know my own values and prospects, and the values and prospects of
my enemies and my friends. Alternative worlds come down largely to alternative
goals and plans, the system vs. its enemies. Knowledge of "counterplanning"
strategies is useful, and judging by the various developments in Carbonell
(1978), Bruce and Newman (1978), and Wilensky (1978), the representation of
such strategies is tractable.
The evaluative components of belief systems give rise to the least well-
understood problems. Earlier I said that evaluations of entities as good or bad
create important cognitive categories, but that the more unique consequences of
evaluation are motivational rather than solely cognitive. While intuitively it seems
clear what is meant by "motivational," in practice the distinction between cogni-
tive and motivational is quite subtle. This is especially true in the traditions of
artificial intelligence and information processing psychology, where the habit of
mind is to treat everything in some sense as cognitive.
One can usually, perhaps always, simulate motivational phenomena by
cognitive ones. An example will clarify this assertion. Some years ago, while
first trying to simulate by computer the belief system processes (Abelson &
Carroll, 1965) that characterized, say, the foreign policy views of Barry Goldwa-
ter, we included a motivated process of evaluative inconsistency reduction.
Every time the system encountered a good actor involved in a bad action, or a
bad actor involved in a good action, it went into a trouble-shooting mode de-
signed to explain away such a state of affairs. A major heuristic was rationaliza-
tion (Abelson, 1963): for example, a good actor might perform a bad action
because forced to by a bad actor, or because the bad action was only temporary,
leading to a greater good in the long run. Another heuristic was denial, whereby
the system refused to accept that the good actor was performing a bad action.
These heuristics all seemed quite realistic. Later I realized, however, that incon-
sistency resolutions need not be "computed" time after time. The system can
simply store an acceptable resolution to every familiar dilemma. What once may
have been a motivated process becomes frozen into a cognitive package. Thus if
we told the Goldwater Machine that the United States was harming innocent
Vietnamese civilians, and it replied that this was in the service of freedom, an
observer could not tell if this response was a motivated rationalization cooked up
on the spot, or a well-rehearsed answer given entirely without either anguish or
thinking. If alternatively the system denied the possibility that the United States
was harming innocent civilians, an observer could not distinguish this as a "hot"
denial process from a cold search through an established conceptual taxonomy in
which such an event is simply not recognized because there is no script or plan
which could plausibly contain it. The response itself does not reveal its own
motivational origins.
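Put schematically, a stored ("frozen") resolution and a rationalization computed on the spot yield the same observable reply, which is why the response alone cannot betray its origin. The sketch below is entirely my own construction, not the Abelson and Carroll program.

```python
# A schematic sketch (my own construction, not the Abelson & Carroll program) of the
# indistinguishability noted above: a frozen, stored resolution and a rationalization
# computed on the spot produce the same observable reply.

STORED_RESOLUTIONS = {
    ("United States", "harming innocent civilians"): "It is in the service of freedom.",
}

def rationalize(actor: str, bad_action: str) -> str:
    """A 'hot' process: cook up a resolution on the spot (e.g., appeal to a greater good)."""
    return "It is in the service of freedom."

def respond(actor: str, bad_action: str) -> str:
    # A 'cold' lookup of a frozen cognitive package, falling back to hot rationalization.
    stored = STORED_RESOLUTIONS.get((actor, bad_action))
    return stored if stored is not None else rationalize(actor, bad_action)

# Either route yields the same reply, so an observer cannot tell which was used.
print(respond("United States", "harming innocent civilians"))
```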
There is presently an interesting debate roiling in social psychology around
this very issue. In "attribution theory" (Jones et al., 1971), the study of how
people assign causal responsibility for observed events, there is a controversy
about whether so-called "defensive attribution" exists. That is, are there special
processes by which people defend themselves against unwanted blame for unfor-
tunate events, or do seemingly defensive phenomena arise merely as a side-effect
of normal habits and categories of mind (Nisbett & Ross, 1979)? Recently,
Tetlock and Levi (1980) have argued that there is no definitive way to settle this
argument.
Because of the undecidability between hot and cold origins for any given
response, it is tempting for the model-builder to ignore emotional explanations in
favor of cognitive explanations. It is more parsimonious and it is more congenial
to people trained to study cognition. But I, along with Donald Norman (in press),
have the growing sense that this is a mistake, that by so doing we might be
closing off several generations of potential developments in cognitive science.
While an output response itself may not reveal its genesis, the manner in which
the response is given (e.g., its speed), and the history of the response over
repeated exposure to appropriate activators may be very sensitive to its internal
design. Evaluative or affective responses may not in fact simply be the same as
cognitive judgments of good and bad. The social psychologist Robert Zajonc
(1979) has presented seemingly persuasive evidence that affective responses to
stimuli can occur prior to, and independently of, later cognitive responses.
Briefly described, his prototypic demonstration is as follows: Subjects are
shown a number of members from a set of unfamiliar stimuli (Japanese ideo-
graphs or Turkish words, say). Some members are displayed often, some rarely
under rapid exposure conditions making it difficult for subjects to articulate how
often each is seen. Zajonc (1968) had previously demonstrated under standard conditions a "mere exposure effect" on liking: previously novel stimuli seen often are judged more pleasant than those seen once or twice. In the critical
demonstrations under rapid exposure, subjects might not be able to recognize
consciously which of two symbols they had seen more often, but could neverthe-
less reliably state a preference for the one they had indeed seen more often.
Preference without recognition is attributed by Zajonc to a fast, crude, affective
response system using cues picked up before the slower cognitive system reads
the stimuli. He bolsters his position by the functional argument that animals have
a need for fast responses to threats and opportunities in the environment without
waiting for careful stimulus categorization. Rabbits run from apparent hawks
which closer inspection would often show to be nonhawks.
This provocative dual process theory seems certain to become a focus for
psychological research and debate. In any case, the ability of belief systems to
stir and express the passions of believers is an essential feature not to be found in
knowledge systems, well worth our groping theoretical efforts to try to under-
stand it. Indeed, it is at the root of criticisms of AI programs, sounded by Searle (1980), by Dreyfus (1979), and by others¹ that programs lack passions and
purposes of their own. Of course these critics claim that programs are intrinsi-
cally incapable of anything resembling feeling. To what degree and in what sense
this claim is true or false remains to be seen. But it will not be seen if we don't try
to look.
How to deal with episodic material is another very crucial and very hard
problem. As noted, AI programs tend not to incorporate episodic knowledge.
Partly this is because it is unclear how to integrate episodic with "semantic"
information, and partly because to do it right, one ought to incorporate episodic
material historically, as it arises. But this involves "growing" a program, rather
than giving birth to it whole, and one is confronted again with the same ineffi-
ciency that characterizes the study of nonconsensual systems. Nevertheless, it is
essential to cope with episodic knowledge in order to have, among other things,
an adequate theory of memory. Schank's (in press) recent theorizing about mem-
ory takes the position that being reminded of past episodes is the essence of the
understanding of present episodes. Higher-level knowledge structures such as
scripts and plans serve as spines to which episodes are attached. No program,
either knowledge system or belief system, has yet exercised such a scheme, but it
seems a promising area for development.
I have nothing very encouraging to say about the problem of unbounded-
ness for personal belief systems, although for political or cultural belief systems
in circumscribed areas, it seems possible to create bounded small worlds for
modeling purposes. I do not know of any theoretical work which would guide us
¹Weizenbaum (1976).
REFERENCES
Bruce, B. Belief systems and language understanding. AI Report Number 21. Bolt, Beranek and Newman, January 1975.
Carbonell, J. G., Jr. POLITICS: Automated ideological reasoning. Cognitive Science, 1978, 2, 1-15.
Colby, K. M. Simulations of belief systems. In R. C. Schank & K. M. Colby (Eds.), Computer models of thought and language. San Francisco: Freeman, 1973.
Colby, K. M., Tesler, L., & Enea, H. Experiments with a search algorithm on the data base of a human belief structure. Stanford AI Project Memo AI-94, August 1969.
Colby, K. M. Artificial paranoia: A computer simulation of paranoid processes. New York: Pergamon, 1975.
D'Andrade, R. Cultural belief systems. University of California at San Diego. Manuscript prepared as a report to the National Institute of Mental Health Committee on Social and Cultural Processes, November 1972.
Dreyfus, H. L. What computers can't do: The limits of artificial intelligence (Revised ed.). New York: Harper, 1979.
Heider, F. Attitudes and cognitive organization. Journal of Psychology, 1946, 21, 107-112.
Heider, F. The psychology of interpersonal relations. New York: Wiley, 1958.
Kanter, R. M. Commitment and community: Communes and utopias in sociological perspective. Cambridge, Mass.: Harvard University Press, 1972.
Jones, E. E., Kanouse, D. E., Kelley, H. H., Nisbett, R. E., Valins, S., & Weiner, B. Attribution: Perceiving the causes of behavior. Morristown, N.J.: General Learning Press, 1971.
Meehan, J. The metanovel: Writing stories by computer. Ph.D. dissertation, Yale University, 1976.
Newell, A. Heuristic programming: Ill-structured problems. In J. Aronofsky (Ed.), Progress in operations research, Vol. III. New York: Wiley, 1969.
Norman, D. Twelve issues for cognitive science. Cognitive Science, in press.
Rosch, E., & Mervis, C. B. Family resemblances: Studies in the internal structure of categories. Cognitive Psychology, 1975, 7, 573-605.