COGNITIVE SCIENCE 3, 355-366 (1979)

THEORETICAL NOTE

Differences Between Belief and Knowledge Systems*

ROBERT P. ABELSON
Yale University

Seven features which in practice seem to differentiate belief systems from knowl-
edge systems are discussed. These are: nonconsensuality, "existence beliefs,"
alternative worlds, evaluative components, episodic material, unboundedness,
and variable credences. Each of these features gives rise to challenging repre-
sentation problems. Progress on any of these problems within artificial intelli-
gence would be helpful in the study of knowledge systems as well as belief
systems, inasmuch as the distinction between the two types of systems is not
absolute.

As long ago as 1963 I wrote a paper entitled, "Computer simulation of hot
cognition" (Abelson, 1963), and in 1965, "Computer simulation of individual
belief systems" (Abelson & Carroll, 1965). These papers represented an interest
which was then quite idiosyncratic. In 1979, this interest is still unusual, but with
the advent of cognitive science there is now a place for the study of belief
systems in relation to other aspects of human cognition and human affect. The
study of belief systems is intimately related to work on knowledge systems, but
has enough unique features to justify it as a separate topic.
In this paper I will proceed as follows: First, I will set forth what I see as
the differences between a "belief system" and a "knowledge system," at the
same time acknowledging that they have many points in common. Then I will
explain why from an artificial intelligence standpoint it is difficult to construct
belief system models, but why it is nevertheless important to do so. Also I will outline what
kinds of developments are in progress or are still lacking in the modeling of belief
and knowledge systems.

A. THE DISTINGUISHING FEATURES OF "BELIEF SYSTEMS"

The use of the term "belief system" can be highly confusing. Psychologists,
political scientists and anthropologists tend to use the term in rather different
senses. It would be fruitless to try to settle once and for all what is really meant
*This paper is a slightly revised version of a symposium talk delivered at the First Annual
Meeting of the Cognitive Science Society.

by "belief system." I do not propose here to attempt a philosophical
analysis. Instead, I will review some of the work on belief systems (without
attempting a complete survey) in an attempt to identify features which in practice
seem to be distinctive about beliefs and belief systems. I emphasize distinctive
features because belief systems have much in common with knowledge systems,
and were there no distinctive features at all then modeling belief systems would
be just like modeling knowledge systems and would need no special structures or
processes.
Imagine, then, a stored body of structured knowledge. There is some
network of interrelated concepts and propositions at varying levels of generality,
and there are some processes by which a human or a computer accesses and
manipulates that knowledge under current activating circumstances and/or in the
service of particular current purposes. What features warrant calling this stored
body of concepts a belief system? I see seven possibly important conditions. In
the spirit of the modern emphasis on definition by prototype (Smith, Shoben &
Rips, 1974; Rosch & Mervis, 1975) rather than by necessary and sufficient
conditions, none of these seven features are individually definitive. In fact they
vary somewhat in the degree to which they distinguish belief systems from
knowledge systems. Any system embodying most of them, however, will have
the essential character of a "belief system":
1. The elements (concepts, propositions, rules, etc.) of a belief system are not
consensual. That is, the elements of one system might be quite different from
those of a second in the same content domain, and those of a third different from
both. Individual differences of this kind do not generally characterize ordinary
knowledge systems, except insofar as one might want to represent differences in
capability or complexity. Belief systems may also vary in complexity, but the
most distinctive variation is conceptual variation at a roughly comparable level of
complexity. Consider some societal problem area like, say, the Generation Gap.
Youngsters may have a highly articulated system of concepts blaming the prob-
lem on adult restrictiveness and insensitivity, whereas oldsters develop concepts
around adolescent rebellion and immaturity. Meanwhile, psychologists may
view the matter in terms of communication failure between generations.
An interesting sidelight on the consensuality question is whether a belief
system is "aware," in some sense, that alternative constructions are possible.
Semantically, "belief" as distinct from knowledge carries the connotation of
disputability-the believer is aware that others may think differently. From an
artificial intelligence perspective, however, the issue of such awareness (and the
problem of how to represent it) seems separable from the design of the belief
system itself. And one can imagine belief systems, such as the very young child's
construction of the life and works of Santa Claus, which are so naive as not to
contemplate alternative realities. Now, if awareness of alternatives is not repre-
sented in a belief system itself, then obviously one cannot tell by looking within
the system whether it is consensual or not. Consensuality, one might say, is not a

transparent feature of belief systems, and thus there are cases where it is not clear
whether something is a belief system or a knowledge system.
Consider so-called "cultural belief systems" (D'Andrade, 1972). If every
normal member of a particular culture believes in witches, then as far as they are
concerned, it is not a belief system; it is a knowledge system. They know about
witches. But the anthropologist who studies this culture is aware of many witch-
less cultures, and thus uses the label "belief system" without flinching. The
other side of this same coin is that scientific knowledge can be viewed as mere
belief by those outside the community of shared presuppositions about science as
a way of knowing.
For cognitive science, the point of this little discussion is that nonconsen-
suality should somehow be exploited if belief systems are to be interesting in
their own right as opposed to knowledge systems. In an AI belief system program
in particular, this could be done either by (a) representing the awareness of
alternatives, or (b) embodying different belief systems by different data bases.
Carbonell's (1978) POLITICS program does both of these things to some extent.

2. Belief systems are in part concerned with the existence or nonexistence of
certain conceptual entities. God, ESP, witches, and assassination conspiracies
are examples of such entities. This feature of belief systems is essentially a
special case of the nonconsensuality feature. To insist that some entity exists
implies an awareness of others who believe it does not exist. Moreover, these
entities are usually central organizing categories in the belief system, and as
such, they may play an unusual role which is not typically to be found in the
concepts of straight knowledge systems. The central delusion of persecution
afflicting paranoids would be one example. As in Colby's (1975) PARRY model of
paranoid thinking, just about any topic of conversation is capable of activating
the kernel threat the system perceives itself as facing. Other than by this property
of obsessional centrality, however, it is not possible in inspecting a given belief
system by itself to tell whether a given category embodies a novel existence
belief. That is, "existence categories," like other nonconsensual concepts, are
not a transparent feature of a belief system. By contrast, the following five
features of belief systems are transparent.

3. Belief systems often include representations of "alternative worlds," typi-
cally the world as it is and the world as it should be. Revolutionary or Utopian
ideologies (cf. Kanter, 1972) especially have this character. The world must be
changed in order to achieve an idealized state, and discussions of such change
must elaborate how present reality operates deficiently, and what political, eco-
nomic, social (etc.) factors must be manipulated in order to eliminate the de-
ficiencies. This is, to be sure, a kind of problem-solving, but at a more abstract
level than the usually studied problem-solving tasks in cognitive science. It is not
a matter of finding the sequence of rules to apply to a starting state to reach a
goal; it is a matter of rejecting the old rules and finding new ones which achieve

the goal state. In the terminology of Newell (1969) and others, this is an "ill-
structured problem."
4. Belief systems rely heavily on evaluative and affective components. There are
two aspects to this, one "cognitive," the other "motivational." First, a belief
system typically has large categories of concepts defined in one way or another as
themselves "good" or "bad," or as leading to good or bad. For the antinuclear
activist, nuclear power is bad, pollution is bad, callous industry and lax regula-
tion are bad, materialism and waste are bad, natural alternative energy sources
are good, conservation is good, concern about hazards to human life is good, and
so on. These polarities, which exert a strong organizing influence on other
concepts within the system, may have a very dense network of connections rare
in ordinary knowledge systems.
From a formal point of view, however, the concepts of "good" and "bad"
might for all intents and purposes be treated as cold cognitive categories just like
any other categories of a knowledge system. Inferences about goodness and
badness might be governed by rules such as the "balance principle" (Heider,
1946, 1958; Abelson & Rosenberg, 1958). Without going into all the details, an
illustration of a rule following from the balance principle is: If z is good (bad),
and y harms z, then y is bad (good).
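The rule just stated can be sketched as a few lines of code. This is a minimal illustration of balance-principle inference, not an implementation from the paper; the function name and relation labels are assumptions introduced here.

```python
# A minimal sketch of balance-principle inference: evaluations propagate
# through "helps"/"harms" links. Names here are illustrative, not Abelson's.

def infer_valence(relation: str, object_valence: str) -> str:
    """If y helps z, y inherits z's valence; if y harms z, the valence flips."""
    flip = {"good": "bad", "bad": "good"}
    if relation == "helps":
        return object_valence
    if relation == "harms":
        return flip[object_valence]
    raise ValueError(f"unknown relation: {relation}")

# "If z is good and y harms z, then y is bad":
print(infer_valence("harms", "good"))   # bad
# "If z is bad and y harms z, then y is good":
print(infer_valence("harms", "bad"))    # good
```

Chaining such inferences over a network of helping and harming links is what gives the polarities their organizing influence.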
An interesting recent artificial intelligence program which refers a lot to
good and bad entities but seems rather more like a knowledge system than a
belief system, is VEGE, in unreported work at Yale by Michael Dyer. VEGE gives
advice on organic vegetable gardening much as a newspaper columnist answer-
ing readers' questions might. The balance principle is implicitly used throughout:
anything which encourages pests or other entities harming vegetables is to be
avoided, anything which discourages those entities is to be welcomed, and so on.
Thus the formal logic of chains of helping and harming (or enabling and disen-
abling) relationships can proceed perfectly reasonably in a knowledge system
having none of the features (I), (2), (3) above characterizing belief systems. Of
course if the chains of relationship are nonconsensual, seeming to follow
"psycho-logic" (Abelson & Rosenberg, 1958) rather than logic (e.g., "Only
bloodshed will lead to peace"), then we are back on the ground of prototypic
belief systems again.
When the good and bad entities for the system have motivational force
rather than simply categorical status, unique consequences for belief systems are
even more likely to emerge. By motivational force I mean that when affectively
toned entities are activated, the processes of the system are altered. Thus a
system that found some input exciting would process it more deeply, or if fearful
would avoid it, and so on. A specialized instance of this possibility occurs in the
conversational behavior of Colby's (1975) paranoid PARRY program, which
changes rather drastically as a function of its levels of fear, anger, and anxiety.
5. Belief systems are likely to include a substantial amount of episodic material
from either personal experience or (for cultural belief systems) from folklore or

(for political doctrines) from propaganda. Several years ago I interviewed in
depth a number of strong believers in ESP and found that very frequently some
striking personal experience was pivotal (Ayeroff & Abelson, 1976). One
striking personal experience was pivotal (Ayeroff & Abelson, 1976). One
woman reported a confirmation of a vision she had of her husband being killed in
an accident, another person claimed that he was able on particular occasions to
know what his totally paralyzed sister was thinking, and so on. The force of such
episodes is sometimes as subjective "proof" of a belief, especially an existence
belief like that of ESP, and sometimes as an illustration or object lesson to enrich
a particular concept. For the belief system of Senator Barry Goldwater, for
example, the time when the Russians built the Berlin Wall was a crucial opportu-
nity lost. The Free World could have stood up to the Communists and changed
the realities of the future rather than passively acquiescing to an unsatisfactory
present.
Many knowledge systems have no apparent need for such episodes, relying
instead entirely on general facts and principles. It would seem odd for a program
that understood let us say, the physics of balls rolling down inclined planes, to
appeal to anecdotes such as the "time in May, 1938 when in Chicago a 20 kg.
ball was rolled down a 30-degree incline." I do not mean to suggest, however,
that knowledge systems necessarily must lack episodic material. Especially if the
knowledge concerns social reality, as with the scripts, plans, goals and themes of
Schank and Abelson's (1977) theory, a program could appropriately use com-
bined episodic and semantic memory features. Indeed, Schank's (in press) new
theorizing about MOPs has precisely this character.

6. The content set to be included in a belief system is usually highly "open."
That is, it is unclear where to draw a boundary around the belief system, exclud-
ing as irrelevant concepts lying outside. This is especially true if personal
episodic material is important in the system. Consider, for example, a parental
belief system about the irresponsibility and ingratitude of the modern generation
of youth. Suppose, as might very well be the case, that central to this system is a
series of hurtful episodes involving the believer's own children. For these
episodes to be intelligible, it would be necessary for the system to contain
information about these particular children, about their habits, their develop-
ment, their friends, where the family lived at the time, and so on. And one would
have to have similar conceptual amplification about the "self" of the believer.
Each amplified concept would relate to new concepts themselves needing
amplification, and there might be no end to it. Colby (1973), referring to an
interviewing project attempting to elicit the belief system of a respondent about
child-rearing, reports precisely this problem. Is it relevant to the subject's beliefs
about child-rearing to consider her beliefs about nutrition? About safety in the
streets? About sex?
Now of course the same problem is encountered with knowledge systems.
Openness is often a matter of degree. An expert on, say, moon rocks, might well
need to know a lot about cosmology, geology, physical chemistry, and

mathematics, and the appropriate boundaries in each of these disciplines might
not be well-defined because each bit of knowledge would drag new bits into the
system.
The reason I list unboundedness as a distinctive feature of belief systems is
that belief systems always necessarily implicate the self-concept of the believer at
some level, and self-concepts have wide boundaries, indeed. On the other hand,
knowledge systems usually exclude the Self, and limitation to restricted problem
areas is conceivable. In order to avoid open boundaries, certain artificial intelli-
gence programs (e.g., Winograd, 1972) have been exercised in highly cir-
cumscribed "small world" knowledge domains. The belief system theorist is
usually hard put to find "small worlds."
7. Beliefs can be held with varying degrees of certitude. The believer can be
passionately committed to a point of view, or at the other extreme could regard a
state of affairs as more probable than not, as in "I believe that micro-organisms
will be found on Mars." This dimension of variation is absent from knowledge
systems. One would not say that one knew a fact strongly.
There exist some examples of attempts to model variable credences or
"confidence weights" of beliefs (Colby, Tesler & Enea, 1969; Becker, 1973)
and how these change as a function of new information. A distinction should be
made between the certitude attaching to a single belief and the strength of
attachment to a large system of beliefs. Philosophical analyses of the difference
between believing and knowing often implicate just single beliefs. But what is
more interesting for cognitive science is the distinction at the systemic level.
Whole belief systems are usually at least as confidently held as whole knowledge
systems, even if they contain some weak individual beliefs. The key theoretical
problems lie in the relationships between the credences attached to single beliefs
and to the system containing those beliefs.
In summary, I have sketched seven features which seem to characterize
belief systems as distinct from knowledge systems. These are: nonconsensuality,
"existence beliefs," alternative worlds, evaluative components, episodic mate-
rial, unboundedness, and variable credences. None of these features is individu-
ally guaranteed to distinguish belief from knowledge; in combination they are
very likely to do so.

B. THE DIFFICULTY AND THE IMPORTANCE OF
STUDYING BELIEF SYSTEMS

Belief systems have not been very popular objects of study in AI. Partly this is
because the backgrounds of most people in computer science have led them to
prefer more formal problems arising from mathematics or linguistics, and partly
because belief systems present substantial modeling difficulties. The coming of
age of cognitive science has introduced to AI a much wider range of disciplines,

anthropology for example, and we can expect more motivation in the future for
studying belief systems. But the difficulties will not so easily go away, and it is
well to consider what they are.
Each of the seven distinguishing features of belief systems creates theoreti-
cal or practical difficulties. Most of these difficulties, fortunately, produce in-
teresting opportunities for needed new developments in cognitive science. We
consider each of the seven features in turn.
Nonconsensuality poses a problem because it is inefficient to design a huge
system for a single target case. It is neither very easy nor very convincing to try
to validate a model that simulates an individual case. There is always the suspi-
cion that the model was fitted ad hoc, and there is always the sense of disap-
pointment that the model is not generalizable. The trick, therefore, is to encom-
pass individual variations within a common systemic framework, rather than
having to start all over again each time a new individual is to be modeled.
One excellent strategy for accomplishing this has been provided by Car-
bonell (1978). In his POLITICS program for representing American foreign policy
views, virtually all of the conceptual elements are shared among belief systems
of different shades. What differs between, say, left- and right-wing views is the
network of perceived goal priorities for the major international actors. Thus the
left-winger sees the Russians as very interested in preserving world peace and not
much willing to push too hard to gain control of more territory, whereas the
right-winger perceives the importance of these two goals to the Russians in the
opposite relative order. This simple cognitive device turns out to produce major
differences in systemic interpretation of real-world events, because the system
relies very heavily on attributions about plans and goals. How far this device can
be pushed to handle nonconsensuality in general remains to be seen.
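The device described above, shared conceptual elements with ideology-specific goal priorities, can be sketched in a few lines. This is an illustration of the idea, not Carbonell's code; all names, goals, and the dictionary layout are assumptions introduced here.

```python
# Illustrative sketch of the POLITICS device: both ideologies share the same
# concepts; only the perceived goal priorities attributed to an actor differ.
# Names and goal labels are assumptions, not taken from Carbonell (1978).

# Goals the actor "Russia" is perceived to hold, ranked by importance,
# as seen from two different ideological belief systems.
PERCEIVED_GOALS = {
    "left":  ["preserve-world-peace", "gain-territory"],
    "right": ["gain-territory", "preserve-world-peace"],
}

def top_goal(ideology: str, perceived: dict = PERCEIVED_GOALS) -> str:
    """Return the goal the given ideology attributes most strongly to the actor."""
    return perceived[ideology][0]

print(top_goal("left"))    # preserve-world-peace
print(top_goal("right"))   # gain-territory
```

Because event interpretation leans heavily on attributions about plans and goals, reordering one priority list is enough to yield systematically different readings of the same real-world event.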
Existence beliefs as special cases of nonconsensuality are more trou-
blesome. If what exists for one believer does not exist for the other, and these
privileged constructs are highly central, then the common content core between
the different systems declines drastically. Any general framework between sys-
tems would then have to rely on similar processes operating on disparate con-
tents. While this is certainly conceivable, it is not totally satisfying.
Furthermore, it is a possibility that systems with quite different existence
beliefs might use different processing modes. A magician, a statistician, and a
believer in ESP might have very little in common in the ways they reasoned
about an apparent instance of mind-reading. To model the ways separately, each
in its own terms, might be instructive and fun, but would not do much to cure the
ad hoc situation characterizing the modeling of a single nonconsensual belief
system. Thus I see differences in existence beliefs as one of the most vexing of
the characteristic features of belief systems.
There has been a little bit of interest in AI in alternate worlds problems.
The way this usually arises is that mental representations must often contain
models of other mental representations. In the simplest sort of case, knowing
whether someone else knows something may be a crucial piece of knowledge for

coherent planning or understanding of plans. When X asks Y something (in
Schank & Abelson's, 1977, jargon, when the "ASK planbox for DELTA-
KNOW" is used), X presumes that Y might know the answer, and Y presumes
that X does not know the answer (cf. Meehan, 1976, for the confusion that

results when these presumptions are blatantly violated). There are more complex
cases such as those that arise in the semantic analysis of concepts such as
"promise" or "request," etc. If I hear you agree to my request to do X, then
among other things (Bruce, 1975), I may believe that you believe that I believe
that you can do X. Here there is a representation of a representation of a repre-
sentation.
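A representation of a representation of a representation can be captured by a recursive data structure. The sketch below is one simple way to do so; the class name, fields, and depth measure are illustrative assumptions, not a proposal from the paper.

```python
# A toy sketch of embedded belief representation: a belief is an
# (agent, proposition) pair, where the proposition may itself be a belief.
# Structure and names are illustrative assumptions.

from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Belief:
    agent: str
    proposition: Union[str, "Belief"]   # a plain fact, or another belief

def depth(b: Union[str, "Belief"]) -> int:
    """Count how many levels of 'X believes that ...' are embedded."""
    return 0 if isinstance(b, str) else 1 + depth(b.proposition)

# "I believe that you believe that I believe that you can do X."
nested = Belief("I", Belief("you", Belief("I", "you can do X")))
print(depth(nested))  # 3
```

The recursive definition makes explicit why unbounded regress is formally possible but, as noted below, rarely needed in practice.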
In as yet unpublished work, Yorick Wilks has undertaken a systematic
analysis of the rules by which embedded representations might operate without
totally boggling the system containing them. He considers examples such as
Interlocutor telling System, "I don't think Harry likes you; but don't tell him I
told you." Comprehension of this requires System to cognize Harry's representa-
tion of Interlocutor's representation of Harry's representation of System. (Or
something of the sort.) Fortunately for most belief system work, the alternative
worlds problem is not as complicated as this. For single beliefs one might worry
about how to deal with embedded conceptualizations such as "I believe that you
believe that I believe . . . ," etc., but I do not know of anyone who is concerned
with modeling a system of beliefs about a system of beliefs about a system of
beliefs. Real-world belief systems are not often tortured by sinuous regress. As a
believer I know my own values and prospects, and the values and prospects of
my enemies and my friends. Alternative worlds come down largely to alternative
goals and plans, the system vs. its enemies. Knowledge of "counterplanning"
strategies is useful, and judging by the various developments in Carbonell
(1978), Bruce and Newman (1978), and Wilensky (1978), the representation of
such strategies is tractable.
The evaluative components of belief systems give rise to the least well-
understood problems. Earlier I said that evaluations of entities as good or bad
create important cognitive categories, but that the more unique consequences of
evaluation are motivational rather than solely cognitive. While intuitively it seems
clear what is meant by "motivational," in practice the distinction between cogni-
tive and motivational is quite subtle. This is especially true in the traditions of
artificial intelligence and information processing psychology, where the habit of
mind is to treat everything in some sense as cognitive.
One can usually, perhaps always, simulate motivational phenomena by
cognitive ones. An example will clarify this assertion. Some years ago, while
first trying to simulate by computer the belief system processes (Abelson &
Carroll, 1965) that characterized, say, the foreign policy views of Barry Goldwa-
ter, we included a motivated process of evaluative inconsistency reduction.
Every time the system encountered a good actor involved in a bad action, or a
bad actor involved in a good action, it went into a trouble-shooting mode de-

signed to explain away such a state of affairs. A major heuristic was rationaliza-
tion (Abelson, 1963): for example, a good actor might perform a bad action
because forced to by a bad actor, or because the bad action was only temporary,
leading to a greater good in the long run. Another heuristic was denial, whereby
the system refused to accept that the good actor was performing a bad action.
These heuristics all seemed quite realistic. Later I realized, however, that incon-
sistency resolutions need not be "computed" time after time. The system can
simply store an acceptable resolution to every familiar dilemma. What once may
have been a motivated process becomes frozen into a cognitive package. Thus if
we told the Goldwater Machine that the United States was harming innocent
Vietnamese civilians, and it replied that this was in the service of freedom, an
observer could not tell if this response was a motivated rationalization cooked up
on the spot, or a well-rehearsed answer given entirely without either anguish or
thinking. If alternatively the system denied the possibility that the United States
was harming innocent civilians, an observer could not distinguish this as a "hot"
denial process from a cold search through an established conceptual taxonomy in
which such an event is simply not recognized because there is no script or plan
which could plausibly contain it. The response itself does not reveal its own
motivational origins.
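The "frozen package" point above can be made concrete with a small sketch: a system that stores an acceptable resolution for every familiar dilemma and falls back to denial otherwise. The dictionary, function name, and example entries are assumptions for illustration, not the Goldwater Machine's actual mechanism.

```python
# Illustrative sketch of rationalization "frozen into a cognitive package":
# resolutions are stored per familiar dilemma rather than computed each time.
# All names and entries here are assumptions introduced for illustration.

# (actor, bad action) dilemmas mapped to pre-stored resolutions
STORED_RESOLUTIONS = {
    ("United States", "harming civilians"): "this is in the service of freedom",
}

def respond(actor: str, bad_action: str) -> str:
    """Return a cached rationalization, or fall back to denial."""
    key = (actor, bad_action)
    if key in STORED_RESOLUTIONS:
        # Looks like "hot" motivated reasoning, but is a cold table lookup.
        return STORED_RESOLUTIONS[key]
    # No script or plan could plausibly contain the event: deny it.
    return f"deny that {actor} is {bad_action}"

print(respond("United States", "harming civilians"))
print(respond("United States", "abandoning allies"))
```

From the outside, the lookup and a freshly computed rationalization produce identical responses, which is exactly the undecidability the text describes.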
There is presently roiling an interesting debate in social psychology around
this very issue. In "attribution theory" (Jones et al., 1971), the study of how
people assign causal responsibility for observed events, there is a controversy
about whether so-called "defensive attribution" exists. That is, are there special
processes by which people defend themselves against unwanted blame for unfor-
tunate events, or do seemingly defensive phenomena arise merely as a side-effect
of normal habits and categories of mind (Nisbett & Ross, 1979)? Recently,
Tetlock and Levi (1980) have argued that there is no definitive way to settle this
argument.
Because of the undecidability between hot and cold origins for any given
response, it is tempting for the model-builder to ignore emotional explanations in
favor of cognitive explanations. It is more parsimonious and it is more congenial
to people trained to study cognition. But I, along with Donald Norman (in press),
have the growing sense that this is a mistake, that by so doing we might be
closing off several generations of potential developments in cognitive science.
While an output response itself may not reveal its genesis, the manner in which
the response is given (e.g., its speed), and the history of the response over
repeated exposure to appropriate activators may be very sensitive to its internal
design. Evaluative or affective responses may not in fact simply be the same as
cognitive judgments of good and bad. The social psychologist Robert Zajonc
(1979) has presented seemingly persuasive evidence that affective responses to
stimuli can occur prior to and independent from later cognitive responses.
Briefly described, his prototypic demonstration is as follows: Subjects are
shown a number of members from a set of unfamiliar stimuli (Japanese ideo-

graphs or Turkish words, say). Some members are displayed often, some rarely
under rapid exposure conditions making it difficult for subjects to articulate how
often each is seen. Zajonc (1968) had previously demonstrated under standard
conditions a "mere exposure" effect on liking: previously novel stimuli seen
often are judged more pleasant than those seen once or twice. In the critical
demonstrations under rapid exposure, subjects might not be able to recognize
consciously which of two symbols they had seen more often, but could neverthe-
less reliably state a preference for the one they had indeed seen more often.
Preference without recognition is attributed by Zajonc to a fast, crude, affective
response system using cues picked up before the slower cognitive system reads
the stimuli. He bolsters his position by the functional argument that animals have
a need for fast responses to threats and opportunities in the environment without
waiting for careful stimulus categorization. Rabbits run from apparent hawks
which closer inspection would often show to be nonhawks.
This provocative dual process theory seems certain to become a focus for
psychological research and debate. In any case, the ability of belief systems to
stir and express the passions of believers is an essential feature not to be found in
knowledge systems, well worth our groping theoretical efforts to try to under-
stand it. Indeed, it is at the root of criticisms of AI programs, sounded by Searle
(1980), by Dreyfus (1979), and by others¹ that programs lack passions and
purposes of their own. Of course these critics claim that programs are intrinsi-
cally incapable of anything resembling feeling. To what degree and in what sense
this claim is true or false remains to be seen. But it will not be seen if we don't try
to look.
How to deal with episodic material is another very crucial and very hard
problem. As noted, AI programs tend not to incorporate episodic knowledge.
Partly this is because it is unclear how to integrate episodic with "semantic"
information, and partly because to do it right, one ought to incorporate episodic
material historically, as it arises. But this involves "growing" a program, rather
than giving birth to it whole, and one is confronted again with the same ineffi-
ciency that characterizes the study of nonconsensual systems. Nevertheless, it is
essential to cope with episodic knowledge in order to have, among other things,
an adequate theory of memory. Schank's (in press) recent theorizing about mem-
ory takes the position that being reminded of past episodes is the essence of the
understanding of present episodes. Higher-level knowledge structures such as
scripts and plans serve as spines to which episodes are attached. No program,
either knowledge system or belief system, has yet exercised such a scheme, but it
seems a promising area for development.
I have nothing very encouraging to say about the problem of unbounded-
ness for personal belief systems, although for political or cultural belief systems
in circumscribed areas, it seems possible to create bounded small worlds for
modeling purposes. I do not know of any theoretical work which would guide us
¹Weizenbaum (1976).

in deciding whether there is substantial loss of validity when boundaries are
somewhat arbitrarily drawn. Probably for a long time this will remain a matter of
judgment.
judgment.
Finally, and least hopefully, the problem of variable credences seems to
me to be a tantalizing quagmire. It is very messy to write models in which each
belief is indexed with a numerical or quasi-numerical credence value which goes
up or down depending on the fate of the evidence supporting it or supporting
related beliefs. However, if we are ever to model change in belief systems (surely
a very pivotal phenomenon in human affairs), then it seems as though we need to
model the mechanism by which doubt spreads corrosively from belief to belief
within a formerly stable system until a kind of "catastrophe" point is reached
when whole bundles of beliefs are abandoned in favor of new ones. Possibly the
assignment of numerical credences existing through time is not the way to ap-
proach this problem. Perhaps credences are somehow calculated only when
needed in the face of some challenge, but otherwise a belief is a belief is a belief.
I believe that this assertion is an insight of sorts, but I confess it doesn't tell us
how to model the phenomenon in which doubt spreads through a belief system.
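One way to make the spreading-doubt idea concrete is the following minimal sketch. It assumes beliefs carry numeric credences and are linked by support relations; doubt cast on one belief propagates, damped, to the beliefs it supports, and whatever falls below a threshold is abandoned. The structure, names, and parameters are hypothetical illustrations, not a validated psychological model.

```python
# Minimal sketch of corrosive doubt spreading through a belief system.
# Beliefs carry numeric credences; "supports" links pass on lost
# credence at a damping rate. All values here are illustrative.

def spread_doubt(credences, supports, shaken, loss, damping=0.5, threshold=0.2):
    """Reduce credence in `shaken`, propagate the loss along support
    links, and return the set of beliefs abandoned (a "catastrophe"
    when a whole bundle falls below threshold at once)."""
    queue = [(shaken, loss)]
    while queue:
        belief, hit = queue.pop()
        credences[belief] = max(0.0, credences[belief] - hit)
        if hit * damping > 0.01:  # stop once the effect is negligible
            for dependent in supports.get(belief, []):
                queue.append((dependent, hit * damping))
    return {b for b, c in credences.items() if c < threshold}

credences = {"seances work": 0.5, "mediums are honest": 0.4, "spirits exist": 0.6}
supports = {"seances work": ["mediums are honest", "spirits exist"]}
abandoned = spread_doubt(credences, supports, "seances work", loss=0.4)
# exposure of one fraudulent seance also erodes the dependent beliefs
```

A scheme of this sort could also defer the arithmetic, computing credences only when a belief is challenged, which is closer to the "a belief is a belief is a belief" position sketched above.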
Let me sum up the whole matter of belief systems as follows: The list of
seven distinguishing features I have given is in effect a compendium of many
frontier problems in cognitive science. One need not study belief systems as such
to care about these problems, although a concern with belief systems makes these
problems more obvious and more pressing. Some of the difficulties raised seem
largely of a technical nature: how to model alternative worlds or variable cre-
dences, how to grapple with unboundedness, or how to integrate episodic and
semantic knowledge. But in a larger sense most of the problems go deeper, and
lie at the heart of what we are willing to include in a cognitive science. I hope that
an increasing number of investigators will begin to try to model nonconsensual
mental systems, and affective processes, and the ways in which personal experi-
ence enriches knowledge. Otherwise our models will not be fully faithful to the
natural vicissitudes of the human mind.

REFERENCES

Abelson, R. P. Computer simulation of 'hot cognition'. In S. Tomkins & S. Messick (eds.), Computer simulation of personality. New York: Wiley, 1963.
Abelson, R. P. & Carroll, J. D. Computer simulation of individual belief systems. American Behavioral Scientist, 1965, 8, 24-30.
Abelson, R. P. & Rosenberg, M. J. Symbolic psycho-logic: A model of attitudinal cognition. Behavioral Science, 1958, 3, 1-13.
Ayeroff, F. & Abelson, R. P. ESP and ESB: Belief in personal success at mental telepathy. Journal of Personality and Social Psychology, 1976, 34, 240-247.
Becker, J. D. A model for the encoding of experiential information. In R. C. Schank & K. M. Colby (eds.), Computer models of thought and language. San Francisco: Freeman, 1973.
Bruce, B. Belief systems and language understanding. AI Report Number 21. Bolt, Beranek and Newman, January 1975.
Bruce, B. & Newman, D. Interacting plans. Cognitive Science, 1978, 2, 195-233.
Carbonell, J. G., Jr. POLITICS: Automated ideological reasoning. Cognitive Science, 1978, 2, 27-51.
Colby, K. M. Simulations of belief systems. In R. C. Schank & K. M. Colby (eds.), Computer models of thought and language. San Francisco: Freeman, 1973.
Colby, K. M. Artificial paranoia: A computer simulation of paranoid processes. New York: Pergamon, 1975.
Colby, K. M., Tesler, L. & Enea, H. Experiments with a search algorithm on the data base of a human belief structure. Stanford AI Project Memo AI-94, August 1969.
D'Andrade, R. Cultural belief systems. University of California at San Diego. Manuscript prepared as a report to the National Institute of Mental Health Committee on Social and Cultural Processes, November 1972.
Dreyfus, H. L. What computers can't do: The limits of artificial intelligence (Revised edition). New York: Harper, 1979.
Heider, F. Attitudes and cognitive organization. Journal of Psychology, 1946, 21, 107-112.
Heider, F. The psychology of interpersonal relations. New York: Wiley, 1958.
Jones, E. E., Kanouse, D. E., Kelley, H. H., Nisbett, R. E., Valins, S. & Weiner, B. Attribution: Perceiving the causes of behavior. Morristown, N.J.: General Learning Press, 1971.
Kanter, R. M. Commitment and community: Communes and utopias in sociological perspective. Cambridge, Mass.: Harvard University Press, 1972.
Meehan, J. The metanovel: Writing stories by computer. Ph.D. dissertation, Yale University, 1976.
Newell, A. Heuristic programming: Ill-structured problems. In J. Aronofsky (ed.), Progress in operations research, Vol. III. New York: Wiley, 1969.
Norman, D. Twelve issues for cognitive science. Cognitive Science, in press.
Rosch, E. & Mervis, C. B. Family resemblances: Studies in the internal structure of categories. Cognitive Psychology, 1975, 7, 573-605.
Schank, R. Language and memory. Cognitive Science, in press.
Schank, R. C. & Abelson, R. P. Scripts, plans, goals and understanding. Hillsdale, N.J.: Lawrence Erlbaum Associates, 1977.
Searle, J. Notes on artificial intelligence. Behavioral and Brain Sciences, in press, 1980.
Smith, E. E., Shoben, E. J. & Rips, L. J. Structure and process in semantic memory: A featural model for semantic decisions. Psychological Review, 1974, 81, 214-241.
Tetlock, P. E. & Levi, A. Attribution bias: On the inconclusiveness of the cognition-motivation debate. Yale University, manuscript submitted for publication.
Weizenbaum, J. Computer power and human reason. San Francisco: Freeman, 1976.
Wilensky, R. Understanding goal-based stories. Ph.D. dissertation, Yale University, 1978.
Winograd, T. Understanding natural language. New York: Academic Press, 1972.
Zajonc, R. The attitudinal effects of mere exposure. Journal of Personality and Social Psychology Monograph Supplement, 1968, 9, No. 2, 1-27.
Zajonc, R. Feeling and thinking: Preferences need no inferences. Address delivered at the American Psychological Association Convention, New York, September 1979.