
--===Precis of "The creative mind: Myths and mechanisms"===--

Below is the unedited preprint (not a quotable final draft) of:


Boden, Margaret A. (1994). Precis of The creative mind: Myths and mechanisms. *Behavioral and Brain Sciences* 17 (3): 519-570.
The final published draft of the target article, commentaries and Author's Response are currently available only in paper.
-----------------------------------------------------------------------------------------------
__ *For information on becoming a commentator on this or other BBS target articles, write to: bbs@soton.ac.uk [bbs@cogsci.soton.ac.uk]
For information about subscribing or purchasing offprints of the published version, with commentaries and author's response, write to: journals_subscriptions@cup.org (North America) or journals_marketing@cup.cam.ac.uk (All other countries). *__
-----------------------------------------------------------------------------------------------
-===Precis of "THE CREATIVE MIND: MYTHS AND MECHANISMS" London: Weidenfeld & Nicolson 1990 (Expanded edn., London: Abacus, 1991.)===-
Margaret A. Boden
School of Cognitive and Computing Sciences
University of Sussex
England FAX: 0273-671320
maggieb@syma.susx.ac.uk

-==Keywords==-
*creativity, intuition, discovery, association, induction, representation, unpredictability, artificial intelligence, computer music, story-writing, computer art, Turing test*
-==Abstract==-
What is creativity? One new idea may be creative, while another is merely new: what's the difference? And how is creativity possible? -- These questions about human creativity can be answered, at least in outline, using computational concepts.

There are two broad types of creativity, improbabilist and impossibilist. Improbabilist creativity involves (positively valued) novel combinations of familiar ideas. A deeper type involves METCS: the mapping, exploration, and transformation of conceptual spaces. It is impossibilist, in that ideas may be generated which -- with respect to the particular conceptual space concerned -- could not have been generated before. (They are made possible by some transformation of the space.) The more clearly conceptual spaces can be defined, the better we can identify creative ideas. Defining conceptual spaces is done by musicologists, literary critics, and historians of art and science. Humanist studies, rich in intuitive subtleties, can be complemented by the comparative rigour of a computational approach. Computational modelling can help to define a space, and to show how it may be mapped, explored, and transformed. Impossibilist creativity can be thought of in "classical" AI-terms, whereas connectionism illuminates improbabilist creativity. Most AI-models of creativity can only explore spaces, not transform them, because they have no self-reflexive maps enabling them to change their own rules. A few, however, can do so.

A scientific understanding of creativity does not destroy our wonder at it, nor make creative ideas predictable. Demystification does not imply dehumanization.
-----------------------------------------------------------------------------------------------
Chapter 1: The Mystery of Creativity
Creativity surrounds us on all sides: from composers to chemists, cartoonists to choreographers. But creativity is a puzzle, a paradox, some say a mystery. Inventors, scientists, and artists rarely know how their original ideas arise. They mention intuition, but cannot say how it works. Most psychologists cannot tell us much about it, either. What's more, many people assume that there will never be a scientific theory of creativity -- for how could science possibly explain fundamental novelties? As if all this were not daunting enough, the apparent unpredictability of creativity seems (to many people) to outlaw any systematic explanation, whether scientific or historical.
Why does creativity seem so mysterious? Artists and scientists typically have their creative ideas unexpectedly, with little if any conscious awareness of how they arose. But the same applies to much of our vision, language, and common-sense reasoning. Psychology includes many theories about unconscious processes. Creativity is mysterious for another reason: the very concept is seemingly paradoxical.
If we take seriously the dictionary-definition of creation, "to bring into being or form out of nothing", creativity seems to be not only beyond any scientific understanding, but even impossible. It is hardly surprising, then, that some people have "explained" it in terms of divine inspiration, and many others in terms of some romantic intuition, or insight. From the psychologist's point of view, however, "intuition" is the name not of an answer, but of a question. How does intuition work?
In this book, I argue that these matters can be better understood, and some of these questions answered, with the help of computational concepts.
This claim in itself may strike some readers as absurd, since computers are usually assumed to have nothing to do with creativity. Ada Lovelace is often quoted in this regard: "The Analytical Engine has no pretensions whatever to originate anything. It can do [only] whatever we know how to order it to perform." If this is taken to mean that a computer can do only what its program enables it to do, it is of course correct. But it does not follow that there can be no interesting relations between creativity and computers.
We must distinguish four different questions, which are often confused with each other. I call them Lovelace questions, and state them as follows:
(1) Can computational concepts help us to understand human creativity?
(2) Could a computer, now or in the future, appear to be creative?
(3) Could a computer, now or in the future, appear to recognize creativity?
(4) Could a computer, however impressive its performance, really be creative?
The first three of these are empirical, scientific, questions. In Chapters 3-10, I argue that the answer to each of them is "Yes". (The first Lovelace question is discussed in each of those chapters; in Chapters 7-8, the second and third are considered also.)
The fourth Lovelace question is not a scientific enquiry, but a philosophical one. (More accurately, it is a mix of three complex, and highly controversial, philosophical problems.) I discuss it in Chapter 11. However, one may answer "Yes" to the first three Lovelace questions without necessarily doing so for the fourth. Consequently, the fourth Lovelace question is ignored in the main body of the book, which is concerned rather with the first three Lovelace questions.
Chapter 2: The Story so Far
This chapter draws on some of the previous literature on creativity. But it is not a survey. Its aim is to introduce the main psychological questions, and some of the historical examples, addressed in detail later in the book. The main writers mentioned are Poincare (1982), Hadamard (1954), Koestler (1975), and Perkins (1981).
Among the points of interest in Poincare's work are his views on associative memory. He described our ideas as "something like the hooked atoms of Epicurus," flashing in every direction like "a swarm of gnats, or the molecules of gas in the kinetic theory of gases". He was well aware that how the relevant ideas are aroused, and how they are joined together, are questions which he could not answer in detail. Another interesting aspect of Poincare's approach is his distinction between four "phases" of creativity, some conscious, some unconscious.
These four phases were later named (by Hadamard) as preparation, incubation, inspiration and verification (evaluation). Hadamard, besides taking up Poincare's fourfold distinction, spoke of finding problem-solutions "quite different" from any he had previously tried. If (as Poincare had claimed) the gnat-like ideas were only "those from which we might reasonably expect the desired solution", then how could such a thing happen?
Perkins has studied the four phases, and criticizes some of the assumptions made by Poincare and Hadamard. In addition, he criticizes the romantic notion that creativity is due to some special gift. Instead, he argues that "insight" involves everyday psychological capacities, such as noticing and remembering. (The "everyday" nature of creativity is discussed in Chapter 10.)
Koestler's view that creativity involves "the bisociation of matrices" comes closest to my own approach. However, his notion is very vague. The body of my book is devoted to giving a more precise account of the structure of "matrices" (of various kinds), and of just how they can be "bisociated" so as to result in a novel idea -- sometimes (as in Hadamard's experience) one quite different from previous ideas. (Matrices appear in my terminology as conceptual spaces, and different forms of bisociation as association, analogy, exploration, or transformation.)
Among the examples introduced here are Kekule's discovery of the cyclical structure of the benzene molecule, Kepler's (and Copernicus') thoughts on elliptical orbits, and Coleridge's poetic imagery in Kubla Khan. Others mentioned in passing include Coleridge's announced intention to write a poem about an ancient mariner, Bach's harmonically systematic set of preludes and fugues, the jazz-musician's skill in improvising a melody to fit a chord sequence, and our everyday ability to recognize that two different apples fall into the same class. All these examples, and many others, are mentioned in later chapters.
Chapter 3: Thinking the Impossible
Given the seeming paradoxicality of the concept of creativity (noted in Chapter 1), we need to define it carefully before going further. This is not straightforward (over 60 definitions appear in the psychological literature (Taylor, 1988)). Part of the reason for this is that creativity is not a natural kind, such that a single scientific theory could explain every case. We need to distinguish "improbabilist" and "impossibilist" creativity, and also "psychological" and "historical" creativity.
People of a scientific cast of mind, anxious to avoid romanticism and obscurantism, generally define creativity in terms of novel combinations of familiar ideas. Accordingly, the surprise caused by a creative idea is said to be due to the improbability of the combination. Many psychometric tests designed to measure creativity work on this principle.
The novel combinations must be valuable in some way, because to call an idea creative is to say that it is not only new, but interesting. However, combination-theorists often omit value from their definition of creativity (although psychometricians may make implicit value-judgements when scoring the novel combinations produced by their experimental subjects).
A psychological explanation of creativity focusses primarily on how creative ideas are generated, and only secondarily on how they are recognized as being valuable. As for what counts as valuable, and why, these are not purely psychological questions. They also involve history, sociology, and philosophy, because value-judgments are largely culture-relative (Brannigan, 1981; Schaffer, in press). Even so, positive evaluation should be explicitly mentioned in definitions of creativity.
Combination-theorists may think they are not only defining creativity, but explaining it, too. However, they typically fail to explain how it was possible for the novel combination to come about. They take it for granted, for instance, that we can associate similar ideas and recognize more distant analogies, without asking just how such feats are possible. A psychological theory of creativity needs to explain how associative and analogical thinking works (matters discussed in Chapters 6 and 7, respectively).
These two cavils aside, what is wrong with the combination-theory? Many ideas which we regard as creative are indeed based on unusual combinations. For instance, the appeal of Heath-Robinson machines lies in the unexpected uses of everyday objects; and poets often delight us by juxtaposing seemingly unrelated concepts. For creative ideas such as these, a combination-theory, supplemented by psychological explanations of association and analogy, might suffice.
Many creative ideas, however, are surprising in a deeper way. They concern novel ideas that not only did not happen before, but which -- we intuitively feel -- could not have happened before.
Before considering just what this "could not" means, we must distinguish two further senses of creativity. One is psychological, or personal: I call it P-creativity. The other is historical: H-creativity. The distinction between P-creativity and H-creativity is independent of the improbabilist/impossibilist distinction made above: all four combinations occur. However, I use the P/H distinction primarily to compare cases of impossibilist creativity.
Applied to impossibilist examples, a valuable idea is P-creative if the person in whose mind it arises could not (in the relevant sense of "could not") have had it before. It does not matter how many times other people have already had the same idea. By contrast, a valuable idea is H-creative if it is P-creative and no-one else, in all human history, has ever had it before.
H-creativity is something about which we are often mistaken. Historians of science and art are constantly discovering cases where other people have had an idea popularly attributed to some national or international hero. Even assuming that the idea was valued at the time by the individual concerned, and by some relevant social group, our knowledge of it is largely accidental. Whether an idea survives, and whether historians at a given point in time happen to have evidence of it, depend on a wide variety of unrelated factors. These include flood, fashion, rivalries, illness, trade-patterns, and wars.
It follows that there can be no systematic explanation of H-creativity, no theory that explains all and only H-creative ideas. For sure, there can be no psychological explanation of this historical category. But all H-creative ideas, by definition, are P-creative too. So a psychological explanation of P-creativity would include H-creative ideas as well.
What does it mean to say that an idea "could not" have arisen before? Unless we know that, we cannot make sense of P-creativity (or H-creativity either), for we cannot distinguish radical novelties from mere "first-time" newness.
An example of a novelty that clearly could have happened before is a newly-generated sentence, such as "The deckchairs are on the top of the mountain, three miles from the artificial flowers". I have never thought of that sentence before, and probably no-one else has, either. Chomsky remarked on this capacity of language-speakers to generate first-time novelties endlessly, and called language "creative" accordingly. But the word "creative" was ill-chosen. Novel though the sentence about deckchairs is, there is a clear sense in which it could have occurred before. For it can be generated by any competent speaker of English, following the same rules that can generate other English sentences. To come up with a new sentence, in general, is not to do something P-creative.
The "coulds" in the previous paragraph are computational "coulds". In other words, they concern the set of structures (in this case, English sentences) described and/or produced by one and the same set of generative rules (in this case, English grammar). There are many sorts of generative system: English grammar is like a mathematical equation, a rhyming-schema for sonnets, the rules of chess or tonal harmony, or a computer program. Each of these can (timelessly) describe a certain set of possible structures. And each might be used, at one time or another, in actually producing those structures.
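The sense in which a first-time-novel sentence nevertheless "could" have occurred before can be made concrete with a toy generative system (a minimal sketch; the grammar and vocabulary here are invented for illustration, not drawn from the book):

```python
import itertools

# A toy generative grammar: a few rules that (timelessly) describe a set of sentences.
grammar = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"]],
    "VP": [["V", "NP"]],
    "N":  [["deckchair"], ["mountain"], ["flower"]],
    "V":  [["overlooks"], ["hides"]],
}

def generate(symbol):
    """Yield every terminal word-string derivable from `symbol`."""
    if symbol not in grammar:              # a terminal word
        yield [symbol]
        return
    for rule in grammar[symbol]:
        # Expand each symbol in the rule, then combine the alternatives.
        parts = [list(generate(s)) for s in rule]
        for combo in itertools.product(*parts):
            yield [w for part in combo for w in part]

sentences = {" ".join(s) for s in generate("S")}
# "the flower hides the mountain" may never have been uttered before, yet the
# same rules that generate its familiar kin generate it too.
print(len(sentences))                                 # 18
print("the flower hides the mountain" in sentences)   # True
```

Any sentence in this set is at most a first-time novelty; a P-creative idea, by contrast, would require altering the grammar itself.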
Sometimes, we want to know whether a particular structure could, in principle, be described by a specific schema, or set of abstract rules. -- Is "49" a square number? Is 3,591,471 a prime? Is this a sonnet, and is that a sonata? Is that painting in the Impressionist style? Could that geometrical theorem be proved by Euclid's methods? Is that word-string a sentence? Is a benzene-ring a molecular structure describable by early nineteenth-century chemistry (before Kekule had his famous vision in 1865)? -- To ask whether an idea is creative or not (as opposed to how it came about) is to ask this sort of question.
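The two arithmetical membership questions above are mechanically decidable, which makes the point concrete (a minimal sketch):

```python
import math

def is_square(n: int) -> bool:
    """Is n describable by the schema 'k * k, for some integer k'?"""
    r = math.isqrt(n)
    return r * r == n

def is_prime(n: int) -> bool:
    """Does n belong to the set picked out by the primality rule?"""
    if n < 2:
        return False
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return False
    return True

print(is_square(49))        # True
print(is_prime(3_591_471))  # False: its digit-sum is 30, so it is divisible by 3
```

Membership in such a rule-defined set is a timeless matter; how a particular mathematician actually arrived at the number is the separate, generative question taken up in the next paragraph.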
But whenever a structure is produced in practice, we can also ask what generative processes actually went on in its production. -- Did a particular geometer prove a particular theorem in this way, or in that? Was the sonata composed by following a textbook on sonata-form? Did Kekule rely on the then-familiar principles of chemistry to generate his seminal idea of the benzene-ring, and if not how did he come up with it? -- To ask how an idea (creative or otherwise) actually arose, is to ask this type of question.
We can now distinguish first-time novelty from impossibilist originality. A merely novel idea is one which can be described and/or produced by the same set of generative rules as are other, familiar, ideas. A genuinely original, or radically creative, idea is one which cannot. It follows that the ascription of (impossibilist) creativity always involves tacit or explicit reference to some specific generative system.
It follows, too, that constraints -- far from being opposed to creativity -- make creativity possible. To throw away all constraints would be to destroy the capacity for creative thinking. Random processes alone, if they happen to produce anything interesting at all, can result only in first-time curiosities, not radical surprises. (As explained in Chapter 9, randomness can sometimes contribute to creativity -- but only in the context of background constraints.)
Chapter 4: Maps of the Mind
The definition of (impossibilist) creativity given in Chapter 3 implies that, with respect to the usual mental processing in the relevant domain (chemistry, poetry, music ...), a creative idea may be not just improbable, but impossible. How could it arise, then, if not by magic? And how can one impossible idea be more surprising, more creative, than another? If an act of creation is not mere combination, what is it? How can such creativity possibly happen?
To understand this, we need to think of creativity in terms of the mapping, exploration, and transformation of conceptual spaces. (The notion of a conceptual space is used informally in this chapter; later, we see how conceptual spaces can be described more rigorously.) A conceptual space is a style of thinking. Its dimensions are the organizing principles which unify, and give structure to, the relevant domain. In other words, it is the generative system which underlies that domain and which defines a certain range of possibilities: chess-moves, or molecular structures, or jazz-melodies.
The limits, contours, pathways, and structure of a conceptual space can be mapped by mental representations of it. Such mental maps can be used (not necessarily consciously) to explore -- and to change -- the spaces concerned.
Evidence from developmental psychology supports this view. Children's skills are at first utterly inflexible. Later, imaginative flexibility results from "representational redescriptions" (RRs) of (fluent) lower-level skills (Clark & Karmiloff-Smith, in press; Karmiloff-Smith, 1993). These RRs provide many-levelled maps of the mind, which are used by the subject to do things he or she could not do before.
For example, children need RRs of their lower-level drawing-skills in order to draw non-existent, or "funny", objects: a one-armed man, or seven-legged dog. Lacking such cognitive resources, a 4-year-old simply cannot spontaneously draw a one-armed man, and finds it very difficult even to copy a drawing of a two-headed man. But 10-year-olds can explore their own man-drawing skill, by using strategies such as distorting, repeating, omitting, or mixing parts. These imaginative strategies develop in a fixed order: children can change the size or shape of an arm before they can insert an extra one, and long before they can give the drawn man wings in place of arms.
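These exploratory strategies presuppose an explicit, part-by-part representation of the drawing skill for them to operate on. A minimal sketch of the idea (the part-list and the three strategy functions are invented for illustration):

```python
# An explicit (redescribed) representation of the man-drawing skill:
# each body-part with the number of instances to be drawn.
man = {"head": 1, "body": 1, "arm": 2, "leg": 2}

def omit(figure, part):
    """Drop one instance of a part: e.g. a one-armed man."""
    f = dict(figure)
    f[part] -= 1
    return f

def repeat(figure, part, n):
    """Insert extra instances: e.g. a seven-legged dog."""
    f = dict(figure)
    f[part] = n
    return f

def mix(figure, old, new):
    """Substitute one kind of part for another: e.g. wings for arms."""
    f = dict(figure)
    f[new] = f.pop(old)
    return f

print(omit(man, "arm"))         # a one-armed man
print(mix(man, "arm", "wing"))  # a man with wings in place of arms
```

A 4-year-old's purely procedural skill, on this view, offers no such part-descriptions for the strategies to manipulate, which is why the "funny-man" drawings are at first impossible.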
The development of RRs is a mapping-exercise, whereby people develop explicit mental representations of knowledge already possessed implicitly.
Few AI-models of creativity contain reflexive descriptions of their own procedures, and/or ways of varying them. Accordingly, most AI-models are limited to exploring their conceptual spaces, rather than transforming them (see Chapters 7 & 8).
Conceptual spaces can be explored in various ways. Some exploration merely shows us something about the nature of the relevant conceptual space which we had not explicitly noticed before. When Dickens described Scrooge as "a squeezing, wrenching, grasping, scraping, clutching, covetous old sinner", he was exploring the space of English grammar. He was reminding the reader (and himself) that the rules of grammar allow us to use seven adjectives before a noun. That possibility already existed, although its existence may not have been realized by the reader.
Some exploration, by contrast, shows us the limits of the space, and identifies specific points at which changes could be made in one dimension or another. To overcome a limitation in a conceptual space, one must change it in some way. One may also change it, of course, without yet having come up against its limits. A small change (a "tweak") in a relatively superficial dimension of a conceptual space is like opening a door to an unvisited room in an existing house. A large change (a "transformation"), especially in a relatively fundamental dimension, is more like the instantaneous construction of a new house, of a kind fundamentally different from (albeit related to) the first.
A complex example of structural exploration and change can be found in the development of post-Renaissance Western music, based on the generative system known as tonal harmony. From its origins to the end of the nineteenth century, the harmonic dimensions of this space were continually tweaked to open up the possibilities (the rooms) implicit in it from the start. Finally, a major transformation generated the deeply unfamiliar (yet closely related) space of atonality.
Each piece of tonal music has a "home-key", from which it starts, from which (at first) it did not stray, and in which it must finish. Reminders of the home-key were constantly provided, as fragments of scales, chords, or arpeggios. As time passed, the range of possible home-keys became increasingly well-defined (Bach's "Forty-Eight" was designed to explore, and clarify, the tonal range of the well-tempered keys).
Soon, travelling along the path of the home-key alone became insufficiently challenging. Modulations between keys were then allowed, within the body of the composition. At first, only a small number of modulations (perhaps only one, followed by its "cancellation") were tolerated, between strictly limited pairs of harmonically-related keys. Over the years, the modulations became more daring, and more frequent -- until in the late nineteenth century there might be many modulations within a single bar, not one of which would have appeared in early tonal music. The range of harmonic relations implicit in the system of tonality gradually became apparent. Harmonies that would have been unacceptable to the early musicians, who focussed on the most central or obvious dimensions of the conceptual space, became commonplace.
Moreover, the notion of the home-key was undermined. With so many, and so daring, modulations within the piece, a "home-key" could be identified not from the body of the piece, but only from its beginning and end. Inevitably, someone (it happened to be Schoenberg) eventually suggested that the convention of the home-key be dropped altogether, since it no longer constrained the composition as a whole. (Significantly, Schoenberg suggested new musical constraints: using every note in the chromatic scale, for instance.)
However, exploring a conceptual space is one thing: transforming it is another. What is it to transform such a space?
One example has just been mentioned: Schoenberg's dropping the home-key constraint to create the space of atonal music. Dropping a constraint is a general heuristic, or method, for transforming conceptual spaces. The deeper the generative role of the constraint in the system concerned, the greater the transformation of the space. Non-Euclidean geometry, for instance, resulted from dropping Euclid's fifth axiom.
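The drop-a-constraint heuristic can be sketched abstractly: treat a conceptual space as the set of structures satisfying certain constraints, and observe how removing one constraint enlarges the space while keeping every old structure. (A minimal sketch; the "musical" constraints here are invented toy analogues, not a model of tonality.)

```python
from itertools import product

# A toy "space": three-note phrases over a five-note scale.
scale = range(5)

def space(constraints):
    """All phrases satisfying every constraint in the list."""
    return [p for p in product(scale, repeat=3)
            if all(c(p) for c in constraints)]

starts_home = lambda p: p[0] == 0    # toy analogue of "start in the home-key"
ends_home   = lambda p: p[-1] == 0   # toy analogue of "finish in the home-key"

tonal = space([starts_home, ends_home])
freer = space([starts_home])         # the finishing constraint dropped

print(len(tonal), len(freer))        # 5 25
assert set(tonal) < set(freer)       # every old phrase survives; new ones appear
```

The deeper the dropped constraint's generative role, the larger (and stranger) the resulting enlargement, which is the sense in which dropping Euclid's fifth axiom was a more radical transformation than a local tweak.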
Another very general way of transforming conceptual spaces is to "consider the negative": that is, to negate a constraint. One well-known instance concerns Kekule's discovery of the benzene-ring. He described it like this:
"I turned my chair to the fire and dozed. Again the atoms were gambolling before my eyes.... [My mental eye] could distinguish larger structures, of manifold conformation; long rows, sometimes more closely fitted together; all twining and twisting in snakelike motion. But look! What was that? One of the snakes had seized hold of its own tail, and the form whirled mockingly before my eyes. As if by a flash of lightning I awoke."
This vision was the origin of his hunch that the benzene-molecule might be a ring, a hunch that turned out to be correct. Prior to this experience, Kekule had assumed that all organic molecules are based on strings of carbon atoms. But for benzene, the valencies of the constituent atoms did not fit.
We can understand how it was possible for him to pass from strings to rings, as plausible chemical structures, if we assume three things (for each of which there is independent psychological evidence). First, that snakes and molecules were already associated in his thinking. Second, that the topological distinction between open and closed curves was present in his mind. And third, that the "consider the negative" heuristic was present also. Taken together, these three factors could transform "string" into "ring".
A string-molecule is an open curve: one having at least one end-point (with a neighbour on only one side). If one considers the negative of an open curve, one gets a closed curve. Moreover, a snake biting its tail is a closed curve which one had expected to be an open one. For that reason, it is surprising, even arresting ("But look! What was that?"). Kekule might have had a similar reaction if he had been out on a country walk and happened to see a snake with its tail in its mouth. But there is no reason to think that he would have been stopped in his tracks by seeing a Victorian child's hoop. A hoop is a hoop, is a hoop: no topological surprises there. (No topological surprises in a snaky sine-wave, either: so two intertwined snakes would not have interested Kekule, though they might have stopped Francis Crick dead in his tracks, a century later.)
Finally, the change from open curves to closed ones is a topological change, which by definition will alter neighbour-relations. And Kekule was an expert chemist, who knew very well that the behaviour of a molecule depends partly on how the constituent atoms are juxtaposed. A change in atomic neighbour-relations is very likely to have some chemical significance. So it is understandable that he had a hunch that this tail-biting snake-molecule might contain the answer to his problem.
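The topological point can be made precise: closing a six-carbon string into a ring changes the neighbour-relations at exactly the two ends, and nowhere else (a minimal sketch, representing each bond as an ordered pair of atom indices):

```python
def neighbours_string(n):
    """Bond pairs in an open chain of n atoms: 0-1, 1-2, ..., (n-2)-(n-1)."""
    return {(i, i + 1) for i in range(n - 1)}

def neighbours_ring(n):
    """Bond pairs in a closed ring of n atoms: the chain plus the closing bond."""
    return {(i, (i + 1) % n) for i in range(n)}

chain = neighbours_string(6)
ring = neighbours_ring(6)

# "Considering the negative" of the open curve adds exactly one bond -- and
# with it a chemically significant change: atoms 0 and 5 gain a neighbour.
print(ring - chain)   # {(5, 0)}
```

A hoop, by contrast, was a closed curve all along: no such change of neighbour-relations, hence no topological surprise.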
Plausible though this talk of conceptual spaces may be, it is -- thus far -- largely metaphorical. I have claimed that in calling an idea creative one should specify the particular set of generative principles with respect to which it is impossible. But I have not said how the (largely tacit) knowledge of literary critics, musicologists, and historians of art and science might be explicitly expressed within a psychological theory of creativity. Nor have I said how we can be sure that the mental processes specified by the psychologist really are powerful enough to generate such-and-such ideas from such-and-such structures.
This is where computational psychology can help us. I noted above, for example, that representational redescription develops explicit mental representations of knowledge already possessed implicitly. In computational terms, one could -- and Karmiloff-Smith does -- put this by saying that knowledge embedded in procedures becomes available, after redescription, as part of the system's data-structures. Terms like procedures and data-structures are well understood, and help us to think clearly about the mapping and negotiation of conceptual spaces. In general, whatever computational psychology enables us to say, it enables us to say relatively clearly.
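The procedure/data-structure contrast can be shown in miniature (an illustrative sketch in ordinary programming terms, not Karmiloff-Smith's own formalism): the same knowledge, first embedded in a fixed procedure, then redescribed as data that other processes can inspect and vary.

```python
# Implicit: the knowledge of what a man looks like is locked inside the
# procedure's control flow; nothing else in the system can examine or vary it.
def draw_man_implicit():
    return ["head", "body", "arm", "arm", "leg", "leg"]

# Explicit: the same knowledge redescribed as a data-structure...
MAN_PLAN = ["head", "body", "arm", "arm", "leg", "leg"]

def draw(plan):
    return list(plan)

# ...which the system itself can now read and modify, e.g. to plan a "funny"
# one-armed man that the implicit procedure could never produce.
one_armed = [part for part in MAN_PLAN if part != "arm"] + ["arm"]
print(draw(one_armed))
```

The redescribed plan is what makes the exploratory strategies of Chapter 4 applicable at all: they operate on data, not on opaque procedures.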
Moreover, computational questions can be supplemented by computational models. A functioning computer program, in effect, enables the system to use its maps not just to contemplate the relevant conceptual territory, but to explore it actively. So as well as saying what a conceptual space is like (by mapping it), we can get some clear ideas about how it is possible to move around within it. In addition, those (currently, few) AI-models of creativity which contain reflexive descriptions of their own procedures, and ways of varying them, can transform their own conceptual spaces, as well as exploring them.
The following chapters, therefore, employ a computational approach in discussing the account of creativity introduced in Chapters 1-4.
Chapter 5: Concepts of Computation
Computational concepts drawn from "classical" (as well as connectionist) AI can help us to think about the nature, form, and negotiation of conceptual spaces. Examples of such concepts, most of which were inspired by pre-existing psychological notions in the first place, include the following: generative system, heuristic (both introduced in previous chapters), effective procedure, search-space, search-tree, knowledge representation, semantic net, scripts, frames, what-ifs, and analogical representation.
Each of these concepts is briefly explained in Chapter 5, for people who (unlike BBS-readers) may know nothing about AI or computational psychology. And they are related to a wide range of everyday and historical examples -- some of which will be mentioned again in later chapters. My main aim, here, is to encourage the reader to use these concepts in considering specific cases of human thought. A secondary aim is to blur the received distinction between "the two cultures". The differences between creativity in art and science lie less in how new ideas are generated than in how they are evaluated, once they have arisen. The uses of computational concepts in this chapter are informal, even largely metaphorical. But in bringing a computational vocabulary to bear on a variety of examples, the scene is set for more detailed consideration (in Chapters 6-8) of some computer models of creativity.
In Chapter 5, I refer very briefly to a few AI-programs (such as chess-machines and Schankian question-answering programs). Only two are discussed at any length: Longuet-Higgins' (1987) work on the perception of tonal harmony, and Gelernter's (1963) geometry (theorem-proving) machine.
Longuet-Higgins' work is not intended as a model of musical creativity. Rather, it provides (in my terminology) a map of a certain sort of musical space: the system of tonal harmony introduced in Chapter 4. In addition, it suggests some ways of negotiating that space, for it identifies musical heuristics that enable the listener to appreciate the structure of the composition. Just as speech perception is not the same as speech production, so appreciating music is different from composing it. Nevertheless, some of the musical constraints that face composers working in this particular genre have been identified in this work.
I also mention Longuet-Higgins' recent work on musical expressiveness, but do no
t describe it
here. In (Boden, in press), I say a little more about it. Without expression, mu
sic sounds
"dead", even absurd. In playing the notes in a piano-score, for instance, pianis
ts add such
features as legato, staccato, piano, forte, sforzando, crescendo, diminuendo, ra
llentando,
accelerando, ritenuto, and rubato. But how? Can we express this musical sensibil
ity precisely?
That is, can we specify the relevant conceptual space?
Longuet-Higgins (in preparation), using a computational method, has tried to spe
cify the
musical skills involved in playing expressively. Working with two of Chopin's
piano-compositions, he has discovered some counterintuitive facts. For example,
a crescendo is
not uniform, but exponential (a uniform crescendo does not sound like a crescend
o at all, but
like someone turning-up the volume-knob on a wireless); similarly, a rallentando
must be
exponentially graded (in relation to the number of bars in the relevant section)
if it is to
sound "right". Where sforzandi are concerned, the mind is highly sensitive: as l
ittle as a
centisecond makes a difference between acceptable and clumsy performance.
This work is not a study of creativity. It does not model the exploration of a c
onceptual
space, never mind its transformation. But it is relevant because creativity can
be ascribed to
an idea (including a musical performance) only by reference to a particular conc
eptual space.
The more clearly we can map this space, the more confidently we can identify and
ask questions
about the creativity involved in negotiating it. A pianist whose playing-style s
ounds
"original", or even "idiosyncratic", is exploring and transforming the space of
expressive
skills which Longuet-Higgins has studied.
Gelernter's program, likewise, was not focussed on creativity as such. (It was n
ot even
intended as a model of human psychology.) Rather, it was an early exercise in au
tomatic
problem-solving, in the domain of Euclidean geometry. However, it is well known
that the
program was capable of generating a highly elegant proof (that the base-angles o
f an isosceles
triangle are equal), whose H-creator was the fourth-century mathematician Pappus
.
Or rather, it is widely believed that Gelernter's program could do this. The amb
iguity, not to
say the mistake, arises because the program's proof is indeed the same as Pappus
' proof, when
both are written down on paper in the style of a geometry text-book. But the (cr
eative) mental
processes by which Pappus did this, and by which the modern geometer is able to
appreciate the
proof, were very different from those in Gelernter's program -- which were not c
reative at
all.
Consider (or draw) an isosceles triangle ABC, with A at the apex. You are requir
ed to prove
that the base-angles are equal. The usual method of proving this, which the prog
ram was
expected to employ, is to construct a line bisecting angle BAC, running from A t
o D (a point
on the baseline, BC). Then, the proof goes as follows:
Consider triangles ABD and ACD.
AB = AC (given)
AD = DA (common)
Angle BAD = angle DAC (by construction)
Therefore the two triangles are congruent (two sides and included angle equal)
Therefore angle ABD = angle ACD.
Q.E.D.
By contrast, the Gelernter proof involved no construction, and went as follows:
Consider triangles ABC and ACB.
Angle BAC = angle CAB (common)
AB = AC (given)
AC = AB (given)
Therefore the two triangles are congruent (two sides and included angle equal)
Therefore angle ABC = angle ACB.
Q.E.D.
And, written down on paper, this is the outward form of Pappus' proof, too.
The point, here, is that Pappus' own notes (as well as the reader's geometrical
intuitions)
show that in order to produce or understand this proof, a human being considers
one and the
same triangle rotated (as Pappus put it, lifted up and replaced in the trace lef
t behind by
itself). There were thus two creative aspects of this proof. First, when "congru
ence" is in
question, the geometer normally thinks of two entirely separate triangles (or, s
ometimes, two
distinct triangles having one side in common). Second, Euclidean geometry deals
only with
points, lines, and planes -- so one would expect any proof to be restricted to t
wo spatial
dimensions. But Pappus (and you, when you thought about this proof) imagined lif
ting and
rotating the triangle in the third dimension. He was, if you like, cheating. How
ever, to
transform a rule (an aspect of some conceptual space) is to change it: in effect
, to cheat. In
that sense, transformational creativity always involves cheating.
Gelernter's geometry-program did not cheat -- not merely because it was too rigi
d to cheat in
any way, but also because it could not have cheated in this way. It knew nothing
of the third
dimension. Indeed, it had no visual, analogical, representation of triangles at
all. It
represented a triangle not as a two-dimensional spatial form, but as a list of t
hree letters
(e.g. ABC) naming points in an abstract coordinate space. Similarly, it represen
ted an angle
as a list of three letters naming the vertex and one of the points on each of th
e two rays.
Being unable to inspect triangles visually, it even had to prove that every diff
erent
letter-name for what we can see to be the same angle was equivalent. So it had t
o prove (for
instance) that angle XYZ is the same as angle ZYX, and angle BAC the same as ang
le CAB.
Consequently, this program was incapable not only of coming up with Pappus' proo
f in the way
he did, but even of representing such a proof -- or of appreciating its elegance
and
originality. Its mental maps simply did not allow for the lifting and replacemen
t of triangles
in space (and it had no heuristics enabling it to transform those maps).
How did it come up with its pseudo-Pappus proof, then? Treating the "ABC's" as (
spatially
uninterpreted) abstract vectors, it did a massive brute-search to find the proof
. Since this
brute search succeeded, it did not bother to construct any extra lines.
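The representational point can be made concrete with a toy reconstruction (mine, in Python; not Gelernter's actual program). Triangles and angles are mere letter-lists with no two-dimensional image behind them, the equivalence of angle XYZ and angle ZYX must be checked explicitly, and the pseudo-Pappus step is just a symbolic match of triangle ABC against triangle ACB:

```python
# Toy reconstruction (not Gelernter's code) of the representational point:
# triangles and angles are letter-lists, with no 2-D image behind them.
def angle_equal(a1, a2):
    # Angle XYZ equals angle ZYX: same vertex, same unordered pair of rays.
    # With no visual representation, this must be established explicitly.
    return a1[1] == a2[1] and {a1[0], a1[2]} == {a2[0], a2[2]}

def sas_congruent(t1, t2, equal_sides):
    # Two triangles (as vertex-lists) are congruent by side-angle-side if
    # two corresponding side-pairs are among the given equalities and the
    # included angles (at the first-listed vertex) are equal.
    sides_ok = (((t1[0], t1[1]), (t2[0], t2[1])) in equal_sides and
                ((t1[0], t1[2]), (t2[0], t2[2])) in equal_sides)
    angle_ok = angle_equal((t1[1], t1[0], t1[2]), (t2[1], t2[0], t2[2]))
    return sides_ok and angle_ok

# The pseudo-Pappus step: match triangle ABC against triangle ACB,
# given only AB = AC (and hence AC = AB).
given = {(("A", "B"), ("A", "C")), (("A", "C"), ("A", "B"))}
result = sas_congruent(("A", "B", "C"), ("A", "C", "B"), given)
```

Nothing in this matching involves lifting or rotating anything: the "two" triangles are just two orderings of the same letter-list.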
This example shows how careful one must be in ascribing creativity to a person,
and in
answering the second Lovelace question about a program. We have to consider not
only the
resulting idea, but also the mental processes which gave rise to it. Brute force
search is
even less creative than associative (improbabilist) thinking, and problem-dimens
ions which can
be mapped by some systems may not be representable by others. (Analogously, a
three-year-old is not showing flexible imagination in drawing a funny man: rather,
she is showing incompetence in drawing an ordinary man.)

It should not be assumed from the example of Pappus (or Kekule) that visual imag
ery is always
useful in mapping and transforming one's ideas. An example is given of a problem
for which a
visual representation is almost always constructed, but which hinders solution.
Where mental
maps are concerned, visual maps are not always best.
Chapter 6: Creative Connections
This chapter deals with associative creativity: the spontaneous generation of ne
w ideas,
and/or novel combinations of familiar ideas, by means of unconscious processes o
f association.
Examples include not only "mere associations" but also analogies, which may then
be
consciously developed for purposes of rhetorical exposition or problem-solving.
In Chapter 6,
I discuss the initial association of ideas. (The evaluation and use of analogy a
re addressed
in Chapter 7.)
One of the richest veins of associative creativity is poetic imagery. I consider
some specific
examples taken from Coleridge's poem The Ancient Mariner. For this poem (and als
o for his
Kubla Khan), we have unusually detailed information about the literary sources o
f the imagery
concerned. The literary scholar John Livingston Lowes (1951) studied Coleridge's
Notebooks
written while preparing for and writing the poem, and followed up every source m
entioned there
-- and every footnote given in each source. Despite the enormous quantity and ra
nge of
Coleridge's reading, Lowes makes a subtle, and intuitively compelling, case in i
dentifying
specific sources for the many images in the poem.
However, an intuitively compelling case is one thing, and an explicit justificat
ion or
detailed explanation is another. Lowes took for granted that association can hap
pen (he used
Coleridge's term: the hooks and eyes of memory), without being able to say just
how these
hooks and eyes can come together. I argue that connectionism, and specifically P
DP (parallel
distributed processing), can help us to understand how such unexpected associati
ons are
possible.
Among the relevant questions to which PDP-models offer preliminary answers are t
he following:
How can ideas from very different sources (such as Captain Cook's diaries and Pr
iestley's
writings on optics) be spontaneously thought of together? How can two ideas be m
erged to
produce a new structure, which shows the influence of both ancestor-ideas withou
t being a mere
"cut-and-paste" combination? How can the mind be "primed" (for instance, by the
decision to
write a poem about a seaman), so that one will more easily notice serendipitous
ideas? Why may
someone notice -- and remember -- something fairly uninteresting (such as a word
in a literary
text), if it occurs in an interesting context? How can a brief phrase conjure up
from memory
an entire line or stanza, from this or some other poem? And how can we accept tw
o ideas as
similar (the words "love" and "prove" as rhyming, for instance) in respect of a
feature not
identical in both?
The features of connectionist models which suggest answers to these questions ar
e their powers
of pattern-completion, graceful degradation, sensitization, multiple constraint-
satisfaction,
and "best-fit" equilibration. The computational processes underlying these featu
res are
described informally in Chapter 6 (I assume that it is not necessary to do so fo
r
BBS-readers).
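For concreteness, the pattern-completion feature can be illustrated with a minimal Hopfield-style network (a generic PDP sketch of my own, not a model of the Coleridge material; the patterns are arbitrary). The stored patterns live in the weights, and the net settles from a corrupted cue to the nearest stored pattern:

```python
# A minimal Hopfield-style sketch of pattern-completion: patterns are
# stored in the weights, and the net settles from a corrupted cue to
# the nearest stored pattern ("best-fit" equilibration).
def train(patterns):
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j]   # Hebbian co-activation
    return w

def settle(w, state, sweeps=5):
    state = list(state)
    for _ in range(sweeps):
        for i in range(len(state)):
            total = sum(w[i][j] * s for j, s in enumerate(state))
            state[i] = 1 if total >= 0 else -1
    return state

# Two stored "memories", coded as +/-1 vectors.
memories = [[1, 1, 1, -1, -1, -1], [-1, -1, 1, 1, 1, -1]]
w = train(memories)
cue = [1, 1, 1, -1, 1, -1]       # the first memory with one unit flipped
completed = settle(w, cue)        # settles back to the first memory
```

Graceful degradation and priming correspond, on this picture, to the same settling process started from partial or biased initial states.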
The message of this chapter is that the unconscious, "insightful", associative a
spects of
creativity can be explained -- in outline, at least -- in computational terms. C
onnectionism
offers some specific suggestions about what sorts of processes may underlie the
hooks and eyes
of memory.
This is not to say, however, that all aspects of poetry -- or even all poetic im
agery -- can
be explained in this way. Quite apart from the hierarchical structure of natural
language
itself, some features of a poem may require thinking of a type more suited (at p
resent) to
symbolic models. For example, Coleridge's use of "The Sun came up upon the left"
and "The Sun
now rose upon the right" as the opening-lines of two closely-situated stanzas en
abled him to
indicate to the reader that the ship was circumnavigating the globe, without hav
ing to detail
all the uneventful miles of the voyage. (Compare Kubrick's use of the spinning
thigh-bone turning into a space-ship, as a highly compressed history of technology,
in his film 2001: A Space Odyssey.) But these expressions, too, were drawn from his
reading -- in this case, of
the diaries of the very early mariners, who recorded their amazement at first ex
periencing the
sunrise in the "wrong" part of the sky. Associative memory was thus involved in
this poetic
conceit, but it is not the entire explanation.
Chapter 7: Unromantic Artists
This chapter and the next describe and criticize some existing computer models o
f creativity.
The separation into "artists" (Chapter 7) and "scientists" (Chapter 8) is to som
e extent an
arbitrary rhetorical device. For example, analogy (discussed in Chapter 7) and i
nduction and
genetic algorithms (both outlined in Chapter 8) are all relevant to creativity i
n arts and
sciences alike. In these two chapters, the second and third Lovelace-questions -
- about
apparent computer-creativity -- are addressed at length. However, the first Love
lace question,
relating to human creativity, is still the over-riding concern.
The computer models of creativity discussed in Chapter 7 include: a series of pr
ograms which
produce line-drawings (McCorduck, 1991); a jazz-improviser (Johnson-Laird, 1991)
; a
haiku-writer (Masterman & McKinnon Wood, 1968); two programs for writing stories
(Klein et
al., 1973; Meehan, 1981); and two analogy-programs (Chalmers, French, & Hofstadt
er, 1991;
Holyoak & Thagard, 1989a, 1989b; Mitchell, 1993). In each case, the programmer h
as to try to
define the dimensions of the relevant conceptual space, and to specify ways of e
xploring the
space, so as to generate novel structures within it. Some evaluation, too, must
be allowed
for. In the systems described in this chapter, the evaluation is built into the
generative
procedures, rather than being done post hoc. (This is not entirely unrealistic:
although
humans can evaluate -- and modify -- their own ideas once they have produced the
m, they can
also develop domain-expertise such that most of their ideas are acceptable witho
ut
modification.)
Sometimes, the results are comparable with non-trivial human achievements. Thus
some of the
computer's line-drawings are spontaneously admired, by people who are amazed whe
n told their
provenance. The haiku-program can produce acceptable poems, sometimes indistingu
ishable from
human-generated examples (however, this is due to the fact that the minimalist h
aiku-style
demands considerable projective interpretation by the reader). And the jazz-prog
ram can play
-- composing its own chord-sequences, as well as improvising on them -- at about
the level of
a moderately competent human beginner. (Another jazz-improviser, not mentioned i
n the book,
plays at the level of a mediocre professional musician; unlike the former exampl
e, it starts
out with significant musical structure provided to it "for free" by the human us
er (Hodgson,
1990).)
At other times, the results are clumsy and unconvincing, involving infelicities
and
absurdities of various kinds. This often happens when stories are computer-gener
ated. Here,
many rich conceptual spaces have to be negotiated simultaneously. Quite apart fr
om the
challenge of natural language generation, the model must produce sensible plots,
taking
account both of the motivation and action of the characters and of their common-
sense
knowledge. Where very simple plot-spaces, and very limited world-knowledge, are
concerned, a
program may be able (sometimes) to generate plausible stories.
One, for example, produces Aesop-like tales, including a version of "The Fox and
the Crow"
(Meehan, 1981). A recent modification of this program (Turner, 1992), not covere
d in the book,
is more subtle. It uses case-based reasoning and case-transforming heuristics to
generate
novel stories based on familiar ones; and because it distinguishes the author's
goals from
those of the characters, it can solve meta-problems about the story as well as p
roblems posed
within it. But even this model's story-telling powers are strictly limited, comp
ared with
ours.
Models dealing with the interpretation of stories, and of concepts (such as betr
ayal) used in
stories, are also relevant here. Computational definitions of interpersonal them
es and scripts
(Abelson, 1973), programs that can answer questions about (simple) stories and m
odels which
can -- up to a point -- interpret motivational and emotional structures within a
story (Dyer,
1983) are all discussed.
So, too, is a program that generates English text describing games of noughts-an
d-crosses
(Davey, 1978). The complex syntax of the sentences is nicely appropriate to the
structure of
the particular game being described. Human writers, too, often use subtleties of
syntax to
convey certain aspects of their story-lines.
The analogy programs described in Chapter 7 are ACME and ARCS (Holyoak & Thagard
, 1989a,
1989b), and in the Preface to the paperback edition I add a discussion of Copyca
t (Chalmers et
al., 1991; Mitchell, 1993), which I had originally intended to highlight in the
main text.
ACME and ARCS are an analogy-interpreter and an analogy-finder, respectively. Ca
lling on a
semantic net of over 30,000 items, to which items can be added by the user, thes
e programs use
structural, semantic, and pragmatic criteria to evaluate analogies between conce
pts (whose
structure is pre-given by the programmers). Other analogy programs (e.g. Falkenh
ainer, Forbus,
& Gentner, 1989) use structural and semantic similarity as criteria. But ARCS/AC
ME takes
account also of the pragmatic context, the purpose for which the analogy is bein
g sought. So a
conceptual feature may be highlighted in one context, and downplayed in another.
The context
may be one of rhetoric or poetic imagery, or one of scientific problem-solving (
ARCS/ACME
forms part of an inductive program that compares the "explanatory coherence" of
rival
scientific theories (Thagard, 1992)). Examples of both types are discussed.
The point of interest about Copycat is that it is a model of analogy in which th
e structure of
the analogues is neither pre-assigned nor inflexible. The description of somethi
ng can change
as the system searches for an analogy to it, and its "perception" of an analogue
may be
permanently influenced by having seen it in a particular analogical relation to
something
else. Many analogies in the arts and sciences can be cited, to show that the sam
e is true of
the human mind.
Among the points of general interest raised in this chapter is the inability of
these programs
(Copycat excepted) to reflect on what they have done, or to change their way of
doing it.
For instance, the line-drawing program that draws human acrobats in broadly real
istic poses is
unable to draw one-armed acrobats. It can generate acrobats with only one arm vi
sible, if one
arm is occluded by another acrobat in front. But that there might be a one-armed
(or a
six-armed) acrobat is strictly inconceivable. The reason is that the program's k
nowledge of
human anatomy does not represent the fact that humans have two arms in a form wh
ich is
separable from its drawing-procedures or modifiable by "imaginative" heuristics.
It does not,
for instance, contain anything of the form "Number of arms: 2", which might then
be
transformed by a "vary the variable" heuristic into "Number of arms: 1". Much as
the
four-year-old child cannot draw a "funny" one-armed man because she has not yet
developed the
necessary RR of her own man-drawing skill, so this program cannot vary what it d
oes because --
in a clear sense -- it does not know what it is that it is doing.
This failing is not shared by all current programs: some featured in the next ch
apter can
evaluate their own ideas, and transform their own procedures, to some extent. Mo
reover, this
failure is "bad news" only to those seeking a positive answer to the second and
third Lovelace
questions. It is useful to anyone asking the first Lovelace question, for it und
erlines the
importance of the factors introduced in Chapter 4: reflexive mapping of thought,
evaluation of
ideas, and transformation of conceptual spaces.
Chapter 8: Computer-Scientists
Like analogy, inductive thinking occurs across both arts and science. Chapter 8
begins with a
discussion of the ID3 algorithm. This is used in many learning programs, includi
ng a
world-beater -- better than the human expert who "taught" it -- at diagnosing so
ybean diseases
(Michalski & Chilausky, 1980).
ID3 learns from examples. It looks for the logical regularities which underlie t
he
classification of the input examples, and uses them to classify new, unexamined,
examples.
Sometimes, it finds regularities of which the human experts were unaware, such a
s unknown
strategies for chess endgames (Michie & Johnston, 1984). In short, ID3 can not o
nly define
familiar concepts in H-creative ways, but can also define H-creative concepts.
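The core of ID3, in its standard textbook formulation (the soybean data are not reproduced here; the toy examples below are invented), can be sketched in a few lines: choose the attribute whose test most reduces the entropy of the class labels.

```python
# Sketch of ID3's attribute choice (standard formulation, invented data):
# pick the attribute with the greatest information gain.
from math import log2
from collections import Counter

def entropy(labels):
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * log2(c / total) for c in counts.values())

def info_gain(examples, attr):
    # examples: list of (feature-dict, label) pairs.
    base = entropy([label for _, label in examples])
    remainder = 0.0
    for v in {f[attr] for f, _ in examples}:
        subset = [label for f, label in examples if f[attr] == v]
        remainder += len(subset) / len(examples) * entropy(subset)
    return base - remainder

# Toy data: "spots" perfectly predicts the label, "tall" is noise.
data = [({"spots": "yes", "tall": "yes"}, "diseased"),
        ({"spots": "yes", "tall": "no"}, "diseased"),
        ({"spots": "no", "tall": "yes"}, "healthy"),
        ({"spots": "no", "tall": "no"}, "healthy")]
best = max(["spots", "tall"], key=lambda a: info_gain(data, a))
```

Applied recursively to the resulting subsets, this procedure builds the decision tree; the regularities it finds are whatever logical combinations of the supplied attributes best separate the classes.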
However, all the domain-properties it considers have to be specifically mentione
d in the
input. (It does not have to be told just which input properties are relevant: in
the chess
end-game example, the chess-masters "instructing" the program did not know this.
) That is,
ID3-programs can restructure their conceptual space in P-creative -- and even H-
creative --
ways. But they cannot change the dimensions of the space, so as to alter its fun
damental
nature.
Another program capable of H-discovery is meta-DENDRAL, an early expert system d
evoted to the
spectroscopic analysis of a certain group of organic molecules. The original pro
gram, DENDRAL,
uses exhaustive search to describe all possible molecules made up of a given set
of atoms, and
heuristics to suggest which of these might be chemically interesting. DENDRAL us
es only the
chemical rules supplied to it, but meta-DENDRAL can find new rules about how the
se compounds
decompose. It does this by identifying unfamiliar patterns in the spectrographs
of familiar
compounds, and suggesting plausible explanations for them. For instance, if it d
iscovers a
smaller structure located near the point at which a molecule breaks, it may sugg
est that other
molecules containing that sub-structure may break at these points too.
This program is H-creative, up to a point. It not only explores its conceptual s
pace (using
evaluative heuristics and exhaustive search) but enlarges it too, by adding new
rules. It
generates hunches, which have led to the synthesis of novel, chemically interest
ing,
compounds. And it has discovered some previously unsuspected rules for analysing
several
families of organic molecules. However it relies on sophisticated theories built
into it by
expert chemists (which is why its novel hypotheses, though sometimes false, are
always
plausible). It casts no light on how those theories might have arisen in the fir
st place.
Some computational models of induction were developed with an eye to the history
of science
(and to psychology), rather than for practical scientific puzzle-solving. Their
aim was not to
come up with H-creative ideas, but to P-create in the same way as human H-creato
rs. Examples
include BACON, GLAUBER, STAHL, and DALTON (Langley, Simon, Bradshaw, & Zytkow, 1
987), whose
P-creative activities are modelled on H-creative episodes recorded in the notebo
oks of human
scientists.
BACON induces quantitative laws from empirical data. Its data are measurements o
f various
properties at different times. It looks for simple mathematical functions defini
ng invariant
relations between numerical data-sets. For instance, it seeks direct or inverse
proportionalities between measurements, or between their products or ratios. It
can define
higher-level theoretical terms, construct new units of measurement, and use math
ematical
symmetry to help find invariant patterns in the data. It can cope with noisy dat
a, finding a
best-fit function (within predefined limits). BACON has P-created many physical
laws,
including Archimedes' principle, Kepler's third law, Boyle's law, Ohm's law, and
Black's law.
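BACON's basic move can be suggested by a sketch (my illustration, not Langley et al.'s code): search among products and ratios of powers of two measured quantities for a combination that stays constant across the observations. With planetary distances and periods as data, Kepler's third law emerges as the invariant d^3/p^2:

```python
# BACON-flavoured sketch (illustrative only): look for an invariant among
# powers d**a * p**b of two measured quantities d and p.
def nearly_constant(values, tol=0.01):
    lo, hi = min(values), max(values)
    return (hi - lo) / hi < tol

# Distances (AU) and periods (years) for Venus, Earth, Mars:
d = [0.723, 1.0, 1.524]
p = [0.615, 1.0, 1.881]

found = []
for a in range(1, 4):           # exponent on d
    for b in range(-3, 0):      # negative exponent on p, i.e. ratios
        term = [di ** a * pi ** b for di, pi in zip(d, p)]
        if nearly_constant(term):
            found.append((a, b))
# Kepler's third law, d**3 / p**2 = constant, appears as (3, -2).
```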
GLAUBER discovers qualitative laws, summarizing the data by classifying things a
ccording to
(non-measurable) observable properties. Thus it discovers relations between acid
s, alkalis,
and bases (all identified in qualitative terms). STAHL analyses chemical compoun
ds into their
elements. Relying on the data-categories presented to it, it has modelled aspect
s of the
historical progression from phlogiston-theory to oxygen-theory. DALTON reasons a
bout atoms and
molecular structure. Using early atomic theory, it generates plausible molecular
structures
for a given set of components (it could be extended to cover other componential
theories, such
as particle physics or Mendelian genetics).
These four programs have rediscovered many scientific laws. However, their P-cre
ativity is
shallow. They are highly data-driven, their discoveries lying close to the evide
nce. They
cannot identify relevance for themselves, but are "primed" with appropriate expe
ctations.
(BACON expects to find linear relationships, and rediscovered Archimedes' princi
ple only after
being told that things can be immersed in known volumes of liquid and the result
ing volume
measured.) They cannot model spontaneous associations or analogies, only deliber
ate reasoning.
Some can suggest experiments, to test hypotheses they have P-created, but they h
ave no sense
of the practices involved. They can learn, constructing P-novel concepts used to
make further
P-discoveries. But their discoveries are exploratory rather than transformationa
l: they cannot
fundamentally alter their own conceptual spaces.
Some AI-models of creativity can do this, to some extent. For instance, the Auto
matic
Mathematician (AM) explores and transforms mathematical ideas (Lenat, 1983). It
does not prove
theorems, or do sums, but generates "interesting" mathematical ideas (including
expressions
that might be provable theorems). It starts with 100 primitive concepts of set-t
heory (such as
set, list, equality, and ordered pair), and 300 heuristics that can examine, com
bine,
transform, and evaluate its concepts. One generates the inverse of a function (c
ompare
"consider the negative"). Others can compare, generalize, specialize, or find ex
amples of
concepts. Newly-constructed concepts are fed back into the pool.
In effect, AM has hunches: its evaluation heuristics suggest which new structure
s it should
concentrate on. For example, AM finds it interesting whenever the union of two s
ets has a
simply expressible property which is not possessed by either of them (a set-theo
retic version
of the notion that emergent properties are interesting). Its value-judgments are
often wrong.
Nevertheless, it has constructed some powerful mathematical notions, including p
rime numbers,
Goldbach's conjecture, and an H-novel theorem concerning maximally-divisible num
bers (which
the programmer had never heard of). In short, AM appears to be significantly P-c
reative, and
slightly H-creative too.
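The union-heuristic just mentioned can be given a toy rendering (my construction, with invented predicates; not Lenat's code). Concepts are sets of numbers, "simply expressible properties" are a few hard-coded predicates, and a union is flagged as interesting when it has a property neither parent has:

```python
# Toy rendering of one AM heuristic: a union of two concepts is worth
# attention if it has a simply expressible property that neither parent has.
def properties(s):
    props = set()
    if s and all(n % 2 == 0 for n in s):
        props.add("all-even")
    if s and all(n % 2 == 1 for n in s):
        props.add("all-odd")
    if s and s == set(range(min(s), max(s) + 1)):
        props.add("interval")           # a consecutive run of integers
    return props

def emergent_properties(a, b):
    # Properties of the union not possessed by either parent concept.
    return properties(a | b) - (properties(a) | properties(b))

evens, odds = {2, 4}, {3, 5}
emergent = emergent_properties(evens, odds)   # neither parent is an interval
```

In AM itself, of course, the concepts and properties are themselves generated and revised by the heuristics, rather than being fixed in advance as here.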
However, AM has been criticised (Haase, 1986; Lenat & Seely-Brown, 1984; Ritchie
& Hanna,
1984; Rowe & Partridge, 1993). Critics have argued that some heuristics were included to make
certain discoveries, such as prime numbers, possible; that the use of LISP provi
ded AM with
mathematical relevance "for free", since any syntactic change in a LISP expressi
on is likely
to result in a mathematically-meaningful string; that the program's exploration
was too often
guided by the human user; and that AM had fixed criteria of interest, being unab
le to adapt
its values. The precise extent of AM's creativity, then, is unclear.
Because EURISKO has heuristics for changing heuristics, it can transform not onl
y its stock of
concepts but also its own processing-style. For example, one heuristic asks whet
her a rule has
ever led to any interesting result. If it has not (but has been used several tim
es), it will
be less often used in future. If it has occasionally been helpful, though usuall
y worthless,
it may be specialized in one of several different ways. (Because it is sometimes
useful and
sometimes not, the specializing-heuristic can be applied to itself.) Other heuri
stics
generalize rules, or create new rules by analogy with old ones. Using domain-spe
cific
heuristics to complement these general ones, EURISKO has generated H-novel ideas
in genetic
engineering and VLSI-design (one has been patented, so was not "obvious to a per
son skilled in
the art").
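The flavour of a heuristic that operates on heuristics can be sketched as follows (a schematic reconstruction of my own, not EURISKO's code; the rule names and thresholds are invented). Each rule carries a track record, and a meta-rule demotes rules that have been tried often without ever yielding an interesting result:

```python
# Schematic sketch of a heuristic about heuristics: rules carry a track
# record, and a meta-rule lowers the selection weight of barren rules.
class Rule:
    def __init__(self, name):
        self.name = name
        self.uses = 0
        self.successes = 0
        self.weight = 1.0     # probability-weight for future selection

def record(rule, interesting):
    rule.uses += 1
    if interesting:
        rule.successes += 1

def meta_adjust(rule, min_uses=5):
    # The meta-heuristic itself: demote rules used often with no payoff.
    if rule.uses >= min_uses and rule.successes == 0:
        rule.weight *= 0.5    # used less often in future
    return rule.weight

barren = Rule("specialize-at-random")
fruitful = Rule("consider-the-inverse")
for _ in range(6):
    record(barren, interesting=False)
    record(fruitful, interesting=True)
meta_adjust(barren)       # weight halved
meta_adjust(fruitful)     # weight unchanged
```

Since `meta_adjust` is itself a rule with a mixed track record, the same bookkeeping could in principle be applied to it, which is the self-applicability noted above.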
Other self-transforming systems described in this chapter are problem-solving pr
ograms based
on genetic algorithms (GAs). GA-systems have two main features. They all use rul
e-changing
algorithms (mutation and crossover) modelled on biological genetics. Mutation ma
kes a random
change in a single rule. Crossover mixes two rules, so that (for instance) the l
efthand
portion of one is combined with the righthand portion of the other; the break-po
ints may be
chosen randomly, or may reflect the system's sense of which rule-parts are the m
ost useful.
Most GA-systems also include algorithms for identifying the relatively successfu
l rules, and
rule-parts, and for increasing the probability that they will be selected for "b
reeding"
future generations. Together, these algorithms generate a new system, better ada
pted to the
task.
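The two rule-changing operators, together with fitness-proportional selection, can be sketched in a few lines (a generic GA illustration with an invented fitness function, not any program discussed in the book):

```python
# Minimal GA sketch: mutation, one-point crossover, and
# fitness-proportional selection over bit-string "rules".
import random

def mutate(rule, rate=0.1):
    # Flip each bit with a small probability.
    return [b ^ 1 if random.random() < rate else b for b in rule]

def crossover(r1, r2):
    # Combine the left part of one rule with the right part of the other.
    point = random.randrange(1, len(r1))
    return r1[:point] + r2[point:]

def select(population, fitness):
    # Pick a parent with probability proportional to its fitness.
    weights = [fitness(r) for r in population]
    return random.choices(population, weights=weights, k=1)[0]

random.seed(0)
fitness = lambda rule: 1 + sum(rule)     # toy criterion: prefer 1-bits
population = [[random.randint(0, 1) for _ in range(8)] for _ in range(10)]
for _ in range(20):                      # breed 20 generations
    population = [mutate(crossover(select(population, fitness),
                                   select(population, fitness)))
                  for _ in range(10)]
```

In an interactive system of the kind discussed below, the `fitness` function is simply replaced by a human choice at each generation.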
An example cited in the book is an early GA-program which developed a set of rul
es to regulate
the transmission of gas through a pipeline (Holland, Holyoak, Nisbett, & Thagard
, 1986). Its
data were hourly measurements of inflow, outflow, inlet-pressure, outlet-pressur
e, rate of
pressure-change, season, time, date, and temperature. It altered the inlet-press
ure to allow
for variations in demand, and inferred the existence of accidental leaks in the
pipeline
(adjusting the inflow accordingly).
Although the pipeline-program discovered the rules for itself, the potentially r
elevant
data-types were given in its original list of concepts. How far that compromises
its
creativity is a matter of judgment. No system can work from a tabula rasa. Likew
ise, the
selectional criteria were defined by the programmer, and do not alter. Humans ma
y be taught
evaluative criteria, too. But they can sometimes learn -- and adapt -- them for
themselves.
GAs, or randomizing thinking, are potentially relevant to art as well as to scie
nce --
especially if the evaluation is done interactively, not automatically. That is,
at each
generation the selection of items from which to breed for the next generation is
done by a
human being. This methodology is well-suited to art, where the evaluative criter
ia are not
only controversial but also imprecise -- or even unknown. Two recent examples (n
ot mentioned
in the book, but described in: Boden, in press) concern graphics (Sims, 1991; To
dd & Latham,
1993). Sims' aim is to provide an interactive environment for graphic artists, e
nabling them
to generate otherwise unimaginable images. Latham's is to produce his own art-wo
rks, but he
too uses the computer to generate images he could not have developed unaided.
In a run of Sims' GA-system, the first image is generated at random. Then the pr
ogram makes
various independent random mutations in the image-generating rule, and displays
the resulting
images. The human now chooses one image to be mutated, or two to be "mated", and
the process
is repeated. The program can transform its image-generating code (simple LISP-fu
nctions) in
many ways. It can alter parameters in pre-existing functions, combine or separat
e functions,
or nest one function inside another (so many-levelled hierarchies can arise).
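A drastically stripped-down sketch (my construction; Sims' system is far richer) shows the kind of operation involved: image-generating code is held as an expression tree over a coordinate x, and mutation may perturb a constant, recurse into sub-expressions, or nest the whole expression inside another function:

```python
# Simplified sketch of mutating image-generating code held as an
# expression tree. Expressions are nested tuples: ("x",) and
# ("const", n) are leaves; ("sin", e) and ("add", e1, e2) are functions.
import math
import random

def evaluate(expr, x):
    op = expr[0]
    if op == "x":
        return x
    if op == "const":
        return expr[1]
    if op == "sin":
        return math.sin(evaluate(expr[1], x))
    if op == "add":
        return evaluate(expr[1], x) + evaluate(expr[2], x)

def mutate(expr):
    if expr[0] == "const" and random.random() < 0.5:
        return ("const", expr[1] + random.uniform(-1, 1))  # parameter change
    if random.random() < 0.3:
        return ("sin", expr)          # nesting: a new hierarchical level
    if expr[0] not in ("x", "const"):
        return (expr[0],) + tuple(mutate(e) for e in expr[1:])
    return expr

random.seed(1)
parent = ("add", ("x",), ("const", 2.0))
child = mutate(parent)       # offspring may differ radically in structure
value = evaluate(child, 0.5) # but both parent and child denote some f(x)
```

The structural mutations (nesting, combining) are what make radical, "impossibilist"-looking offspring possible; a system restricted to the parameter-change branch alone would behave like Latham's, producing only obvious siblings.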
Many of Sims' computer-generated images are highly attractive, even beautiful. M
oreover, they
often cause a deep surprise. The change(s) between parent and offspring are some
times amazing.
The one appears to be a radical transformation of the other -- or even something
entirely
different. In short, we seem to have an example of impossibilist creativity.
Latham's interactive GA-program is much more predictable. Its mutation operators
can change
only the parameters within the image-generating code, not the body of the functi
on.
Consequently, it never comes up with radical novelties. All the offspring in a g
iven
generation are obviously siblings, and obviously related to their parents. So th
e results of
Latham's system are less exciting than Sims'. But it is arguably even more relev
ant to
artistic creativity.
The interesting comparison is not between the aesthetic appeal of a typical Lath
am-image and
Sims-image, but between the discipline -- or lack of it -- which guides the expl
oration and
transformation of the relevant visual space. Sims is not aiming for particular t
ypes of
result, so his images can be fundamentally transformed in random ways at every g
eneration. But
Latham (a professional artist) has a sense of what forms he hopes to achieve, an
d specific
aesthetic criteria for evaluating intermediate steps. Random changes at the marg
ins are
exploratory, and may provide some useful ideas. But fundamental transformations
-- especially,
random ones -- would be counterproductive. (If they were allowed, Latham would w
ant to pick
one and then explore its possibilities in a disciplined way.)
This fits the account of (impossibilist) creativity given in Chapters 3 and 4. C
reativity
works within constraints, which define the conceptual spaces with respect to whi
ch it is
identified. Maps or RRs (or LISP-functions) which describe the parameters and/or
the major
dimensions of the space can be altered in specific ways, to generate new, but re
lated, spaces.
Random changes are sometimes helpful, but only if they are integrated into the r
elevant style.
Art, like science, involves discipline. Only after a space has been fairly thoro
ughly explored
will the artist want to transform it in deeply surprising ways. A convincing com
puter-artist
would therefore need not only randomizing operators, but also heuristics for con
straining its
transformations and selections in an aesthetically acceptable fashion. In additi
on, it would
need to make its aesthetic selections (and perhaps guiding recommendations) for
itself. And,
to be true to human creativity, the evaluative rules should evolve also (Elton,
1993).
Chapter 9: Chance, Chaos, Randomness, Unpredictability
Unpredictability is often said to be the essence of creativity. And creativity i
s, by
definition, surprising. But unpredictability is not enough. At the heart of crea
tivity, as
previous chapters have shown, lie constraints: the very opposite of unpredictabi
lity.
Constraints and unpredictability, familiarity and surprise, are somehow combined
in original
thinking.
In this chapter, I distinguish various senses of "chance", "chaos", "randomness"
, and
"unpredictability". I also argue that a scientific explanation need not imply ei
ther
determinism or predictability, and that even deterministic systems may be unpred
ictable.
Below, it will suffice to mention a number of different ways in which unpredicta
bility can
enter into creativity.
The first follows from the fact that creative constraints do not determine every
thing about
the newly-generated idea. A style of thinking typically allows for many points a
t which two or
more alternatives are possible. Several notes may be both melodious and harmonio
us; many words
rhyme with moon; and perhaps there could be a ring-molecule with three, or five,
atoms in the
ring? At these points, some specific choice must be made. Likewise, many explora
tory and
transformational heuristics may be potentially available at a certain time, in d
ealing with a
given conceptual space. But one or other must be chosen. Even if several heurist
ics can be
applied at once (like parallel mutations in a GA-system), not all possibilities
can be
simultaneously explored. The choice has to be made, somehow.
Occasionally, the choice is random, or as near to random as one can get. So it may be made by throwing dice (as in playing Mozart's aleatory music); or by consulting a table of random
numbers (as in the jazz-program); or even, possibly, as a result of some sudden
quantum-jump
inside the brain. There may even be psychological processes akin to GA-mechanism
s, producing
novel ideas in human minds.
More often, the choice is fully determined, by something which bears no systemat
ic relation to
the conceptual space concerned. (Some examples are given below.) Relative to tha
t style of
thinking, the choice is made randomly. Certainly, nothing within the style itsel
f could enable
us to predict its occurrence.
In either case, the choice must somehow be skilfully integrated into the relevan
t mental
structure. Without such disciplined integration, it cannot lead to a positively
valued,
interesting, idea. With the help of this mental discipline, even flaws and accid
ents may be
put to creative use. For instance, a jazz-drummer suffering from Tourette's synd
rome is
subject to sudden, uncontrollable, muscular tics, even when he is drumming. As a
result, his
drumsticks sometimes make unexpected sounds. But his musical skill is so great t
hat he can
work these supererogatory sounds into his music as he goes along. At worst, he "
covers up" for
them. At best, he makes them the seeds of unusual improvisations which he could
not otherwise
have thought of.
One might even call the drummer's tics serendipitous. Serendipity is the unexpec
ted finding of
something one was not specifically looking for. But the "something" has to be so
mething which
was wanted, or at least which can now be used. Fleming's discovery of the dirty
petri-dish,
infected by Penicillium spores, excited him because he already knew how useful a
bactericidal
agent would be. Proust's madeleine did not answer any currently pressing questio
n, but it
aroused a flood of memories which he was able to use as the trigger of a life-lo
ng project.
Events such as these could not have been foreseen. Both trigger and triggering w
ere
unpredictable. Who was to say that the dish would be left uncovered, and infecte
d by that
particular organism? And who could say that Proust would eat a madeleine on that
occasion?
Even if one could do this (perhaps the laboratory was always untidy, and perhaps
Proust was
addicted to madeleines), one could not predict the effect the trigger would have
on these
individual minds.
This is so even if there are no absolutely random events going on in our brains.
Chaos theory
has taught us that fully deterministic systems can be, in practice, unpredictabl
e. Our
inescapable ignorance of the initial conditions means that we cannot forecast th
e weather,
except in highly general (and short-term) ways. The inner dynamics of the mind a
re more
complex than those of the weather, and the initial conditions -- each person's i
ndividual
experiences, values, and beliefs -- are even more varied. Small wonder, then, if
we cannot
fully foresee the clouds of creativity in people's minds.
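The weather point is standard chaos theory, and can be demonstrated in a few lines with the textbook logistic map (a general illustration, not an example from the book): a fully deterministic rule whose trajectories from two almost-identical starting points soon disagree completely.

```python
# The logistic map x_next = r * x * (1 - x) is fully deterministic: given
# x0 exactly, every later value is fixed. Yet at r = 4 it is chaotic, so
# any ignorance of the initial condition, however small, swamps prediction
# after a few dozen steps.

def logistic_trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.3)           # the "true" initial condition
b = logistic_trajectory(0.3 + 1e-9)   # measured with a one-in-a-billion error

early_gap = abs(a[1] - b[1])                       # still negligible
late_gap = max(abs(x - y) for x, y in zip(a, b))   # order-one divergence
```

Determinism and forecastability thus come apart even for a one-line rule; a mind's dynamics can only be worse.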
To some extent, however, we can. Different thinkers have differing individual st
yles, which
set a characteristic stamp on all their work in a given domain. Thus Dr. Johnson
complained,
"Who but Donne would have compared a good man to a telescope?". Authorial signat
ures are
largely due to the fact that people can employ habitual ways of making "random"
choices. There
may be nothing to say, beforehand, how someone will choose to play the relevant
game. But
after several years of practice, their "random" choices may be as predictable as
anything in
the basic genre concerned.
More mundane examples of creativity, which are P-creative but not H-creative, ca
n sometimes be
predicted -- and even deliberately brought about. Suppose your daughter is havin
g difficulty
mastering an unfamiliar principle in her physics homework. You might fetch a gad
get that
embodies the principle concerned, and leave it on the kitchen-table, hoping that
she will play
around with it and realise the connection for herself. Even if you have to drop
a few hints,
the likelihood is that she will create the central idea. Again, Socratic dialogu
e helps people
to explore their conceptual spaces in (to them) unexpected ways. But Socrates hi
mself, like
those taking his role today, knew what P-creative ideas to expect from his pupil
s.
We cannot predict creative ideas in detail, and we never shall be able to do so.
Human
experience is too richly idiosyncratic. But this does not mean that creativity i
s
fundamentally mysterious, or beyond scientific understanding.
Chapter 10: Elite or Everyman?
Creativity is not a single capacity, nor is it a special one. It is an aspect of
intelligence in general, which involves many different capacities: noticing, rem
embering,
seeing, speaking, classifying, associating, comparing, evaluating, introspecting
, and the
like. Chapter 10 offers evidence for this view, drawing on the work of Perkins (
1981) and also
on computational work of various kinds.
For example, Kekule's description of "long rows, twining and twisting in snakeli
ke motion",
where "one of the snakes had seized hold of its own tail", assumes everyday powe
rs of visual
interpretation and analogy. These capacities are normally taken for granted in d
iscussions of
Kekule's H-creativity, but they require some psychological explanation. Relevant
computational
work on low-level vision suggests that Kekule's imagery was grounded in certain
specific, and
universal, visual capacities -- including the ability to identify lines and end-
points. (His
hunch, by contrast, required special expertise. As remarked in Chapter 4, only a
chemist could
have realized the potential significance of the change in neighbour-relations ca
used by the
coalescence of end-points, or the "snake" which "seized hold of its tail".)
Similarly, Mozart's renowned musical memory, and his reported capacity for heari
ng a whole
symphony "all at once", can be related to computational accounts of powers of me
mory and
comprehension common to us all. Certainly, his musical expertise was superior in
many ways. He
had a better grasp of the conceptual spaces concerned, and a better understandin
g -- better
even than Salieri's -- of how to explore them so as to locate their farthest noo
ks and
crannies. (Unlike Haydn, for example, he was not a composer who made adventurous
transformations). But much of Mozart's genius may have lain in the better use, a
nd the vastly
more extended practice, of facilities we all share.
Much -- but perhaps not all. Possibly, there was something special about Mozart'
s brain which
predisposed him to musical genius (Gardner, 1983). However, we have little notio
n, at present,
of what this could be. It may have been some cerebral detail which had the emerg
ent effect of
giving him greater musical powers. For example, the jazz-improvisation program d
escribed in
Chapter 7 employed only very simple rules to improvise, because its short-term m
emory was
deliberately constrained to match the limited STM of people. Human jazz-musician
s cannot
improvise hierarchically nested chord-sequences "on the fly", but have to compose (or memorize) them beforehand. A change in the range of STM might enable someone to
improvise and
appreciate musical structures of a complexity not otherwise intelligible. But th
is musically
significant change might be due to an apparently "boring" feature of the brain.
Many other examples of creativity (drawn, for instance, from poetry, painting, m
usic, and
choreography) are cited in this chapter. They all rely on familiar capacities fo
r their
effect, and arguably for their occurrence too. We appreciate them intuitively, a
nd normally
take their accessibility -- and their origins -- for granted. But psychological
explanations
in computational terms may be available, at least in outline.
The role of motivation and emotion is briefly mentioned, but is not a prime them
e. This is not
because motivation and emotion are in principle outside the reach of a computati
onal
psychology. Some attempts have been made to bring these matters within a computa
tional account
of the mind (e.g. Boden, 1972; Sloman, 1987). But such attempts provide outline
sketches
rather than functioning models. Still less is it because motivation is irrelevan
t to
creativity. But the main topic of the book is how (not why) novel ideas arise in
human minds.
Chapter 11: Of Humans and Hoverflies
The final chapter focusses on two questions. One is the fourth Lovelace question
: could a
computer really be creative? The other is whether any scientific explanation of
creativity,
whether computational or not, would be dehumanizing in the sense of destroying o
ur wonder at
it -- and at the human mind in general.
With respect to the fourth Lovelace question, the answer "No" may be defended in
at least four
different ways. I call these the brain-stuff argument, the empty-program argumen
t, the
consciousness argument, and the non-human argument. Each of these applies to int
elligence (and
intentionality) in general, not just to creativity in particular.
The brain-stuff argument (Searle, 1980) claims that whereas neuroprotein is a ki
nd of stuff
which can support intelligence, metal and silicon are not. This empirical claim
is conceivably
correct, but we have no specific reason to believe it. Moreover, the associated
claim -- that
it is intuitively obvious that neuroprotein can support intentionality and that
metal and
silicon cannot -- must be rejected.
Intuitively speaking, that neuroprotein supports intelligence is utterly mysteri
ous: how could
that grey mushy stuff inside our skulls have anything to do with intentionality?
Insofar as we
understand this, we do so because of various functions that nervous tissue makes
possible (as
the sodium pump enables action potentials, or "messages", to pass along an axon)
. Any material
substrate capable of supporting all the relevant functions could act as the embo
diment of
mind. Whether neurochemistry describes the only such substrate is an empirical q
uestion, not
to be settled by intuitions.
The empty-program argument is Searle's (1980) claim that a computational psychol
ogy cannot
explain understanding, because programs are all syntax and no semantics: their s
ymbols are
utterly meaningless to the computer itself. I reply that a computer program, whe
n running in a
computer, has proto-semantic (causal) properties, in virtue of which the compute
r does things
-- some of which are among the sorts of thing which enable understanding in huma
ns and animals
(Boden, 1988, ch. 8; Sloman, 1986). (This is not to say that any computer-artefa
ct could
possess understanding in the full sense, or what I have termed "intrinsic intere
sts", grounded
in evolutionary history (Boden, 1972).)
The consciousness argument is that no computer could be conscious, and therefore
-- since
consciousness is needed for the evaluation phase, and even for much of the prepa
ration phase
-- no computer can be creative. I reply that it's not obvious that evaluation mu
st be carried
out consciously. A creative computer might recognize (evaluate) its creative ide
as by using
relevant reflexive criteria without also having consciousness. Moreover, some as
pects of
consciousness can be illuminated by a computational account, although admittedly
"qualia"
present an unsolved problem. The question must remain open -- not just because w
e do not know
the answer, but because we do not clearly understand how to ask the question.
According to the non-human argument, to regard computers as truly intelligent is
not a mere
factual mistake, but a moral absurdity: only members of the human, or animal, co
mmunity should
be granted moral and epistemological consideration (of their interests and opini
ons). If we
ever agreed to remove all the scare-quotes around the psychological words we use
in describing
computers, so inviting them to join our human community, we would be committed t
o respecting
their goals and judgments. This would not be a purely factual matter, but one of
moral and
political choice -- about which it is impossible to legislate now.
In short, each of the four negative replies to the last Lovelace question is cha
llengeable.
But even someone who does accept a negative answer here can consistently accept
positive
answers to the first three Lovelace questions. The main argument of the book rem
ains
unaffected.
The second theme of this final chapter is the question whether, where creativity
is in
question, scientific explanation in general should be spurned. Many people, from
Blake to
Roszak, have seen the natural sciences as dehumanizing in various ways. Three ar
e relevant
here: the ignoring of mentalistic concepts, the denial of cherished beliefs, and
the
destructive demystification of some valued phenomena.
The natural sciences have had nothing to say about psychological phenomena as su
ch; and
scientifically-minded psychologists have often conceptualized them in reductioni
st (e.g.
behaviourist, or physiological) terms. To ignore something is not necessarily to
deny it. But,
given the high status of the natural sciences, the fact that they have not dealt
with the mind
has insidiously downplayed its importance, if not its very existence.
This charge cannot be levelled at computational psychology, however. Intentional
concepts,
such as representation, lie at the heart of it, and of AI. Some philosophers cla
im that these
sciences have no right to use such terms. Even so, they cannot be accused of del
iberately
ignoring intentional phenomena, or of rejecting intentionalist vocabulary.
The second charge of dehumanization concerns what science explicitly denies. Som
e scientific
theories have rejected comforting beliefs, such as geocentrism, special creation
, or rational
self-control. But a scientific psychology need not -- and a computational psycho
logy does not
-- deny creativity, as astronomy denies geocentrism. On the contrary, the preced
ing chapters
have acknowledged creativity again and again. Even to say that it rests on unive
rsal features
of human minds is not to deny that some ideas are surprising, and special, requi
ring
explanation of how they could possibly arise.
However, the humanist's worry concerns not only denial by rejection, but also de
nial by
explanation. The crux of the third type of anti-scientific resistance is the fee
ling that
scientific explanation of any kind must drive out wonder: that to explain someth
ing is to
cease to marvel at it. Not only do we wonder at creativity, but positive evaluat
ion is
essential to the concept. So it may seem that to explain creativity is insidious
ly to
downgrade it -- in effect, to deny it.
Certainly, many examples can be given where understanding drives out wonder. For
instance, we
may marvel at the power of the hoverfly to fly to its mate hovering nearby (so a
s to mate in
mid-air). Many people might be tempted to describe the hoverfly's activities in
terms of its
goals and beliefs, and perhaps even its determination in going straight to its m
ate without
any coyness or prevarication. How wonderful is the mind of the humble hoverfly!
In fact, the hoverfly's flight-path is determined by a simple and inflexible rul
e, hardwired
into its brain. This rule transforms a specific visual signal into a specific mu
scular
response. The fly's initial change of direction depends on the particular approa
ch-angle
subtended by the target-fly. The creature, in effect, always assumes that the si
ze and
velocity of the seen target (which may or may not be a fly) are those correspond
ing to
hoverflies. When the fly initiates a new flight-path, its angle of turn is selected on this rigid, and fallible, basis. Moreover, the fly's path cannot be adjusted in midflight, there
being no way in which it can be influenced by feedback from the movement of the
target animal.
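The logic of such a rule can be caricatured in a few lines. Everything below (the one-dimensional geometry, the velocities, the step count) is an invented stand-in for the fly's real wired-in transformation; the point is only the open-loop structure: one visual signal in, one committed muscular response out, no feedback.

```python
def open_loop_pursuit(target_start, assumed_v, actual_v, steps=10):
    """Caricature of the hoverfly's rule: the heading is chosen once at
    take-off, from where a hoverfly-typical target WOULD be, and is never
    corrected by feedback from the target's actual motion."""
    predicted = target_start + assumed_v * steps  # where a "typical fly" would be
    fly_velocity = predicted / steps              # the one, rigid choice
    fly, target = 0.0, target_start
    for _ in range(steps):
        fly += fly_velocity       # committed course
        target += actual_v        # real motion, invisible to the rule
    return abs(fly - target)      # miss distance

# When the target really does move like a typical conspecific, the rigid
# rule succeeds:
hit = open_loop_pursuit(5.0, assumed_v=1.0, actual_v=1.0)
# When it does not (a faster insect, a tossed pebble), the fly cannot
# correct in mid-flight and simply misses:
miss = open_loop_pursuit(5.0, assumed_v=1.0, actual_v=2.0)
```

A closed-loop pursuer would re-read the target's bearing inside the loop; the absence of exactly that step is what makes the hoverfly's rule rigid and fallible.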
This evidence must dampen the enthusiasm of anyone who had marvelled at the psyc
hological
subtlety of the hoverfly's behaviour. The insect's intelligence has been demysti
fied with a
vengeance, and it no longer seems worthy of much respect. One may see beauty in
the
evolutionary principles that enabled this simple computational mechanism to deve
lop, or in the
biochemistry that makes it function. But the fly itself cannot properly be descr
ibed in
anthropomorphic terms. Even if we wonder at evolution, and at insect-neurophysio
logy, we can
no longer wonder at the subtle mind of the hoverfly.
Many people fear that this disillusioned denial of intelligence in the hoverfly
is a foretaste
of what science will say about our minds too. A few "worrying" examples can inde
ed be given:
for instance, think of how perceived sexual attractiveness turns out to relate t
o pupil-size.
In general, however, this fear is mistaken. The mind of the hoverfly is much les
s marvellous
than we had imagined, so our previous respect for the insect's intellectual prow
ess is shown
up as mere ignorant sentimentality. But computational explanations of thinking c
an increase
our respect for human minds, by showing them to be much more complex and subtle
than we had
previously recognized.
Consider, for instance, the many different ways (some are sketched in Chapters 4
and 5) in
which Kekule could have seen snakes as suggesting ring-molecules. Think of the r
ich
analogy-mapping in Coleridge's mind, which drew on naval memoirs, travellers' ta
les, and
scientific reports to generate the imagery of The Ancient Mariner (Chapter 6). B
ear in mind
the mental complexities (outlined in Chapter 7) of generating an elegant story-l
ine, or
improvising a jazz-melody. And remember the many ways in which random events (th
e mutations
described in Chapter 8, or the serendipities cited in Chapter 9) may be integrat
ed into
pre-existing conceptual spaces with creative effect.
Writing about Coleridge's imagery, Livingston Lowes said: "I am not forgetting b
eauty. It is
because the worth of beauty is transcendent that the subtle ways of the power th
at achieves it
are transcendently worth searching out." His words apply not only to literary st
udies of
creativity, but to scientific enquiry too. A scientific psychology, whether comp
utational or
not, allows us plenty of room to wonder at Mozart, or at our friends' jokes. Psy
chology leaves
poetry in place. Indeed, it adds a new dimension to our awe on encountering crea
tive ideas,
for it helps us to see the richness, and yet the discipline, of the underlying m
ental
processes.
To understand, even to demystify, is not necessarily to denigrate. A scientific
explanation of
creativity shows how extraordinary is the ordinary person's mind. We are, after
all, humans --
not hoverflies.
-==REFERENCES==-

Abelson, R. P. (1973) The structure of belief systems. In: Computer models of th
ought and
language, eds. R. C. Schank & K. M. Colby (pp. 287-340).
Boden, M. A. (1972) Purposive explanation in psychology. Cambridge, Mass.: Harva
rd University
Press.
Boden, M. A. (1988) Computer models of mind: Computational approaches in theoret
ical
psychology. Cambridge: Cambridge University Press.
Boden, M. A. (1990) The creative mind: Myths and mechanisms. London: Weidenfeld
& Nicolson.
(Expanded edn., London: Abacus, 1991.)
Boden, M. A. (in press) What is creativity? In: Dimensions of creativity, ed. M.
A. Boden.
Cambridge, Mass.: MIT Press.
Brannigan, A. (1981) The social basis of scientific discoveries. Cambridge: Camb
ridge
University Press.
Chalmers, D. J., French, R. M., & Hofstadter, D. R. (1991) High-level perception
,
representation, and analogy: A critique of artificial intelligence methodology.
CRCC Technical
Report 49. Center for Research on Concepts and Cognition, Indiana University, Bl
oomington,
Indiana.
Clark, A., & Karmiloff-Smith, A. (in press) The cognizer's innards. Mind and Lan
guage.
Davey, A. (1978) Discourse production: A computer model of some aspects of a spe
aker.
Edinburgh: Edinburgh University Press.
Dyer, M. G. (1983) In-depth understanding: A computer model of integrated proces
sing for
narrative comprehension. Cambridge, Mass.: MIT Press.
Elton, M. (1993) Towards artificial creativity. In: Proceedings of LUTCHI sympos
ium on
creativity and cognition, ed. E. Edmonds (un-numbered). Loughborough: University
of
Loughborough.
Falkenhainer, B., Forbus, K. D., & Gentner, D. (1989) The structure-mapping engi
ne: Algorithm
and examples, AI Journal, 41, 1-63.
Gardner, H. (1983) Frames of mind: The theory of multiple intelligences. London:
Heinemann.
Gelernter, H. L. (1963) Realization of a geometry-theorem proving machine. In: Computers and thought, eds. E. A. Feigenbaum & J. Feldman (pp. 134-152). New York: McGraw-Hill.
Haase, K. W. (1986) Discovery systems. Proc. European Conf. on AI, 1, 546-555.
Hadamard, J. (1954) An essay on the psychology of invention in the mathematical
field. New
York: Dover.
Hodgson, P. (1990) Understanding computing, cognition, and creativity. MSc thesi
s, University
of the West of England.
Holland, J. H., Holyoak, K. J., Nisbett, R. E., & Thagard, P. R. (1986) Inductio
n: Processes
of inference, learning, and discovery. Cambridge, Mass.: MIT Press.
Holyoak, K. J., & Thagard, P. R. (1989a) Analogical mapping by constraint satisf
action.
Cognitive Science, 13, 295-356.
Holyoak, K. J., & Thagard, P. R. (1989b) A computational model of analogical pro
blem solving.
In: S. Vosniadou & A. Ortony (Eds.), Similarity and analogical reasoning (pp. 24
2-266).
Cambridge: Cambridge University Press.
Johnson-Laird, P. N. (1991) Jazz improvisation: A theory at the computational le
vel. In:
Representing musical structure, eds. P. Howell, R. West, & I. Cross. (pp. 291-32
6). London:
Academic Press.
Karmiloff-Smith, A. (1993) Beyond modularity: A developmental perspective on cog
nitive
science. Cambridge, Mass.: MIT Press.
Klein, S., Aeschlimann, J. F., Balsiger, D. F., Converse, S. L., Court, C., Fost
er, M., Lao,
R., Oakley, J. D., & Smith, J. (1973) Automatic novel writing: A status report.
Technical
Report 186. Madison, Wis.: University of Wisconsin Computer Science Dept.
Koestler, A. (1975) The act of creation. London: Picador.
Langley, P., Simon, H. A., Bradshaw, G. L., & Zytkow, J. M. (1987) Scientific di
scovery:
Computational explorations of the creative process. Cambridge, Mass.: MIT Press.

Lenat, D. B. (1983) The role of heuristics in learning by discovery: Three case
studies. In:
Machine learning: An artificial intelligence approach, eds. R. S. Michalski, J.
G. Carbonell,
& T. M. Mitchell (pp. 243-306). Palo Alto, Calif.: Tioga.
Lenat, D. B., & Seely-Brown, J. (1984) Why AM and EURISKO appear to work. AI Jou
rnal, 23,
269-94.
Livingston Lowes, J. (1951) The road to Xanadu: A study in the ways of the imagi
nation.
London: Constable.
Longuet-Higgins, H. C. (1987) Mental processes: Studies in cognitive science. Ca
mbridge,
Mass.: MIT Press.
Longuet-Higgins, H. C. (in preparation) Musical aesthetics. In: Artificial intel
ligence and
the mind: New breakthroughs or dead ends?, eds. M. A. Boden & A. Bundy. London:
Royal Society
& British Academy (to appear).
McCorduck, P. (1991) Aaron's code. San Francisco: W. H. Freeman.
Masterman, M., & McKinnon Wood, R. (1968) Computerized Japanese haiku. In: Cyber
netic
Serendipity, ed. J. Reichardt (pp. 54-5). London: Studio International.
Meehan, J. (1981) TALE-SPIN. In: Inside computer understanding: Five programs pl
us miniatures,
eds. R. C. Schank & C. J. Riesbeck (pp. 197-226). Hillsdale, NJ: Erlbaum.
Michie, D., & Johnston, R. (1984) The creative computer: Machine intelligence an
d human
knowledge. London: Viking.
Michalski, R. S., & Chilausky, R. L. (1980) Learning by being told and learning
from examples:
an experimental comparison of two methods of knowledge acquisition in the contex
t of
developing an expert system for soybean disease diagnosis. International Journal
of Policy
Analysis and Information Systems, 4, 125-61.
Mitchell, M. (1993) Analogy-making as perception. Cambridge, Mass.: MIT Press.
Perkins, D. N. (1981) The mind's best work. Cambridge, Mass.: Harvard University
Press.
Poincare, H. (1982) The foundations of science: Science and hypothesis, The valu
e of science,
Science and method. Washington: University Press of America.
Ritchie, G. D., & Hanna, F. K. (1984) AM: A case study in AI methodology. AI Jou
rnal, 23,
249-63.
Rowe, J., & Partridge, D. (1993) Creativity: A survey of AI approaches. Artifici
al
Intelligence Review, 7, 43-70.
Schaffer, S. (in press) Making up discovery. In: Dimensions of creativity, ed M.
A. Boden.
Cambridge, Mass.: MIT Press.
Searle, J. R. (1980) Minds, brains, and programs. Behavioral and Brain Sciences, 3, 473-497.
Sims, K. (1991) Artificial evolution for computer graphics. Computer Graphics, 2
5 (no.4), July
1991, 319-328.
Sloman, A. (1986) What sorts of machines can understand the symbols they use? Pr
oceedings of
the Aristotelian Society, Supplementary Volume, 60, 61-80.
Sloman, A. (1987) Motives, mechanisms, and emotions. Cognition and Emotion, 1, 2
17-33.
(Reprinted in: The philosophy of artificial intelligence, ed. M. A. Boden. Oxfor
d: Oxford
University Press, pp. 231-47).
Taylor, C. W. (1988) Various approaches to and definitions of creativity. In: Th
e nature of
creativity: Contemporary psychological perspectives, ed. R. J. Sternberg (pp. 99
-121).
Cambridge: Cambridge University Press.
Thagard, P. R. (1992) Conceptual revolutions. Princeton, NJ: Princeton Universit
y Press.
Todd, S., & Latham, W. (1992) Evolutionary art and computers. London: Academic P
ress.
Turner, S. (1992) MINSTREL: A model of story-telling and creativity. Technical n
ote
UCLA-AI-17-92. Los Angeles: AI Laboratory, University of California at Los Angel
es.
