Sie sind auf Seite 1von 1

What is this? 1 Can Computers Think?

41 Michael Scriven, 1960, as articulated by Arthur Danto, 1960 42 Arthur Danto, 1960
If a robot can honestly talk about its feelings, it has feelings. The robot's dilemma. Once an advanced robot is built, the way we talk about robots, machines, and
We can determine whether a robot has feelings once we configure it to feelings will either change or will not. This poses a dilemma.

1998
(1) use English the way humans do, (2) distinguish truth from falsehood,

Can computers
(3) answer questions honestly. We then simply ask, "Are you conscious Either Or
of your feelings?" If it says, "yes," then it has feelings. 99 Alan Turing, 1950
English will not change, English will change, Computers are not entirely predictable. The belief that
Are you conscious Yes. computers are entirely predictable arises from the false assumption
of your feelings? in which case in which case (widespread in philosophy and in mathematics) that humans can
know everything that follows deductively from a set of premises.

be creative?
is
we will be forced to English can evolve in 1 of 2 ways. disputed
But humans learn new things in part through the working out of
say the robot is not deductive consequences. Similarly, humans don't know everything

The History and Status of the Debate — Map 1 of 7


by
conscious, because Either Or a computer will do given some initial state of the computer; we learn
is English speakers do new things in part by watching them perform their calculations.
disputed not use "conscious" We simply decide to We construct a special language that applies exclusively to machines, for
by as a predicate for call robots "conscious," example, a language that uses the suffix "-m" to represent the fact that

This info-mural is one of seven An Issue Map™ Publication


is
disputed
by
machines.
in which case

we have an arbitrary
and hence unwarranted
mentalistic terms like "knows" and "conscious" apply to physical events
("knows-m," "conscious-m") in machines,

in which case 98 Anticipated by


Alan Turing, 1950
That's not
surprising
at all ...
100 Alan Turing, 1950
Machines frequently take us
by surprise. Computer users
and even experts are often
surprised by the things that
The machine isn't
creative, but the
human is creatively
surprised. 101 Anticipated by Alan Turing, 1950
Surprise is a result of human

“argumentation maps” in a change in the language. words like "conscious-m" would be used for the robot in the same situations Machines can never computers do. creativity. Even if we are
in which "conscious" would be used for humans. But a lack of knowledge take us by surprise. surprised by what a machine does,
about how human consciousness might correspond to robot consciousness Machines are entirely Wow! What that reaction does not mean that the
is precisely the issue at hand. is supported by predictable in their is a surprise! machine has done anything original
behavior. Because is or creative. It just means that the
43 Paul Weiss, 1960 disputed
they never do anything disputed human made a creative prediction

series that explores Turing’s


In Either Case by

Can computers
Machines cannot love or be loved. Machines, new, they can never by about what the computer would do,
which are mere collections of parts, cannot love or be No means is provided to tell whether a robot is conscious. At best the question is pushed back. surprise us. and was then surprised when the
loved. Only unified wholes that govern their parts, such
Start Here is
disputed
as humans, have the capacity to love what is lovable or be
loved by those who love. Machines fail on both counts, so In Either Case
computer acted differently.

question: “Can computers think by they are subhuman and lack minds.

have emotions?
Simply asking the machine if it has conscious feeling will not help us determine if it does. is supported by
is
The analytical engine has no pretensions
disputed
to originate anything. ... It can follow
by
analysis; but it has no power of

and/or will they ever be able


is 55 Aaron Sloman and Monica Croucher, 1981 anticipating any analytical relations or
Emotions are the product of motivational
1 Alan Turing, 1950 I believe that at the end
of the century ... one
disputed
by representations. Emotions result from interactions
between motives and other cognitive states. Motives is supported by
56 Aaron Sloman, 1987
Hierarchical theory of
truths.
It can do more
104 Alan Turing, 1950
The analytical engine may have
are representations of states of the world to be achieved,
Yes, machines can will be able to speak of 30 J. J. C. Smart, 1964 Rocks: nonliving and no feelings. 44 Margaret Boden, 1977 prevented, and so forth. A robot with the proper affects. Emotional states
103 Countess of Lovelace, 1842
than you realize, been able to think for itself. Ada
Ada. 102 Alan Turing, 1950

to?” Argumentation mapping is Check! arise from hierarchically Lovelace was justified in denying that
Having feelings does not logically Emotions are cognitive schemata. motivational processes will have emotions.
structured dispositional states, The analytical engine can The argument from human
the analytical engine could be
machines thinking Salt: nonliving and no feelings. What is essential to emotions is the schema never do anything original. creativity applies to any
(or will be able to) without expecting to be is 28
is
disputed
by
imply being a living organism.
Although we haven't yet come across any
nonliving entities with feelings, perhaps
Check!
Staplers ... check!
Water ... check!
of cognitive evaluation that determines the
relationship between the emotion and the
that is, tendencies to behave
in certain ways given certain
circumstances. Higher-level
The analytical engine (see sidebar,
"The Analytical Engine") could
is
disputed
by
creative, because she had no evidence
that it was creative. But because the
analytical engine was in fact a
case of surprise. You could
always say that being surprised
rest of the cognitive states of the subject.
think. A computational contradicted. disputed Machines in the future we will. There is no logical dispositions influence lower- never discover any new facts. It is came from you, the interpreter,

a method that provides: contradiction in the idea of a nonliving Well, no nonliving things with In order for machines to have emotions, limited to drawing out consequences universal digital computer, it may rather than from anything
by can't have
emotions. can't being that has feelings. feelings ... yet. they must model the complex interactions
level dispositions, which in
of facts that it has been provided PHOTO # 32 have had far greater capabilities than original on the other person's or
turn influence external Refer to photocopy for she realized. With added speed and
involved in the use of such concepts as with. In contemporary terms, a machine's part. For example, if
system can possess all Machines can experience
never be in
emotional states 29 Paul Ziff, 1959
31 J. J. C. Smart, 1964
is
disputed
by
pride, shame, and so forth. Furthermore,
these concepts must be (partially) is
57 Geoff Simons, 1985
Emotion is a type of
information processing.
behavior.
computer can only do what it has
been programmed to do.
precise cropping
instructions.
Countess of Lovelace
storage capacity the analytical engine
may have been able to think for itself.
a human surprises you with a
joke, then you could argue that

- a method for portraying major


The concept of feeling responsible for the behavior of the system. the surprise was a result of your
important elements of (they can never
be angry,
joyous, fearful, is supported by
only applies to living
organisms. Because robots is
disputed
We can imagine artifacts that have feelings.
Several cases show that artifacts could have feelings.
(1) If the biblical account of creation in Genesis were
Margaret Boden disputed
by
Once we understand the
biochemical and cybernetic
aspects of human emotion, we
interpretation of the joke rather
than anything creative on the
45 Daniel Dennett, 1978 joke teller's part.
human thinking or etc.). Emotions
are necessary for
thought.
are mechanistic artifacts, not
organisms, they cannot have
feelings.
by true, then humans would be both living creatures and
artifacts created by God. (2) We could imagine
Our intuitions about pain are incoherent. At present, it's easy to
criticize the possibility of robot pain, but only because our everyday
will be able to build computers
with emotions.
The Analytical Engine

philosophical, political, and understanding. Therefore,


computers can't
think. is
disputed
by
self-replicating mechanisms whose offspring would
manifest small random alterations, allowing them to
evolve. Such mechanisms might be considered
living and at the same time artifacts.
They are
understanding of pain is incoherent and self-contradictory. For example,
morphine is sometimes described as preventing the generation of pain, and
sometimes as just blocking pain that already exists. But those are
inconsistent descriptions. Once we have a coherent theory of pain, a robot
could in principle be constructed to instantiate that theory and thereby feel 58 Geoff Simons, 1985 is
97
Computers can never be creative. Computers only do
Invented by Charles Babbage circa 1860,
the analytical engine was a mechanical
computer composed of gears, cranks, and
wheels, which could be programmed by
punch cards. In principle, Babbage's

pragmatic debates
artifacts of pain. The Turing test provides I really do disputed what they are programmed to do; they have no originality analytical engine could carry out any of the
Alan Turing evidence for emotions as well

Corbis-Bettman
God as well Once we clear up is feel bad. by or creative powers. calculations a modern electronic computer
as living No! Morphine just as for intelligence. Because Note: Similar debates play out in the "Free Will" arguments. can, but due to construction and design
Morphine prevents these confusions, disputed
behavior is an important part of
creatures. is the generation of blocks the pain that we can implement by costs the analytical engine was never built
disputed already exists. determining whether a system has Yes, that's a during Babbage's lifetime (several have
pain. a theory of pain on emotions, the Turing test is useful person!

- a summary of an ongoing, 32 Hilary Putnam, 1964


by a computer. been constructed since).
as a test for emotional capacities as
"Alive" is not definitionally based on structure. well as for general intelligence. If a Charles Babbage
Because the definition of "alive" is not based on structure, it robot can pass the Turing test and if
allows for nonhuman robot physiologies. Robots made up of it has a cognitively plausible
cogs and transistors instead of neurons and blood vessels might internal structure, then it can have 106 Douglas Hofstadter, 1995

major philosophical debate of


have feelings because they might actually be alive. emotions. The ELIZA effect. The ELIZA effect is a
Note: Also, see Map 2. 105
Computers have tendency to read more into computer performance
That's just an already been than is warranted by their underlying code. For
33 Georges Rey, 1980 is is example, the computerized psychotherapy
Machines lack the physiological components of emotion. evolutionary disputed creative. Computer disputed
is supported by remnant. It's Implemented Model by models that exhibit program ELIZA (see "ELIZA," Map 2, Box 34)
Machines lack the human physiology that is essential to emotions, by

the 20th century


is 34 Aaron Sloman, 1987 not essential creativity or at least gives apparently sympathetic responses to human
for example, the ability to secrete hormones and neuroregulators. 47 concerns, but in fact is only utilizing a set of
Because machines can't reproduce such a physiology through disputed Physiology is not essential to emotion. some component of
by to emotion. Human emotion can BORIS. BORIS is a narrative reader designed to understand creativity have already canned responses.
abstract computational processes, they can't possess emotions. is descriptions of the emotional states of narrative characters. Note: The ELIZA effect was recognized and

Can computers have free will?


be implemented on a computer been developed.
because the relevant features can
disputed 46 Michael Dyer, 1987 BORIS can predict the emotional responses of characters and described by ELIZA's creator, Joseph
35 Joseph F. Rychlak, 1991 is supported by
by Emotions can be modeled by interpret those responses by tracing them back to their probable Weizenbaum, though he didn't give it that title.

- a new way of doing intellec- Machines can't think dialectically, and be modeled (the emotion's Douglas Hofstadter
interaction with cognitive states, describing their relations to other causes.
dialectical thinking is necessary for cognitive states. Modeling emotions
emotions. Emotions are experienced in motivations, etc.). The
physiological aspects of emotion involves two tasks: (1) the semantic task of Implemented Model
complicated dialectical circumstances, which programming a system to understand Implemented Model
is supported by require the ability to make judgments about (which include biochemistry,
3 4 Ninian Smart, 1964 behavior, and proprioception) are emotions, and (2) the functional/behavioral 48 108 Philip Johnson-Laird, 1988a

tual history.
Humans also lack free Humans are programmed. If you accept determinism, then you others and gauge oppositions. Machines can't task of programming a system to behave
reason in that way, so machines can't experience evolutionary remnants; they are OpEd. OpEd is an editorial reader that deals with The jazz generator. The jazz
will. Whether or not is supported by accept that nature has programmed you to behave in certain ways in not essential. emotionally through the interaction of is supported by nonnarrative editorials—for example, critical book is supported by
computers have free will is emotions. emotional states and other cognitive states, generator produces chord
certain contexts, even though that programming is subtler than the reviews. The program tracks the beliefs of the writer sequences and uses them to
is irrelevant to the issue of programming a computer receives. Supported by such as planning, learning, and recall.
disputed "Symbol Systems Cannot Think Dialectically," as well as the beliefs the writer ascribes to his or her improvise chords, bass-line
whether machines can critics. Unlike BORIS, OpEd is able to deal with
by think. People can think, Map 3, Box 25. melodies, and rhythms.
Joseph Rychlak nonnarrative texts, in which "the writer explicitly

What does it
and they don't have free 50 Aaron Sloman, 1987 supports one set of beliefs while attacking another."
Emotions are the solution to a design Implemented Model
will. People are just 5 [H]uman beings are slaves of brute matter,
as deterministic as Free will is an illusion According to the modern scientific view, compelled to act in particular ways by virtue is supported by
36 37 Geoffrey Jefferson, 1949 is problem. Emotions (both in organic creatures and
Emotions are necessary for thought. in artificial creations) are the solution to a design 107 H. Gelernter, 1963
machines are. So machines of experience. We may there is simply no room at all for "freedom of biochemical and neuronal factors. What Emotional experience is necessary for thought. The only entities disputed The geometry program. The geometry
may yet be able to think. think we are free, but that of the human will" (1986, p. 306). we see is the illusory nature of free will Only systems that can be in emotional is supported by that can possess human abilities are entities that can act on the basis of felt by problem—how to cope intelligently with a rapidly
states can be said to think. changing environment, given established goals and program is a system that works backward
is just an illusion of (1985, p. 109). emotions. No mechanism can feel anything. Therefore, machines can't from geometric theorems, searching for Implemented Model
experience. Actually, we possess human abilities, in particular, the ability to think. limited processing resources. In both humans and
machines the problem is solved with intelligent is supported by their proofs by means–end analysis. This 110 Jim Meehan, 1975
are determined to do what Note: Also, see "Mechanisms Can't Possess Human Consciousness," Map 6, Box 10. planning breaks down the problems using a ... George Ant was very
is supported by we do by our underlying computational strategies. TALE-SPIN. This program writes thirsty. George wanted to
Implemented Model hierarchy of goals and subgoals. To avoid
neural machinery. is supported by stories with characters that have get some water. George
49 impossible searches the program uses
is supported by goals and subgoals dependent on walked from his patch of
38 David Gelernter, 1994 51 Nico Frijda and Jaap Swagerman, 1987 DAYDREAMER. DAYDREAMER is heuristics to select the most promising their motivations. Its characters
Computers must be capable of emotional association to think. In order to Emotions are manifestations of concern search paths. ground across the meadow
a stream of thought generator that cooperate in each other's plans and through the valley to a river
Geoff Simons is supported by think, a computer must be capable of a full spectrum of thought. Computers may be realization. Emotional states result from a "concern specifies how representations of can form competitive relationships

contain?
Marvin Minsky capable of high-end thinking, which is focused, analytic, and goal-oriented. But in order is bank ...
realization system" that matches internal representations emotional states affect other forms of Implemented Model when necessary to achieve their
to think as humans do they must also be capable of low-end thinking, which is diffuse, disputed
against actual circumstances in order to cope with an cognitive processing. It does this by goals. The program can also
analogical, and associative. For example, a flower and a flowered dress might be associated by
uncertain environment. Computers that implement the concocting "daydreams" of possible 109 Margaret Masterman, 1971 represent a wide range of
6 Philip Johnson-Laird, 1988a in low-end thought by a diffuse set of emotionally charged linkages. concern realization system go through emotional states. outcomes and reactions and then using Haiku program. A program has communications between its
Free will results from a multilevel representational structure. those daydreams to represent the stream been written that develops haiku All white in the buds
Options for action I flash snow peaks characters.
A multilevel representational structure is capable of producing free of consciousness of the system. (a style of Japanese poetry)
will. The system must have levels for: through interaction with humans. in the spring
52 Andrew Ortony, G. Clure, and A. Collins, 1988
• representing options for action (e.g., go to dinner, read, take a walk); 39 Tom Stonier, 1992
Emotions are cognitive evaluations. The model provides poets with Bang the sun has
• representing the grounds for deciding which option to take (e.g., Decision-making process
Emotional machines need limbic systems.
Emotions are determined by the structure, content,
is supported by synonym lists to aid in word fogged. Implemented Model
is choose the one that makes me happy, choose by flipping a coin); Emotional machines need the machine equivalent of the 53 Michael Arbib, 1992 choice and also constrains line The computer recognizes that
is and organization of knowledge representations and
disputed • representing a method for deciding which decision-making process human limbic system. The limbic system subserves
+ Emotions color perception and action. length to ensure that the haiku is 112 Margaret Boden, 1990
letter A without having been
by to follow (e.g., follow the most "rational" method, follow the fastest emotional states, fosters drives, and motivates behavior. disputed the processes that operate on them. A machine ! Cognitive appraisal, in the form of knowledge properly formed. The haiku Connectionist systems
programmed to do so.
method).
Computers that have been programmed with such multilevel structures
Grounds for choosing a
decision-making process is supported by
It is also responsible for the pleasure-pain principle,
which guides the activities of all higher animals.
Through the development of artificial limbic systems,
by equipped with the correct knowledge-handling
mechanisms, which result in appropriate behavior,
will have emotions.
is
disputed
by
#
@
* !! #
@
representation plus appropriate behavior, is
not enough to convert bare information
processing into emotion. Such a theory does
program can run without human
interaction by making arbitrary
is supported by exhibit creativity.
Connectionist networks can
Philip Johnson-Laird can exhibit free will. choices from its synonym lists. learn to recognize patterns
emotional machines will be attainable in 20–50 years. is supported by not account for the fact that emotions can without being specifically
color one's perceptions and actions. For Implemented Model programmed to do so.
40 Hans Moravec, 1988 54 Philip Johnson-Laird, 1988a example, the perception of a winning Note: Also, see Map 4.
7 Geoff Simons, 1985 Artificial minds should mimic animal is Feelings are information signals in a cognitive system. Feelings are needs and emotions, touchdown in a football game could be 111 Harold Cohen, B. Cohen,
8 Geoff Simons, 1985 which correspond to information signals of two kinds: (1) needs, which arise from lower-level computationally modeled as knowledge
Free will is a decision-making process. evolution. The fastest progress in AI research disputed and P. Nii, 1984
Conditional jumps constitute free will. The ability of a system to perform conditional jumps when distributed processors that monitor certain internal aspects of the body; (2) emotions, which also representation plus appropriate
Free will is a decision-making process characterized can be made by imitating the capabilities of by AARON. AARON produces
is confronted with changing information gives it the potential to make free decisions. For example, a computer animals, starting near the bottom of the arise from lower-level distributed processors but originate as cognitive interpretations of external behavior. But this doesn't account for the
by selection of options, discrimination between is supported by
may or may not "jump" when it interprets the instruction "proceed to address 9739 if the contents of register is supported by
visual art by selecting a
disputed clusters of data, and choice between alternatives. phylogenetic scale and working upward toward events, especially social events. A robot could have feelings if its computational structure implemented differently colored perceptions of fans of random starting point on a
by A are less than 10." The decision making that results from this ability frees the machine from being a mere those 2 kinds of signals. opposing teams.

Altogether the seven maps:


Because computers already make such choices, they animals with more complex nervous systems. canvas and then drawing lines
puppet of the programmer.
possess free will. from that point using a
complex set of if-then rules.

10 Jack Copeland, 1993 12 A. J. Ayer, 1954 Implemented Model


Free will arises from random selection of alternatives in nil

- summarize over 800 major


Free will is necessary for moral

Should we pretend computers


preference situations. When an otherwise deterministic system responsibility. Randomness and
makes a random choice in a nil preference situation, that system moral responsibility are incompatible. We cannot be 113 Sheldon Klein, 1975
exhibits free will. A nil preference situation is one in which an agent responsible for what happens randomly any more than we Book generator. This ... Lady Buxley was
must choose between a variety of equally preferred alternatives (for can be responsible for what is predetermined. Because any automatic novel writer near James. James
example, whether to eat one orange or another from a bag of equally adequate account of moral responsibility should be grounded generates 2,100-word caressed Lady

moves in the debates threaded


is supported by good oranges). The available alternatives may have arisen from Buxley with passion.
is supported by in the notion of free will, randomness cannot adequately mysteries. It develops a is disputed

will never be able to think? Is the brain a computer?


deterministic factors, but "when the dice roll," the choice is made characterize free will. rudimentary plot based James was Lady by 114 Margaret Boden, 1977
freely. is supported by on the conflicting Buxley's lover ... The book generator is inadequate. The
motivations of its is book-writing program's fiction is inadequate for the
13 Jack Copeland, 1993 characters and fits the following reasons. (1) The stories are shapeless and
disputed

into claims, rebuttals, and


Random choice and responsibility model of a mystery story rambling. (2) The specific motivational patterns are
are compatible. An agent that chooses by
11 by revealing the relatively crude and unstructured. (3) The
is 9 Alan Turing, 1951 randomly in a nil preference situation (one murderer at the end. identification of the murderer comes as a statement
Machines can exhibit free Randomization sacrifices in which all choices are equally preferred) 59 Anticipated by Alan Turing, 1950 60 Alan Turing, 1950
disputed responsibility. Machines rather than as a discovery.
by will by way of random is is still responsible for its actions. A gunman The heads-in-the-sand objection. The transmigration consolation.
selection. Free will can be disputed
that make decisions based on can randomly choose to kill 1 of 5 hostages. The consequences of machine thought are too dreadful to The heads-in-the-sand objection is

counterrebuttals
random choices have no is
produced in a machine that by He chooses at random, but he is still accept. We should "stick our heads in the sand" and hope that too trivial to deserve a response; 79 John Searle, 1992 80 Jack Copeland, 1993
responsibility for their actions, is disputed is
generates random values, for responsible for killing the person whom he machines will never be able to think or have souls. consolation is more appropriate. It Nothing is intrinsically Programs are not
because it is then a matter of disputed by disputed
example, by sampling random picks, because he was responsible for taking may be comforting to believe that a digital computer. The universally realizable.
chance that they act one way by by
noise. the people hostage in the first place. souls are passed from humans to syntactic structures that Even if it is true that during
rather than another. Because Random choice only revokes responsibility machines when humans die by the define computers are not some interval of time a pattern

- 97-130 arguments and rebuttals


responsibility is necessary for if the choice is between alternatives of theological doctrine of the intrinsic to physics; they are

Can computers
free will, such machines lack of molecule movements on the 81 John Searle, 1992
differing ethical value. transmigration of souls. ascribed to physical systems wall is isomorphic with, for Universal
free will. by humans. So the That wall doesn't 117 Harry Collins, 1994
16 Jack Copeland, 1993 That wall can example, the formal pattern of realizability is not 116 Harry Collins, 1994
Being a deterministic machine is question, "Is the brain a support the same the WordStar computer essential to the Scientific reasoning The socialization test. The importance of
digital computer?" is be interpreted counterfactuals socialization is demonstrated by the "socialization
compatible with having free will. as a digital program, the wall will not argument. requires social

per map
Humans and computers are both ill-defined, because syntax as WordStar. support the same counterfactuals agreement. Computers test," a variant of the Turing test. In the

Does God prohibit


is can be ascribed to any computer. Even without
socialization test, a human control and a machine

reason
deterministic systems, but this is is as the program. If the WordStar is cannot reason
is disputed 14 sufficiently complex universal realizability, are both given a passage of "mucked-up" English.
compatible with their being free. disputed program had been given different disputed scientifically because
disputed by The helplessness system. Syntactic it is still true that Both the machine and the human control must
Actions caused by an agent's beliefs, by input, it would have behaved by they are not members of
by argument. When agents structures are not just syntax is observer is supported by correct all the errors and transliterate the passage
desires, inclinations, and so forth are (human or machine) make differently. But the wall, which relative. And this is society. Scientific laws
free, because if those factors had been multiply realizable in was not engineered to implement into normal English. If a judge cannot tell which

- 70 issue areas in the 7 maps


choices at random, they lack enough to show that and data do not follow
different, the agent might have acted numerous physical systems, WordStar, would not respond to from the application of an text was error-corrected by machine and which

computers from thinking?


free will, because their they are universally nothing, including the by the human control subject, then the machine

scientifically?
differently. choices are then beyond different "input" (that is, a brain, is intrinsically a algorithm, but are
realizable in any physical different pattern of molecular developed through a passes this test for socialization.
their control. As J. A. system. digital computer. Note: For more on the Turing test, see Map 2.
Shaffer (1968) puts it, the organization) in the same way. quasipolitical process of
agent is "at the helpless is So WordStar is not universally negotiation. Harry Collins

- 32 sidebars history and further is


disputed
by
2
Computers can't have
free will. Machines only
do what they have been
mercy of these eruptions
within him which control
his behavior."
is
disputed
by
15 Jack Copeland, 1993
The Turing randomizer is only a tiebreaker. The
helplessness argument is misleading, because it implies that
random processes control all decision making—for
61 Anticipated by
Alan Turing, 1950 62 Alan Turing, 1950
disputed
by

is supported by
realizable.

coun •ter • fac • tu •al: A conditional (if-then)


statement whose "if" clause runs counter to the facts
is supported by

118 Carl Hempel, 1985


Computers can't introduce new
I throwed
trash the
wastebasket.
I threw the
trash in the
wastebasket.

background
designed or programmed to example, the decision of whether to wait at the curb or The theological objection. The theological objection is terms or explanatory principles. It's vocabulary is
do. They lack free will, but free will: The ability to make is You have no ungrounded. The view that only 82 of reality. For example, the statement, "if pigs had
jump out in front of an oncoming truck. All the Turing Only entities with immortal soul! wings then they would fly," is a counterfactual, A computer cannot be original because fixed! It can't come
free will is necessary for voluntary, unconstrained decisions. disputed souls can think. God has given
is humans have souls is as ungrounded and Formal programs can be realized in up with any new
randomizer does is determine what a machine will do in disputed multiple physical media. The same formal because the "if" clause—that pigs have wings—is it cannot introduce new theoretical
thought. Therefore, Freely made decisions are independent those situations in which options are equally preferred. by souls to humans, but not arbitrary as the view that men have souls terms or principles. Computers' terms or principles ... 119 Richard Scheines, 1988
computers can't think. of the influence of such deterministic by but women don't. For all we know, in program could be realized in a digital computer, false.
to machines. Therefore, "discoveries" are limited to those that Computers can introduce
factors as genetics (nature) and humans can think, and creating thinking machines we may be in a human brain, in beer cans and toilet paper, new terms. Computers can
or in any number of physical implementations. can be expressed using the program's
conditioning (nurture). computers can't. serving God's ends by providing fixed vocabulary and conceptual introduce new terms using
dwellings for souls he creates. The program is defined solely in terms of its is automated principles of
18 Geoff Simons, 1985. formal syntactic structure; its mode of physical apparatus. Human discovery, by disputed
78
is supported by contrast, involves the introduction of explanatory adequacy. This
Some computers can program themselves. Automatic implementation is irrelevant. by has been shown using a
programming systems (APs) write computer programs by following The biological assumption. The brain is a machine that can Note: For more multiple realizability arguments, new terms and principles that cannot be
think. Its neurobiological processes are similar to or identical with defined in terms of those previously program that uses explanatory

The argumentation maps:


some of the same heuristics that human programmers use. They see the "Can functional states generate Beer adequacy principles to
is supported by specify the task that the program is to perform, choose a language to the information processes of a computer. consciousness?" arguments on Map 6 and sidebar, available.
17 is
Note: More specific versions of the biological assumption introduce new terms in the
write the program in, articulate the problem area the program will be

Can computers
Computers only exhibit the free disputed "Formal Systems: An Overview," on Map 7. domain of "causal models"—a
applied to, and make use of information about various programming argument are represented on Map 3 and on Map 5.
will of their programmers. by class of mathematical theories
strategies. Programs written by such APs are not written by humans, 120 Carl Hempel, 1985 popular in social science.
Computers can't have free will and so computers that run those programs do not just mirror the free Computers can't adequately evaluate

- arrange debate so that the cur-


because they cannot act except as they will of humans. 115 hypotheses. A computer model of
are determined to by their designers
and programmers. That's an 85 Keith Stanovich, 1990
Computers
can't reason
? scientific discovery would have to use a
criterion of preference to choose between

understand arithmetic?
explanation by And then a miracle happens ... scientifically. hypotheses that account for available data
is is supported by
is is miracle! Penrose does not explain how Computers are unable to equally well. But criteria of preference tend
disputed
de • ter • min • ism: The belief that

rent stopping point of each


22 Hilary Putnam, similar to 84 Roger Penrose, 1990 disputed quantum effects in the brain might by think and reason as human to be imprecise and idiosyncratic, so it is
20 Ninian Smart, 1964 all actions and events are determined 1964 Low-level quantum by affect consciousness. He simply scientists do. unlikely that such a criterion could be
is supported by is supported by
Preprogrammed humans have psychological by the influences of nature and history. The robot learning effects are I'm not assumes that quantum effects and implemented on a computer.
states. If determinism is true, then humans are programmed Human actions result from strict causal response. A robot uncomputable. The computable. the brain are miraculously related.
by nature and yet have psychological states. Thus, if laws that describe the brain and its could be programmed 1+1? What biological phenomena that
is determinism is true, we have a counterexample to the claim relation to the world. Free will is an to produce new 63 Stanley L. Jaki, 1969; Fred Dretske, 1990 does that 64 William Rapaport, underlie consciousness 123 Harry Collins, 1994

debate thread is easily seen disputed


by
that preprogrammed entities can't have psychological states.
Supported by
"Humans Are Programmed," Box 4.
illusion. behaviors by learning
in the same way
humans do. For
example, a program
that learned to tell
is
disputed
by
Computers can't add, much less
think. Machines only operate on uninterpreted
symbols. Even when they perform the operations
corresponding to addition, they are merely
shuffling symbols that are meaningless to them. 1+1=2
mean?

is
disputed
1988
Computers can learn
to add. Computers that
possess internal semantic
networks can learn
is supported by
83
The operation of the brain is
computable. Once we have a
sufficient understanding of the laws
is
disputed
operate at a level at which
quantum effects could exert
an influence. Because
quantum effects are not
computable, the brain and
is
disputed
by
86 Herbert Simon, 1995
Quantum effects are irrelevant to symbolic processes. is
121
Computers have already
reasoned scientifically.
Implemented Model
122 Pat Langley, Hubert Simon, Gary Bradshaw,
and Jan Zytkow, 1987
BACON. A program for discovering laws from
BACON only works when
humans filter its data. Bacon
only works through its interaction
with scientists who filter its data
and thereby predetermine its

- identify original arguments by


19 Paul Ziff, 1959 These manipulations become mathematics only by of physics and the structure of the consciousness may be Quantum uncertainties are unimportant to the study of symbolic disputed data by applying heuristics, BACON has discovered
Preprogrammed robots can't new jokes would not dialectically in the same by by Computer systems exist that is results. If humans did not
simply be repeating when humans interpret them. way that humans do. brain, we will be able to precisely noncomputational and thought processes, because they occur at a low level of Kepler's law of planetary motion, Galileo's law of disputed constrain its data, it is doubtful that
have psychological states. 21 Paul Ziff, 1959 Note: An earlier version of this claim was made have reasoned as scientists is supported by
uniform acceleration, and Ohm's law of electrical
The record player argument. A robot "plays" its is jokes the programmer Thus, while they do not simulate the operation of the brain nonalgorithmic. organization and are averaged out before they can affect by BACON would produce any
Because they are programmed, by Ludwig Wittgenstein in the 1930s and with a computer. higher-level processes. do, proposing explanatory resistance.
robots have no psychological states behavior in the same way that a phonograph plays a disputed had entered into its intrinsically know how to hypotheses and choosing original science.
memory, but would published in Remarks on the Foundations of add, they can learn. Note: The history of BACON program is compex Supported by
of their own. They may act as if is supported by record. It is just programmed to behave in certain by Mathematics (1956). among them. and extends back into the 1960s.

over 380 protagonists world- they have psychological states, but ways. For example, "When we laugh at the joke of a be inventing jokes in "The Front-End Assumption Is
only because their programmers robot, we are really appreciating the wit of a human the same way humans Dubious," Box 75.
have psychological states and have programmer, and not the wit of the robot" (Putnam, do. Implemented Model
programmed the robots to act 1964, p. 679). 65 Fred Dretske, 1990
accordingly. The marijuana-sniffing dog. 124 B. G. Buchanan, D. H. Smith, W. C. White,

wide over 40 years


is supported by Computers can't have an adding thought is supported by R. Gritter, E. A. Feigenbaum, J. Ledergerg,
(much less have a more complex thought) sniff and C. Djerassi, 1976
23 Paul Ziff, 1959 24 Hilary Putnam, 1964 because the symbols being added don't DENDRAL. DENDRAL is an expert system that
The reprogramming argument. Humans can't Reprogramming is consistent with free will. The reprogramming argument fails to have any meaning to the computer, and analyzes and identifies chemical compounds by
is supported by be reprogrammed in the arbitrary way that robots show that robots lack free will for the following reasons. they don't have any meaning because they forming and testing hypotheses from experimental
• Humans can be reprogrammed without affecting their free will. For example, a criminal
don't play a causal role based on that meaning.

Are computers
can be. For instance, a robot can be programmed to data. Meta-DENDRAL, a component of DENDRAL,

- make the current frontier of act tired no matter what its physical state is, A trained dog, for example, will has discovered how to synthesize previously unknown
is might be reprogrammed into a good citizen via a brain operation, but he could still make free wag its tail when it smells marijuana, but
whereas a human normally becomes tired only after disputed decisions (perhaps, for example, deciding to become a criminal once again). chemical compounds as well as entirely new rules
(like a robot) it's only responding because
some kind of exertion. The actions of the robot by • Robots cannot always be arbitrarily reprogrammed in the way that the reprogramming it's been trained to do so, not because the of chemical analysis. It even has a publication to its
depend entirely on the whims of the programmer, argument suggests. For instance, if a robot is psychologically isomorphic to a human, it meaning of the smell causes it to wag its tail. credit.
whereas human behavior is self-determined. cannot be arbitrarily reprogrammed.
• Even if robots can be arbitrarily reprogrammed, this does not exclude them from having

debate easily identifiable 25 L. Jonathan Cohen, 1955


Computers do not choose
Put it over there.
free will. Such a robot may still produce spontaneous and unpredictable behavior.

inherently disabled? 88 Alan Turing, 1950


Disability arguments derive from
our limited experience with

Can computers draw


machines. Because the machines we've

- provide summaries of eleven


is supported by He has no mind of his That robot's been

Can computers be persons?


their own rules. We refer to seen are clunky, ugly, mechanical, and
people as "having no mind of their own right now. He's reprogrammed but
acting like a computer. it still acts spontaneously so forth, we assume that a machine could
own" when they only follow the never fall in love or enjoy strawberries
rules or commands of others. and unpredictably ...
87 Anticipated by Alan Turing, is and cream. But these are just bad
Computers are in a similar disputed inductions from a limited base of
1950

major philosophical camps of


situation. They are programmed

analogies?
The argument from by experience.
with rules and follow commands
without conscious choice. disabilities. Machines can never
Therefore, computers lack free do X, where X is any of a variety
will. of abilities that are regarded as
89 Anticipated by Alan Turing, 1950
is distinctly human, for example, You can't
Computers can't enjoy strawberries 90 Alan Turing, 1950 126 John Pollock, 1989 Personhood: Historical Background

the protagonists (or schools of


disputed being friendly, having a sense of do X. is
by humor, making mistakes, enjoying and cream. Computers will never possess Computers may be made to enjoy is An artificial person can be built. An Many contemporary and historical debates have dealt with the concept of personhood.
26 Joseph Rychlak, 1991 is supported by the human ability to enjoy strawberries and is strawberries and cream. Computers disputed disputed artificial person can be built from physical The abortion debate deals with the status of the fetus as a person. Animal rights
is supported by Computers can't do otherwise. An agent’s actions are free if the agent can do otherwise strawberries and cream, or by
thinking about oneself. cream. disputed might be made that will enjoy strawberries by ingredients provided it adequately models human theorists ask whether various species of animals are persons or not. The emancipation
than perform them. This means that an agent is free only if it can change its goals. But only
dialectical reasoning allows an agent to change its goals and thereby act freely. Because machines
? 69 David Chalmers, Robert French, Note: A great deal of the debate by and cream, but the only importance of this
would be to illuminate other issues, such as
125
Computers can't be persons. Machines can
rationality, which is the suitable structure
necessary for personhood.
of the slaves was won when the Supreme Court was convinced that African Americans
were people and not property.
represented on these maps are

thought).
are not capable of that kind of thinking, they are not free. 66 and Douglas Hofstadter, 1995 the possibility of friendship between man never be persons. They lack ethical status and
Note: Also, see the "Can physical symbol systems think dialectically?" arguments on Map 3. Computers can't understand analogies. Computers SME only draws analogies from prestructured forms of disability arguments— cannot bear responsibility for their actions. At best
arguments that machines can't be and machine. The question of whether robots are persons has been asked since at least the release
cannot understand analogical comparisons or metaphors. For representations. SME creates analogies using high- they can display personlike behavior.

How do I get a
is level representations that are structured with those specific creative, can't use analogies, can't Note: Many other arguments about computers of Karel Capek's play R.U.R. (Rossum's Universal Robots) in the 1920s. This play—
example, a machine could not understand the sentence, "She from which the name "robot" derives—is about the struggle of intelligent robots to
disputed ran the like the wind." analogies in mind. Its behavior provides no evidence of be conscious, and so forth—and being persons permeate the maps but have been 127 Selmer Bringsjord, 1992
by is
intelligence because the analogies it discovers are already so could also be thought of as placed in other regions to emphasize what specific gain their civil liberties.
Note: Analogy arguments are also discussed by George disputed Robots can do intelligent things but will never be
27 Selmer Bringsjord, 1992 Lakoff in the "Symbolic Data" arguments on Map 3. built into the data it works with. supports for this claim. aspect of machinehood or personhood is in question.
by persons. AI will eventually succeed in building robots that In the debate over artificial intelligence, personhood again becomes an issue, because
Free will yields an infinitude that finite machines can't reproduce. Unlike deterministic Supported by 92 Alan Turing, 1950 can behave intelligently but will never make robots that are
is supported by machines (e.g., Turing machines), persons can be in an infinite number of states in a finite period "The Front-End Assumption Is Dubious," Box 74. if computers are able to think, then their ethical status may have to be upgraded.
Computers can make certain kinds of mistakes. Those actually persons. Persons are genuine things (rather than Moreover, many artificial intelligence researchers hold the dream of creating artificial
of time. That infinite capacity allows persons to make decisions that machines could never make. who think computers can't make mistakes confuse errors of logical constructions) that bear psychological properties and
Note: Bringsjord's argument is fleshed out in the "Can automata think?" arguments on Map 7. life in the form of an artificial person, in part because the concept of intelligence is
functioning (errors that result from the physical construction of the that can bring about states of affairs in the world. closely related to the concept of personhood. Some think that a thinking computer
Also, see the "Can computers be persons?" arguments on this map. 70 David Chalmers, Robert French, machine) with errors of conclusion (errors that result from the
and Douglas Hofstadter, 1995 91 Anticipated by Alan Turing, Note: Bringsjord supports his claim with a wide range of would be straight-off a person, because we know that thinking is (to some degree)
1950 machine's reasoning process). It is true that machines can't commit is supported by arguments that are dispersed throughout the maps. See the "Can
Objects, attributes, and relations are too part of being a person. Some think that a robot cannot think unless it is a genuine
is Computers can't make errors of functioning if they are properly constructed. But machines computers have free will?" arguments on this map, and the "Can person, because otherwise there would be no "one" doing any thinking.
rigidly distinguished by SME. In order for can commit errors of conclusion, for example, by making faulty
disputed its analogical mappings to work, SME assumes a mistakes. Computers differ I can't go automata think?" arguments on Map 7.

printed copy?
by Implemented Model from humans in that humans wrong. inferences based on a lack of adequate information.
rigid distinction between objects, attributes, and
relations. But it is unclear whether humans can make mistakes, whereas
68 Brian Falkenhaimer, K. Forbus, and D. Gentner, 1990 computers can't. They are
SME. SME is a structure-mapping engine that discovers analogies is make such a rigid distinction. For example, we is supported by
sometimes conceptualize wealth as an object that easily unmasked in the Turing is All ravens
Legend between domains by a set of match rules. The analogies that result
are judged according to the criteria of clarity, richness, abstractness,
and systematicity. SME has found mappings between heat and water
disputed
by flows between people, but at other times we
conceptualize wealth as an attribute that changes
test, because humans would
frequently make mistakes in
disputed
by
are black. 128 Dwight Van De Vate Jr., 1971
A machine isn't a person
That ain't a
person like
That's just
some crummy
129 Anticipated by
Dwight Van De Vate Jr., 1971 130 Dwight Van De Vate Jr.,
67 is supported by with each transaction we make. complex arithmetic whereas unless society deems it one. A us! machine ... 1971
Computers have flow, solar systems and atoms, and in other domains. computers never do. machine or an individual is not a Machines can behave like
Focus Box: The lowest-numbered box in each issue area is an introductory focus box. 75 David Chalmers, Robert French, persons in the imitation Laboratory performance isn't
The arguments on these maps are organized by links that carry a range of meanings: understood analogy. and Douglas Hofstadter, 1995 Note: For more on the Turing test, is supported by person until society collectively enough for full reciprocity of
The focus box introduces and summarizes the core dispute of each issue area, sometimes Existing models have see Map 2. declares it one. This requires game. A machine could treat others
Implemented Model 71 David Chalmers, Robert French, and All-encompassing representations like a person and be treated like a person social behavior. A machine in
as an assumption and sometimes as a general claim with no particular author. discovered and Douglas Hofstadter, 1995 could not be processed. The all- having a gender, a flesh-and-blood a lab playing the imitation game is
understood analogies. body, the ability to feel pain, and so in an imitation game.
Arguments that uphold or defend another claim. Examples include: 72 Keith Holyoak and is SME's treatment of relations is too purpose representation that a front-end Note: Also, see the "Can the imitation not yet a person because it is not
disputed rigid. In SME, relations are treated as n-place module would provide to a computer forth. If a machine lacks any of really being treated like one. It's
supporting evidence, further argumentation, thought experiments, Arguments With No Authors: Arguments that are not attributable to a particular source is supported by Paul Thagard, 1989 these—if, for example, it is is game determine whether computers can
is supported by by predicates that can only be mapped to other model would have to encode a vast think?" arguments on Map 2. treated like an artifact in an
extensions or qualifications, and implemented models. ACME. ACME is a disputed is
(e.g., general philosophical positions, broad concepts, common tests in artificial intelligence) connectionist network that n-place predicates. For example, attraction is a amount of information, enough for it to disembodied and can't feel pain—it
by experiment, which we can unplug
won't be recognized as or treated as disputed
are listed with no accompanying author. discovers cross domain 2-place predicate that could be represented as adapt to all the various contexts and 93 Anticipated by and ignore as we see fit.
a person. by
is supported by analogical mappings. The "attracts (sun, planet)" and then mapped to analogies it might be used in. Such a is supported by Alan Turing, 1950 You see? I'm a human
ACME network uses structural, "attracts (nucleus, electron)." But it is unlikely representation would be too bulky for Computers can't just like you! I won the Let's go to
A charge made against another claim. Examples include: Citations: Complete bibliographic citations can be found in the booklet that accompanies semantic, and pragmatic that the human mind is so rigid in its treatment efficient processing. think about game, I fooled you every
is
disputed
logical negations, counterexamples, attacks on an argument's
emphasis, potential dangers an argument might raise, thought
this map. constraints to seek out those
mappings.
of relational mappings. themselves.
94 Alan Turing, 1950
is supported by time, I can do anything a
lunch.
...
by Computers cannot be human ...

You can order artist/researcher experiments, and implemented models. Methodology: A further discussion of argumentation analysis methodology can be found the object of their is Computers can be the subject of their own
is supported by
76 David Chalmers, Robert French, own thoughts. disputed thoughts. When a computer solves equations, the
in the booklet that accompanies this map. is and Douglas Hofstadter, 1995 by equations can be said to be the object of its thought.
Implemented Model disputed Perception depends on analogy. Similarly, when a computer is used to predict its own
by How we see things depends in part on behavior or to modify its own program, we can say that it is
is interpreted as Anticipated by Where this phrase appears in a box, it identifies a potential

signed copies of all seven maps


77 Douglas Hofstadter and what high-level analogical processes we the object of its own thoughts.
A distinctive reconfiguration of an earlier claim. attack on a previous argument that is raised by the author Melanie Mitchell, 1995 use. For example, Saddam Hussein will
COPYCAT. COPYCAT is a model be perceived quite differently depending you so I
so that it can be disputed. that discovers analogies using 3 on whether he is viewed as analogous to treat treat
components: (1) a "slipnet" of abstract That's not the same Sensory data Adolf Hitler (a ruthless aggressor) or to 95 Anticipated by 96 Alan Turing, 1950

from www.macrovu.com for As articulated by Where this phrase appears in a box, it identifies a reform- 73 David Chalmers, 74 David Chalmers, 131 Dwight Van De Vate Jr.,
Platonic concepts whose relations can as understanding that Robin Hood (a generous crusader). Alan Turing, Diversity of behavior depends only on storage I 1971
change as the model runs, (2) a Robert French, Robert French, and is supported by you
Unmapped Territory This icon indicates areas of argument that lie on or near the ulation of another author's argument. The reformulation and Douglas Hofstadter, Socrates is like a 1950 capacity. Great diversity of behavior is possible for Reciprocity of social
"workspace" of perceptual activity midwife. Douglas Hofstadter, Computers can't machines if they have large enough storage capacities.
Additional boundaries of the central issue areas mapped on these maps. is different enough from the original author's wording to that acts like a short-term memory, 1995 1995 Front end exhibit much The objection is based on the misconception that it is
behavior is required for
warrant the use of the tag. This phrase is also used when ACME doesn't understand (a (b )), The front-end like personhood. Persons must:
It marks regions of potential interest for future mapmakers • be capable of treating

$500.00 plus shipping and han-


arguments and (3) a "coderack" of agents that diversity of not possible for a machine to have much storage
are probabilistically selected to carry analogy. ACME's claim to (c (d )) ... is assumption is dubious. is like
and explorers. the original argument is impossible to locate other than in understand analogies is similar to Models that use
behavior. Humans
disputed
capacity. others like persons in a
out tasks in the workspace. can display much
its articulation by a later author (e.g., word of mouth), or overblown. All ACME does is (A (B )), preconfigured by variety of contexts;
to denote a general philosophical position that is given a
COPYCAT is neither a symbol
take algebraic sentences in (C (D )) representations and hand-
more diversity of
me • be treated like a person by
manipulator nor a connectionist is supported by ....Lady Buxleythatwas is supported by behavior than members of society in a

dling.
predicate logic notation and tailored data assume a you
special articulation by a particular author. network, though it draws on both
compare them. For example, near James.module
James Computer machines ever will. variety of contexts.
paradigms. Representations are not separate front-end
delivered hand-tailored to the model, it only understands that couldcaressed Lady
be built that would model
is
treat One of 7 in this Issue Mapping™ series—Get the rest!
"Socrates is like a midwife" to Buxley data
filter sensory with into
passion.
the dispu treat
The Issue Mapping™ series is published by MacroVU Press, a division of MacroVU, Inc. but are built up through fluid ted
© 1998 R. E. Horn. All rights reserved. interactions between low-level and the extent that it understands model's representational you The remaining 6 maps in this Issue Mapping™ series can be ordered with MasterCard,
MacroVU is a registered trademark of MacroVU, Inc. Issue Map and Issue Mapping are trademarks me
high-level components. that "(a(b)), (c(d)) ... is similar form. James was Lady by so
VISA, check, or money order. Order by phone (206–780–9612), by fax (206–842–0296),
Version 1.0 of Robert E. Horn. to (A(B)), (C(D))."
or through the mail (Box 366, 321 High School Rd. NE, Bainbridge Island, WA 98110).

Das könnte Ihnen auch gefallen