
Philosophical Foundations

Chapter 26

Searle v. Dreyfus argument


Dreyfus argues that computers will never be
able to simulate intelligence
Searle, on the other hand, allows that computers
may someday be able to pass the Turing test
[they can simulate human intelligence]

However, does this mean that computers are
intelligent? Is simulation duplication?
Attack on the Turing test
Directed against strong AI

Simulation v. duplication
No one would suppose that we could produce
milk and sugar by running a computer
simulation of the formal sequences in
lactation and photosynthesis
No one supposes that computer simulations of
a five-alarm fire will burn the neighborhood
down or that a computer simulation of a
rainstorm will leave us all drenched

Motivation

The Turing test is an inadequate determination of
intelligence [only deals with simulation]
The Turing test is an example of behaviorism

states are defined by how they make people act

happiness is acting happy


love is acting in a loving manner
intelligence is acting in an intelligent manner
to understand the word "knife" means to be able to use it

But behaviorism is inadequate

since love & happiness are more than simply the way in
which a person acts, so too must intelligence be
to be x, a person must be in the correct internal state

Is behaviorism plausible?
Dualism is the belief that there are two substances
that make up human beings: minds & bodies
These two substances are absolutely different &
incompatible
Thus, to understand the mind we need not concern
ourselves with the body

the mind can be abstracted from its implementation in
the brain [behaviorism]

Does AI thus subscribe to dualism?

Dualism is rejected by most philosophers

Alternative to dualism
Biological naturalism says that
consciousness, intentionality, etc. are
caused/produced by the brain in the same
way that bile is produced by the liver

there thus aren't two substances

rather, the so-called mental phenomena are
simply results of the physical processes
realism?

There is something essentially biological
about the human mind

Argument
To show behaviorism is inadequate for
understanding/consciousness, Searle designed a famous
thought experiment in which he is locked in a room
Under the door are slipped various Chinese characters
which he does not understand
In the room with him is a rule set (in English) that tells
him how to manipulate the characters that come under
the door and what characters to slip back under the
door, and a pad of paper for making intermediate
calculations

Argument continued
The Chinese characters slipped under the door
are called "stories" and "questions" by the
people providing them
The characters that Searle returns to the
outside world are called "answers"
The answers perfectly answer the questions
about the stories that he was given
To an outside observer, it appears that Searle
understands Chinese!

Argument concluded
However, it is manifest [given] that Searle
doesn't understand the stories, the questions,
or the answers he is giving

he doesn't understand Chinese!

Thus, since intelligence requires a state of
understanding (the story must mean
something to you), Searle can't be said to
understand Chinese although he gives the
correct answers

correct input/output, but no understanding
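The room's rule-following can be sketched as a program (a minimal, hypothetical illustration; the strings and names are invented, not Searle's):

```python
# A minimal sketch of the Chinese Room as a program: the rule book is
# just a lookup table over opaque symbol strings.  The program emits
# the "correct" answer for each question without any representation
# of what the symbols mean.
RULE_BOOK = {
    "你好吗": "我很好",          # rule: when this shape arrives, emit that shape
    "天空是什么颜色": "蓝色",
}

def chinese_room(question: str) -> str:
    """Purely syntactic manipulation: match the input shape, copy out
    the stored output shape.  No step consults any meaning."""
    return RULE_BOOK.get(question, "不知道")  # fallback symbol, equally opaque

print(chinese_room("你好吗"))  # a correct reply, with no understanding anywhere
```

To an outside observer the outputs are perfect; inside, there is only table lookup, which is exactly the point of the thought experiment.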

Conclusions
Similarly, just because a computer can produce
the correct answers doesn't mean that it is
intelligent
Merely manipulating meaningless symbols is
inadequate for intelligence; a state of
intelligence (intentionality) is also needed

what does it mean when I say "x is intelligent"?

problems with the behaviorist definition

Thus, a computer can pass the Turing test and
still not be said to be intelligent

Abstracting the argument [givens]

Brains cause minds [empirical fact]
Syntax [formalism] is not sufficient for semantics
[contents]

syntax & semantics are qualitatively different aspects & no
quantitative increase of the former will ever produce the latter

Computer programs are entirely defined by their formal,
or syntactical, structure [definition]

the symbols have no meaning; they have no semantic content;
they are not about anything

Minds have mental contents; specifically, they have
semantic contents [empirical fact]

Conclusions I
No program by itself is sufficient to give a
system a mind. Programs, in short, are not
minds, and they are not by themselves
sufficient for having minds
The way that brain functions cause minds
cannot be solely in virtue of running a
computer program

Conclusions II
Anything else that caused minds would have
to have causal powers at least equivalent to
those of the brain
For any artifact that we might build which had
mental states equivalent to human mental
states, the implementation of a computer
program would not by itself be sufficient.
Rather the artifact would have to have powers
equivalent to the powers of the human brain

The expected conclusion

The brain has the causal power to give rise to
intentional [semantic] states
Computer programs can't give rise to intentional
[semantic] states since they're only syntax
Thus, computer programs are not of the same
causal power as brains
Thus, computer programs can't give rise to the
mind & consciousness

Objections
Systems reply

Russell & Norvig

Robot reply
Brain simulation reply
Other minds reply

Systems reply
Objection: Perhaps not the man in the room,
nor the rules in English, nor the scratch paper
understand anything, but the system taken as
a whole can be said to understand
Answer: Put the room within a single person

make the person memorize the rules, etc.
thus, there is no system distinct from the person
the person still can't be said to understand
syntactic information-processing sub-systems
can't give rise to semantic content [can't be
called intelligent]

Information processing
Further, it seems that if all we are requiring for
intelligence is information processing, then
everything can be seen as doing information
processing
But this leads to an absurd consequence

we don't want to say that the stomach or a
thunderstorm is intelligent
the stomach takes in something [food], processes it
[digests it], and puts something out [energy]
but if our definition of intelligence is that it is
information processing, why isn't the stomach
intelligent?
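The point that the bare "information processing" criterion fits almost anything can be sketched as follows (the function names and outputs are hypothetical placeholders, chosen only to show the shared formal shape):

```python
# By the bare criterion "intelligence = information processing", any
# input -> transform -> output system qualifies.  Both functions below
# have exactly the same formal shape.
def stomach(food: str) -> str:
    """Takes input, transforms it by a fixed rule, emits output --
    formally indistinguishable from 'processing information'."""
    return f"energy({food})"

def chatbot(question: str) -> str:
    """Same formal shape as the stomach: input in, output out."""
    return f"answer({question})"

# Nothing in the input/output shape alone distinguishes the two,
# so the shape alone cannot be what makes something intelligent.
print(stomach("bread"), chatbot("why?"))
```

The sketch shows why the behaviorist/information-processing definition overgenerates: it cannot separate digestion from conversation without appealing to something beyond input/output structure.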

Russell & Norvig


Certain kinds of objects are incapable of conscious understanding
(of Chinese)
The human, paper, and rule book are of this kind
If each of a set of objects is incapable of conscious
understanding, then any system constructed from the objects is
incapable of conscious understanding
Therefore, there is no conscious understanding in the Chinese
room [as a whole]
But molecules, which make up brains, also have no
understanding, so by this argument brains could not
understand either [cf. brain simulation reply]

Robot reply
Objection: If a robot were perceiving & acting
in the world, then it would be intelligent

intentionality arises from being in a world

Answer: Put the Chinese room in the robot's
head

give the robot's perceptions to the Chinese room as
Chinese characters & give the directions to the
robot in terms of Chinese characters
we are in the same spot we were before: no
intentionality because everything is still happening
formally

Brain simulation reply


Objection: Simulate the actual sequence of
neuron firings at the synapses of the brain of a
native Chinese speaker when he understands
stories in Chinese and gives replies to them
Answer: This simulates the wrong things about
the brain

As long as it simulates only the formal structure of
the sequence of neuron firings at the synapses, it
won't have simulated what matters about the brain,
namely its causal properties, its ability to produce
intentional states

Other minds reply

Objection: How do we know someone
understands Chinese? Only by their behavior
Answer: The problem in this discussion is not
about how I know that other people have
cognitive states, but rather what it is that I am
attributing to them when I attribute cognitive
states to them

The thrust of the argument is that it couldn't be
just computational processes and their output,
because the computational processes and their
output can exist without the cognitive state

Minds & machines


Machines can think; we ourselves are machines!
However, computational processes over
formally defined elements are insufficient

i.e., a computer program is insufficient

formal elements can't give rise to intentional states
they can only give rise to the next state in the
computational device
only syntax, no semantics
interpretation is in the eye of the beholder

Meaning of the Chinese Room


Point of the Chinese room example: adding a formal
system doesn't suddenly make the man understand
Chinese

The formal system doesn't endow the man with intentionality
vis-à-vis Chinese

Why would we expect it to endow a computer with
intentionality?

E.g., computers don't know that "4" means four

only more & more symbols; never grounds out
in meaning
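This symbol-grounding point can be sketched as a program (a hypothetical illustration; the dictionary and its entries are invented for the sketch):

```python
# Inside the machine, a symbol is only ever defined by more symbols.
# Chasing definitions never "grounds out" in meaning; it just walks
# from token to token, here around a circle.
DEFINITIONS = {
    "4": "four",
    "four": "IV",
    "IV": "4",
}

def chase(symbol: str, steps: int = 5) -> list[str]:
    """Follow definitions for a few steps; every stop is another symbol."""
    trail = [symbol]
    for _ in range(steps):
        nxt = DEFINITIONS.get(trail[-1])
        if nxt is None:
            break
        trail.append(nxt)
    return trail

print(chase("4"))  # the trail cycles through symbols; it never arrives at meaning
```

However long the chase runs, every element of the trail is just another uninterpreted token, which is Searle's point about syntax never yielding semantics.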

Conclusion
We (I) don't understand what
consciousness or self-awareness is
If it flies like a duck, swims like a duck,
walks like a duck, and quacks like a duck . . .