1. As Jerome Kagan defines it in his paper “In Defense of Qualitative Changes in Development” (2008), a
schema is “a representation of the patterned perceptual features of an event… The young infant’s
representation of a caretaker’s face is an obvious example. The primary features of schemata for visual
events are spatial extent, shape, color, contrast, motion, symmetry, angularity, circularity, and density of
contour.”
…actively compared to another and similarities or differences between them are noted.” More
specifically,
Eric Margolis and Stephen Laurence’s article “In Defense of Nativism” (2013) makes you
seriously wonder how anyone could be perverse enough to oppose nativism (i.e. a kind of
rationalism, all the Chomskian stuff about innateness, poverty of the stimulus, the modularity of
the mind, etc.). Here’s an interesting piece of evidence:
One especially vivid type of poverty of the stimulus argument draws upon the
results of isolation experiments. These are empirical studies in which, by design,
the experimental subjects are removed from all stimuli that are related to a normally
acquired trait. For example, Irenäus Eibl-Eibesfeldt showed that squirrels raised in
isolation from other squirrels, and without any solid objects to handle,
spontaneously engage in the stereotypical squirrel digging and burying behavior
when eventually given nuts. Eibl-Eibesfeldt notes that, “the stereotypy of the
movement becomes particularly obvious in captivity, where inexperienced animals
will try to dig a hole in the solid floor of a room, where no hole can be dug. They
perform all the movements already described, covering and patting the nut, even
though there is no earth available.” Since the squirrels were kept apart from their
conspecifics, they had no exposure to this stereotypical behavior prior to exhibiting
it themselves—the stimulus was about as impoverished as can be. Reliable…
Are humans supposed to be the only species that isn’t “programmed” to exhibit specific types of
behavior and acquire specific types of knowledge??
Apparently the name of that argument is “the argument from animals”: “The argument
from animals is grounded in the firmly established fact that animals have a plethora of specialized
learning systems… But human beings are animals too.” What’s true of other animals should be
true of us. And, indeed, it is: just as other animals tend to be pretty stupid in many ways, human
beings tend to be pretty stupid too, as shown by the fact that large numbers of them are empiricists.
Being sensible, the authors argue that some concepts, too, are, loosely speaking, innate.
“Likely contenders include concepts associated with objects, causality, space, time, and number,
concepts associated with goals, functions, agency, and meta-cognitive thinking, basic logical
concepts, concepts associated with movement, direction, events, and manner of change, and
concepts associated with predators, prey, food, danger, sex, kinship, status, dominance, norms, and
morality.” Yes! I love bold statements like this, especially when they’re highly plausible.
From Alison Gopnik’s “The Post-Piaget Era” (1996): “Piaget’s thesis that there are broad-
ranging, general stages of development seems increasingly implausible. Instead, cognitive
development appears to be quite specific to particular domains of knowledge. Moreover, children
consistently prove to have certain cognitive abilities at a much earlier age than Piaget proposed, at
least in some domains. Also contra Piaget, even newborn infants have rich and abstract
representations of some aspects of the world. Moreover, Piaget underestimated the importance of
social interaction and language in cognitive development. Finally, Piaget’s account of assimilation
and accommodation as the basic constructivist mechanisms now seems too vague.” I’m skeptical
of neo-Piagetian constructivism, such as “the theory theory,” “the idea that cognitive development
is the result of the same mechanisms that lead to theory change in science.” Evidence,
counterevidence, testing of the theory (or representation), the invention of auxiliary hypotheses,
the construction of a new theory, etc. The child’s brain is like a scientist revising theories in the
light of evidence. In a loose sense, this must be true: children constantly ask questions, they
interpret and try to understand the world, they learn to make inferences about what’s going on in
other people’s heads, they revise their beliefs, and so on. In fact, humans continue to do this
throughout life. Organized science is but a more sophisticated version of cognitive processes (of
inquisitiveness, ‘theory’-formation, informal testing) we frequently engage in even without being
reflectively aware of it. But there’s at least one obvious difference between science, even folk
science, and early cognitive maturation: the former is a highly conscious activity, while the latter
takes place even before the young child is (self-)conscious at all or remotely capable of advanced
conceptualizing. The infant’s and toddler’s brain unconsciously constructs representations of the
world, not through conscious reflective processes but through utterly mysterious, natural,
‘spontaneous’ [to use a misleading word] perceptual-cognitive-neural processes that ‘grow’
unbelievably complex and ordered structures of vision, object perception, linguistic syntax and
semantics, musical sensitivity, etc. Such processes of even physical brain-growth—neurons
forming trillions of connections with each other so as eventually to build stable, coherent, and
integrated representations of the world—appear to have virtually nothing in common with
scientists’ self-conscious construction of theories.2
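To make the “revising beliefs in the light of evidence” idea concrete for myself, here is a toy Bayesian sketch (my own illustration; the theory theorists’ actual models are far richer than this):

```python
# Toy belief revision: weigh two hypotheses about a coin and update the
# degree of belief in one of them as flips come in (Bayes' rule). Only a
# caricature of "revising theories in the light of evidence", of course.

def update(prior, likelihood_h, likelihood_alt):
    """Posterior probability of hypothesis H after one observation."""
    numerator = prior * likelihood_h
    return numerator / (numerator + (1 - prior) * likelihood_alt)

# H: "the coin is biased toward heads (P(heads) = 0.8)"
# Alternative: "the coin is fair (P(heads) = 0.5)"
belief = 0.5  # start undecided
for flip in ["H", "H", "T", "H", "H"]:
    p_h = 0.8 if flip == "H" else 0.2  # likelihood of this flip under H
    p_alt = 0.5                        # likelihood under the fair coin
    belief = update(belief, p_h, p_alt)

print(round(belief, 3))  # → 0.724: four heads in five flips favor the bias
```

Even a child obviously does nothing so explicit; the question is whether this kind of evidence-weighing is a good model of what the brain does implicitly.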
Much more plausible than the theory theory is Fodor’s (nativist) modularity conception of
the mind, or Chomsky’s principles and parameters approach.
2. In another paper, Gopnik observes that “to many philosophers the very idea that children could be
employing the same learning mechanisms as scientists generates a reaction of shocked, even indignant,
incredulity.” She continues: “For some reason, I’ve found this initial reaction to be stronger in philosophers
than in psychologists or, especially, practicing scientists, who seem to find the idea appealing and even
complimentary.” Hm. Maybe philosophers do, after all, tend to be relatively perceptive thinkers?
Personally, I find it hard to fathom how someone could be so wrongheaded as to think that infants—whose
brains are, in a sense, forming!—and scientists are using the very same learning mechanisms. Incidentally,
here’s another example of stupidity: some of Chomsky’s critics have objected that language is socially
constructed, part of a rich social context, not an individual psychological phenomenon. Half a moment’s
reflection should have suggested to them that even if language does depend crucially on social interactions,
it “nevertheless ultimately depends on individual cognitive processes in individual human minds,” to quote
Gopnik. The idea that Chomsky’s “Cartesianism” forbids recognition of the central importance of
interactions with others is, well, stunningly dumb.
And then in Nora S. Newcombe’s paper “The Nativist-Empiricist Controversy in the
Context of Recent Research on Spatial and Quantitative Development” (2002), you have the claim
that “the key question in the nativist-interactionist controversy is whether environmental
interactions are or are not required to produce mature competence. Nativists believe that adaptation
was ensured by highly specific built-in programs, whereas interactionists suggest that inborn
abilities do not need to be highly specific because, for central areas of functioning, the
environmental input required for adaptive development is overwhelmingly likely. Evidence favors
the latter solution to the adaptational problem.” Of course. You need some sort of interaction with
the environment in order for capacities to mature in the way they’re ‘programmed’ to do! I don’t
see how that contradicts nativism.
Reading articles on David Marr’s theories of vision. Not very easy. But stimulating and
plausible. (Three levels: computation, algorithm, implementation. I don’t see how the whole
computational-representational conception of the mind, at least on some interpretation, can fail to
be true. In order for the human being to interact with the environment, the brain has to deal with
various levels and types of representations, and it has to carry out complex computations on the
raw data it receives, so as to work up the data into something usable. Mathematics is the language
of nature; calculations (computations) are how the brain processes and interprets the nature it
‘receives’ through the senses. (E.g., in order for us to walk, the brain has to constantly calculate
where external objects are in relation to the body.) Maybe I’m just irredeemably stupid, but I don’t
see what real alternative there is to this picture. I have to admit I don’t have the foggiest idea of
how neural activities could carry out calculations, or even what it means to say they do, but clearly
they must. Actually, the whole thing is one big goddamn mess, and scientists/philosophers haven’t
even got clear on the basics yet.)
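To make the walking example concrete for myself: “calculating where external objects are in relation to the body” is, at minimum, a coordinate transformation. A toy sketch (mine, and obviously nothing like the neural implementation):

```python
import math

# Crude illustration of the kind of computation the walking example gestures
# at: converting an object's position in world coordinates into coordinates
# relative to the body (translate by body position, rotate by heading).

def world_to_body(obj_xy, body_xy, heading_rad):
    """Object position in a body-centered frame (x = ahead, y = left)."""
    dx = obj_xy[0] - body_xy[0]
    dy = obj_xy[1] - body_xy[1]
    cos_h, sin_h = math.cos(heading_rad), math.sin(heading_rad)
    ahead = dx * cos_h + dy * sin_h
    left = -dx * sin_h + dy * cos_h
    return ahead, left

# Body at (2, 1) facing "east" (heading 0): an object at (5, 1) is 3 units
# straight ahead and 0 units to the side.
print(world_to_body((5, 1), (2, 1), 0.0))  # → (3.0, 0.0)
```

The brain presumably has to perform something functionally equivalent to this, continuously, for every object it tracks.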
In short, “Understanding how the brain works requires an understanding of the rudiments of
information theory, because what the brain deals with is information.”
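A minimal taste of those rudiments, for my own benefit: Shannon entropy, the standard measure of how much information a source carries. (My illustration, not from the book.)

```python
import math

# Shannon entropy: average information (in bits) per symbol of a source.
# A fair coin carries 1 bit per flip; a predictable, biased one carries less.

def entropy_bits(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy_bits([0.5, 0.5]))  # → 1.0 (fair coin)
print(entropy_bits([0.9, 0.1]))  # → ~0.469 (biased coin, less informative)
```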
“For decades, neuroscientists have debated the question whether neural communication is
analog or digital or both, and whether it matters… Our hunch is that information transmission and
processing in the brain is likewise ultimately digital. A guiding conviction of ours – by no means
generally shared in the neuroscience community – is that brains do close to the best possible job
with the problems they routinely solve, given the physical constraints on their operation. Doing
the best possible job suggests doing it digitally, because that is the best solution to the ubiquitous
problems of noise, efficiency of transmission, and precision control.” They make this observation
because the modern theory of computation is cast entirely in digital terms (digital means
information is carried by a set of discrete symbols, rather than something continuously variable),
so in order for the theory to be relevant to the brain, the brain has to work digitally. Incidentally,
the genetic coding of information is a digital, not an analog, system.
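Their point about noise can be made with a toy simulation (mine, not theirs): relay a signal through a chain of noisy stages. The analog copy accumulates error; the digital copy is re-thresholded, i.e. regenerated, at every stage.

```python
import random

# Toy version of the noise argument for digital coding: pass a signal
# through a chain of noisy relays. The analog value accumulates error,
# while the digital value is restored to a clean 0 or 1 at every stage.

random.seed(1)  # fixed seed so the run is repeatable

def relay_chain(value, stages, noise_sd, digital):
    for _ in range(stages):
        value += random.gauss(0, noise_sd)       # noise at each relay
        if digital:
            value = 1.0 if value > 0.5 else 0.0  # regenerate the symbol
    return value

analog = relay_chain(1.0, stages=50, noise_sd=0.05, digital=False)
digital = relay_chain(1.0, stages=50, noise_sd=0.05, digital=True)
print(round(analog, 3), digital)  # analog has drifted; digital is still 1.0
```

This is the same logic behind digital audio surviving copying while cassette tapes degrade, and (if the authors are right) behind the brain’s solution to its own noise problem.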
Sometimes I like to just sit in awe. The brain has to interpret the world in such a way that
the organism can interact with its environment. So what it does, necessarily, is to give an
appearance to something that in fact, in itself, has no appearance. What perceptual appearances
are (of course) is just a vast amount of information about the world, information that enables us to
survive. Information presented in the form of colors, sounds, tastes, smells, etc. There are
structures in the world that correspond to what we perceive, while yet being so radically different
as to lack the defining element of what we perceive, namely appearance. (Our complete incapacity
to imagine something that (in itself) lacks an appearance only shows how inadequate our cognitive
apparatus is to truly understand the world.) Anyway, I just find it mind-bending to contemplate
that the brain, by carrying out computations on gigabytes of data it continuously receives, gives an
appearance to the world. From billions of action potentials you get…a breathtaking sunset.
So far, most of the book I find mind-numbing and almost incomprehensible. All the details
about information theory, functions, symbolization, all the mathematics…my brain doesn’t
‘function’ in this way. Here’s a refreshing and useful paragraph: …
Useful paper: “Doing Cognitive Neuroscience: A Third Way” (2006), by Frances Egan and
Robert J. Matthews. Defends the dynamic-systems approach Gallistel criticizes, but has
information on two other dominant paradigms as well. The first is what most neuroscientists do:
study what neurons are doing, what goes on biologically at the lowest levels. This is what Gallistel,
David Marr, Chomsky, and other cognitive scientists criticize on the following grounds: “The
persistent complaints of computational theorists such as Marr seem borne out, that merely mucking
around with low-level detail is unlikely to eventuate in any understanding of complex cognitive
processes, for the simple reason that bottom-up theorists don’t know what they are looking for and
thus are quite unlikely to find anything. Sometimes the problem here is presented as a search
problem: How likely it is that one can find what one is looking for, given the complexity of the
brain, unless one already knows, or at least has some idea, what one is looking for. At other times
the problem is presented less as an epistemological problem than as a metaphysical problem:
cognitive processes, it is claimed, are genuinely emergent out of the low level processes, such that
no amount of understanding of the low-level processes will lead to an understanding of the
complex cognitive processes.” That seems plausible to me.
As Marr said, “trying to understand perception by studying only neurons is like trying to
understand bird flight by studying only feathers. It simply cannot be done.” The neural level only
implements higher-level cognitive principles and operations. So the other approach, of course, is
Marr and Gallistel’s, the top-down one. The authors of the article criticize these “cognitivists,” on
the other hand, for assuming that their account of mental processes will be found someday to
“smoothly reduce to neuroscience”—or rather that neural mechanisms of implementation will be
discovered—which Egan and Matthews think is false.
If both paradigms are mistaken, there must be a third one, ‘between’ the two. “The neural
dynamic systems approach is based on the idea that complex systems can only be understood by
finding a mathematical characterization of how their behavior emerges from the interaction of their
parts over time.” One possible manifestation of the dynamic systems approach is dynamic causal
modeling. “DCM undertakes to construct a realistic model of neural function, by modeling the
causal interaction among anatomically defined cortical regions (such as various Brodmann’s areas,
the inferior temporal fusiform gyrus, Broca’s area, etc.), based largely on fMRI data. The idea is
to develop a time-dependent dynamic model of the activation of the cortical regions implicated in
specific cognitive tasks. In effect, DCM views the brain as a dynamic system, consisting of a set
of structures that interact causally with each other in a spatially and temporally specific fashion.
These structures undergo certain time-dependent dynamic changes in activation and connectivity
in response to perturbation by sensory stimuli, where the precise character of these changes is
sensitive not only to these sensory perturbations but also to specific non-stimulus contextual inputs
such as attention. DCM describes the dynamics of these changes by means of a set of dynamical
equations, analogous to the dynamical equations that describe the dynamic behavior of other
physical systems such as fluids in response to perturbations of these systems.”
And so on, in the same vein. Eventually: “Let us summarize, briefly, some implications of
the neural dynamics systems approach, as illustrated by DCM, for the problem of explaining
cognition. Notice first, and most importantly, it explains cognition in the sense that it gives an
account of the phenomena in neural terms, emphasizing a break with the ‘cognitivist’ assumption
that to be an explanation of cognition is necessarily to be a cognitive explanation, where by the
latter one means an explanation that traffics in the usual semantic and intentional internals [internal
states] dear to cognitivists. Second, the account leaves open the question of whether there is any
interesting mapping between cognitive explanations and neural explanations of cognitive
phenomena—maybe there is, but then again maybe not…” But if supposedly you can explain
cognition without mentioning representations, computations, and all that cognitivist stuff…well
then is cognitive science totally misguided and unnecessary? That hardly seems possible, given its
successes. If there isn’t a way to bridge the divide between neuroscience and cognitivism, I don’t
see how we’ll ever really understand the cognitive. But I guess this is just another manifestation
of the mind-body problem, and as such might be insuperable.
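The “set of dynamical equations” mentioned in the DCM quotation can at least be given a toy flavor: a handful of regions whose activity evolves as dx/dt = Ax + Cu, with A the connectivity and u a stimulus. (Real DCM uses a bilinear model fitted to fMRI data; this sketch of mine just does Euler integration on made-up numbers.)

```python
# A toy with the flavor of DCM's dynamical equations: activity x of a few
# coupled "regions" evolving as dx/dt = A·x + C·u, where A encodes the
# causal connectivity and u is a sensory input. Real DCM is a bilinear
# model fitted to fMRI data; this just integrates made-up numbers (Euler).

def step(x, u, A, C, dt=0.01):
    n = len(x)
    dx = [sum(A[i][j] * x[j] for j in range(n)) + C[i] * u for i in range(n)]
    return [x[i] + dt * dx[i] for i in range(n)]

A = [[-1.0, 0.0],   # region 0: decays on its own, driven by the stimulus
     [0.8, -1.0]]   # region 1: decays on its own, excited by region 0
C = [1.0, 0.0]      # the stimulus perturbs region 0 only

x = [0.0, 0.0]
for _ in range(500):  # 5 seconds of sustained stimulation
    x = step(x, u=1.0, A=A, C=C)
print([round(v, 2) for v in x])  # activity has spread from region 0 to region 1
```

Fitting the entries of A to imaging data is what turns this from a toy into a claim about which regions causally drive which.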
Here’s something interesting from Gallistel’s book: apparently it’s plausible that the brain
is able to store a trillion gigabytes of memory, “the equivalent of well over 100 billion DVDs.”
Gives you some idea of only one of the differences between human brains and the most powerful
modern computers.
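The DVD comparison is just arithmetic (assuming standard 4.7 GB single-layer DVDs, which is my assumption, not the book’s stated figure):

```python
# Sanity check on the DVD comparison: a trillion gigabytes divided by the
# capacity of a standard single-layer DVD (4.7 GB, my assumption).
brain_estimate_gb = 1e12
dvd_gb = 4.7
dvds = brain_estimate_gb / dvd_gb
print(f"{dvds:.1e} DVDs")  # → 2.1e+11, i.e. roughly 213 billion DVDs
```

So “well over 100 billion DVDs” checks out, with room to spare.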
Nice: “We suspect that part of the motivation for the reiterated claim that the brain is not a
computer or that it does not ‘really’ compute is that it allows behavioral and cognitive
neuroscientists to stay in their comfort zone; it allows them to avoid mastering…confusion [about
all things computational], to avoid coming to terms with what we believe to be the ineluctable
logic of physically realized computation.” I’m sure that’s right. Computer science is damn
difficult, and people who have chosen to spend their careers on something different don’t want to
have to get into it. (I doubt Gallistel is popular among neuroscientists. “He’s challenging the way
we do things! Criticizing us! Interloper!”)
Summing up chapter 10: “A mechanism for carrying information forward in time is
indispensable in the fabrication of an effective computing machine. Contemporary neurobiology
has yet to glimpse a mechanism at all well suited to the physical realization of this function.” So
neurobiology is very far from understanding memory. It doesn’t even have an inkling of the
mechanisms yet (much less a fleshed-out understanding from a macro perspective).
Chapter 11 is about the nature of learning. Currently (in cognitive science and its related
disciplines) there are two different conceptual frameworks for thinking about learning. “In the first
story about the nature of learning, which is by far the more popular one, particularly in
neurobiologically oriented circles, learning is the rewiring by experience of a plastic brain so as to
make the operation of that brain better suited to the environment in which it finds itself. In the
second story, learning is the extraction from experience of information about the world that is
carried forward in memory to inform subsequent behavior. In the first story, the brain has the
functional architecture of a neural network. In the second story, it has the functional architecture
of a Turing machine.” Needless to say, the authors favor the second story.
They have a nice little summary of the philosophical/psychological doctrine of
associationism, and later behaviorism, that provides the historical and theoretical context for the
first story about the nature of learning:
…the one idea to the other. When the first idea was aroused, it aroused the second by
way of the associative connection that experience had forged between the two ideas.
Moreover, he argued, the associative process forged complex ideas out of simple
ideas. Concepts like motherhood were clusters of simple ideas (sense impressions)
that had become strongly associated with each other through repeated experience
of their co-occurrence.
There is enormous intuitive appeal to this concept. It has endured for
centuries. It is as popular today as it was in Locke’s day, probably more popular…
The influence on Pavlov of this well-known line of philosophical thought
was straightforward. He translated the doctrine of learning by association into a
physiological hypothesis. He assumed that it was the temporal proximity between
the sound of food being prepared and the delivery of food that caused the rewiring
of the reflex system, the formation of new connections between neurons. These new
connections between neurons are the physiological embodiment of the
psychological and philosophical concept of an association between ideas or, as we
would now say, concepts. He set out to vary systematically the conditions of this
temporal pairing – how close in time the neutral (sound) stimulus [like a bell
ringing] and the innately active food stimulus had to be, how often they had to co-
occur, the effects of other stimuli present, and so on. In so doing, he gave birth to
the study of what is now called Pavlovian conditioning. The essence of this
approach to learning is the arranging of a predictive relation between two or more
stimuli and the study of the behavioral changes that follow the repeated experience
of this relationship. These changes are imagined to be the consequence of some
kind of rewiring…
The theory of rewiring isn’t representational. There is no place for symbolic memory, which means
no mechanism for carrying forward in time the information gleaned from past experience.
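The associative story has a standard formal dress that is worth writing down: the Rescorla-Wagner rule (my gloss, not from the book; and note that it illustrates exactly the point above, since nothing in it keeps a symbolic record of past trials).

```python
# The Rescorla-Wagner rule, the standard formalization of associative
# learning: on each paired trial (sound + food), associative strength V
# moves toward its maximum lam by a fraction alpha of the remaining
# "surprise" (lam - V). No symbolic memory of past trials is kept; the
# whole history is compressed into the single current strength V.

def condition(trials, alpha=0.3, lam=1.0):
    v = 0.0
    history = []
    for _ in range(trials):
        v += alpha * (lam - v)  # surprise-driven increment
        history.append(round(v, 3))
    return history

print(condition(5))  # → [0.3, 0.51, 0.657, 0.76, 0.832]
```

The negatively accelerated learning curve it produces is the classic conditioning curve; the authors’ complaint is that a single scalar strength carries no information forward for later computation.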
The authors argue, on the other hand, that “learning is the extraction from experience of
symbolic representations, which are carried forward in time by a symbolic memory mechanism,
until such time as they are needed in a behavior-determining computation.” They argue for this by
considering the phenomenon of dead reckoning, a simple computation that (from behavioral
evidence) is almost certainly implemented in the brains of a wide variety of animals.
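Dead reckoning is easy to state computationally, which is presumably why it makes such a clean test case: integrate successive moves into a running position, and the “home vector” falls out for free. A sketch (mine, purely illustrative):

```python
import math

# Dead reckoning (path integration): from a sequence of (heading, distance)
# moves, keep a running position estimate; the "home vector" is just that
# estimate negated. This is the computation the behavioral evidence points
# to, whatever the neural realization turns out to be.

def dead_reckon(moves):
    x = y = 0.0
    for heading_deg, dist in moves:
        h = math.radians(heading_deg)
        x += dist * math.cos(h)
        y += dist * math.sin(h)
    return x, y

# An ant walks 10 units east, 10 north, then 10 west...
x, y = dead_reckon([(0, 10), (90, 10), (180, 10)])
home = (round(-x, 6), round(-y, 6))
print(home)  # the home vector: 10 units due south
```

The point of the example is that the computation requires carrying a quantity (the running position) forward in time and updating it, i.e. exactly the symbolic memory the rewiring story lacks.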
In a later chapter they also take issue with the popular, empiricist-inspired idea that there’s
a single basic learning mechanism—a “general-purpose learning process”—or at any rate a small
number of such mechanisms. They prefer the modular hypothesis rooted in zoology and
evolutionary biology, the hypothesis that there are adaptive specializations. “Each such
mechanism constitutes a particular solution to a particular problem. The foliated structure of the
lung reflects its role as the organ of gas exchange, and so does the specialized structure of the
tissue that lines it. The structure of the hemoglobin molecule reflects its function as an oxygen
carrier…” Adaptive specialization of mechanism is ubiquitous and obvious in biology, simply
taken for granted. From this perspective, it’s strange that most past and present theorizing about
learning is opposed to the idea.
“A computational/representational approach to learning, which assumes that learning is the
extraction and preservation of useful information from experience, leads to a more modular, hence
more biological conception of learning. For computational reasons, learning mechanisms must
have a problem-specific structure, because the structure of a learning mechanism must reflect the
computational aspects of the problem of extracting a given kind of representation from a given
kind of data. Learning mechanisms are problem-specific modules – organs of learning – because
it takes different computations to extract different representations from different data.”
Again, all this seems so obvious to me I’m puzzled (sort of) that it’s a minority view. The
details of the authors’ reasoning are sometimes hard to follow, but intuitively their position seems
extremely plausible.