
Consciousness and the Embodied Self

Andrew Bailey
Philosophy Department, University of Guelph
Guelph ON N1G 2W1, Canada

abailey@uoguelph.ca
519-824-4120 x53227

Abstract: This paper deals with the relationship between the embodied cognition paradigm and two sets of its implications: its implications for the ontology of selves, and its implications for the nature and extent of phenomenal consciousness. There has been a recent wave of interest within cognitive science in the paradigm variously called embodied, extended, situated or distributed cognition. Although the ideas applied in the embodied cognition research program can be traced back to the work of Heidegger, Piaget, Vygotsky, Merleau-Ponty, and Dewey, the current thesis can be seen as a direct response and, in some cases, a proposed alternative to the cognitivist/classicist rule-based, information-processing model of cognition. Embodied cognition, by contrast, arises from real-time, goal-oriented bodily interactions with the world. I lay out three relations: the implications of embodiment for consciousness; the implications of embodiment for the self; and the tension between these two. I argue that the embodiment paradigm introduces a radical split between consciousness and the self, and that it does so by deflating our pre-theoretical instincts about consciousness and self in two different directions; however, I claim, what both these theoretical movements have in common is a scepticism about the notion of a psychological container defining a boundary between inside and outside. (203 words)

What is Embodied Cognition?

In recent years there has been a wave of interest within cognitive science in the paradigm variously called embodied, extended, situated or distributed cognition. Although the ideas applied in the embodied cognition research program can be traced back to the works of Heidegger, Piaget, Vygotsky, Merleau-Ponty, and Dewey, the current thesis may be seen as a direct response and, in some cases, a proposed alternative to the dominant cognitivist/classicist view of the mind. Cognitivism can be defined as a rule-based, information-processing model of cognition that: 1) characterizes problem-solving in terms of matching given inputs to appropriate outputs; 2) assumes the existence of symbolic, encoded representations which enable the system to devise a problem solution by means of computation; and 3) maintains that cognition can be understood by focusing primarily on an organism's internal cognitive processes (i.e., specifically those involving computation and representation). Thus, cognitivism holds that thinking consists in the manipulation of symbols according to explicit rules. Historically, for cognitivism, thinking is contrasted sharply with sensing and acting, neither of which is to be accounted for in terms of symbol-manipulation; both are instead merely biological or mechanical. Often, a distinction is drawn between humans (and possibly other thinking creatures) and lower animals that merely sense and act and are not intelligent at all.

Work in embodied cognition, by contrast, asserts that cognition arises from, or perhaps is enacted by, real-time, goal-oriented bodily interactions with the world. From this point of view, the manner in which organisms are embodied constrains and perhaps determines their cognition; cognition is situated and possibly off-loaded onto the environment; and it may even be held that off-line cognition, such as planning, dreaming, or metaphysical musing, is also body-based.

Thus, no sharp distinction is made between sensing, thinking, and acting. Furthermore, the kind of intelligence displayed by human beings is best seen as continuous with that exemplified in simpler creatures, such as insects and robots.

The devotees of embodied cognition present a far from unified front, however. They are influenced, to varying degrees, by empirical work in a diverse range of fields.

1) From robotics and artificial life research they have inherited the notion that sensory and motor systems may be the grounding level for cognition, rather than the activities of a central processing unit. Instead of attempting to solve computations in order to globally control behavior at an abstract level of knowledge representation, this bottom-up or "subsumption architecture" approach relies on the self-organization of simpler components to produce a variety of emergent behaviors that depend on the dynamical, nonlinear interaction of a robot with its environment. These systems do not depend upon sending inputs to a central processor that unifies and processes all the information for a particular time slice, and that then returns outputs to the peripheral systems; rather, all or much of the processing is done at the peripheries. (Beer and Chiel 1993; Brooks 1986, 1991a, 1991b; Chiel and Beer 1997; Clark 1997)

Consider, for example, a sequence of increasingly autonomous robot hexapods (six-legged locomotion units) constructed in the early 1990s at MIT and Case Western Reserve University (Beer et al. 1997, Ferrell 1995). In these robots, each leg has its own simple controller; a base level of activation drives the movement of the leg, and this activity is modulated by sensory feedback from the terrain the leg encounters. Thus, each leg is individually responsive to its environment. Furthermore, no additional controller is required to coordinate the activity of the six legs in order to make the robot walk across rough ground and around obstacles. Instead, certain patterns of inhibitory feedback links between neighbouring legs turn out to be sufficient to produce intelligent, responsive locomotion. In this way, quite sophisticated adaptive behaviour can emerge from the appropriately arranged interaction of peripheral bodily components.
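
To make the decentralized-control idea concrete, here is a minimal toy simulation, written only for illustration: the update rule, gains, and "terrain feedback" are invented rather than taken from the actual Beer, Brooks, or Ferrell controllers. Each leg has its own simple controller, and the only coordination comes from inhibitory links to neighbouring legs; an alternating pattern of swing phases tends to emerge with no central module issuing commands.

```python
# Toy sketch of decentralized, subsumption-style leg control (hypothetical
# parameters; not the actual Beer/Brooks/Ferrell controllers).  Each leg has
# its own activation level; local "terrain" feedback drives it up, inhibitory
# links from neighbouring legs push it down, and a leg swings when its
# activation crosses a threshold.  No central coordinator is involved.
import random

NUM_LEGS = 6   # 0,1 = front pair; 2,3 = middle pair; 4,5 = rear pair
NEIGHBOURS = {i: [j for j in range(NUM_LEGS)
                  if j != i and (abs(i - j) == 2 or i // 2 == j // 2)]
              for i in range(NUM_LEGS)}   # ipsilateral and contralateral neighbours

def walk(steps=30, drive=0.3, inhibition=0.5, threshold=1.0):
    activation = [random.random() for _ in range(NUM_LEGS)]
    for t in range(steps):
        swinging = [a >= threshold for a in activation]
        next_activation = []
        for i in range(NUM_LEGS):
            load = random.uniform(0.9, 1.1)            # stand-in for terrain feedback
            a = activation[i] + drive * load           # intrinsic drive toward swinging
            a -= inhibition * sum(swinging[j] for j in NEIGHBOURS[i])  # neighbours inhibit
            next_activation.append(0.0 if swinging[i] else max(a, 0.0))
        activation = next_activation
        print(f"t={t:02d}  " + "".join("^" if s else "-" for s in swinging))  # '^' = swing

if __name__ == "__main__":
    walk()
```

Nothing in the sketch inspects the global state of the robot; whatever gait appears is a by-product of the local couplings.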

2) From the psychology of perception, and especially from phenomena discovered or emphasised in the past few decades such as change blindness and inattentional blindness, comes the insight that perception is not a passive encoding of data streaming from the environment, but instead is active and attention-dependent. What we see depends upon how we go about looking: how we attend, for example. (Blackmore et al. 1995, Dennett 1992, Grimes 1996, Mack and Rock 1998, Mitroff et al. 2004, O'Regan et al. 2000, O'Regan 1992, Pessoa et al. 1998, Rensink et al. 1997, Rensink et al. 2000, Rensink 2000, Simons and Levin 1997, Simons and Levin 1998)

A striking example of this is an experiment known as "Gorillas in our midst" (Simons and Chabris 1999). In this experiment, subjects were required to watch a short video of two teams of three players (one team in white shirts, the other in black shirts) milling around in an open area before a bank of three elevator doors and tossing a basketball between themselves. The task the subjects were given was to attend to one of the two teams (either the white-shirts or the black-shirts) and count the number of passes made by members of this team. After the task was completed, subjects were asked for their count, and were also asked a surprise series of additional questions, including whether they noticed anything unusual in the video, or whether they observed anyone other than the six players. Across the various times the test was run, with 192 subjects, 46% of the subjects reported not noticing anything unusual; the proportion was as high as 58% in one version of the test (and as low as 33% in another). Yet something unusual had happened: about half way through the video, a woman dressed entirely in a gorilla suit (or, in some cases, a tall woman holding an open umbrella) walked slowly through the scene from left to right, taking five full seconds to do so. During this time the gorilla was in clear view of the experimental subjects, walking right through the middle of the players the subjects were concentrating on (i.e., right through the foveal area on which attention was focussed); and yet up to half of the subjects simply failed to notice this strikingly incongruous stimulus.

This finding was replicated even when the gorilla stopped right in the middle of the game, looked at the camera and thumped her chest, and then leisurely exited the picture (now taking nine rather than five seconds to cross the scene)! One implication of this experiment, and of many others like it showing the surprising limits on what we notice about visual scenes, seems to be that there is no conscious perception without attention, and thus that the limits of attention are also limits of conscious awareness. We do not construct detailed, persisting visual representations of our surroundings: our eyes do not create mental snapshots of the scenes before us.

3) Developmental psychology has contributed a new comprehension of the way brain, body, and local environment interact in complex ways to determine the early cognitive success of infants. Learning to walk, for example, does not consist in the unfolding of some sort of stored and pre-given developmental program (see Zelazo 1984), but instead essentially depends upon the way the environment pushes back against the motions of the infant's legs (Thelen 1986, 1995, 2000; Thelen and Smith 1994; Thelen et al. 2001; Smith and Gasser 2005). It has long been known that newborn infants instinctively perform stepping motions when they are held suspended above the ground, but that this reflex action disappears at about two months of age and reappears at about nine months. Experiments by Esther Thelen and her collaborators have shown that this period of disappearance is not because the reflex itself goes into abeyance but simply because the infant's leg mass overwhelms the action of the muscles during this period; hold a three-month-old baby in a tub of water, for example, and she makes walking motions. Similarly, infants too young to step under normal conditions perform coordinated stepping motions when placed on a treadmill; they even learn to adjust their step rates to different treadmill speeds, or to having their feet on two different treadmills moving at different speeds.

Data of this sort are taken to suggest that learning to walk involves a complex interplay of bodily and environmental, as well as cognitive, factors: walking is a behaviour that emerges from having legs of a certain mass, with muscles operating in certain ways (reflexively springing, or being stretched by the motion of a treadmill), in a particular kind of context, both cognitive (e.g. having volition) and external (e.g. being supported in water).

4) Evolutionary psychology has been taken to suggest that the primary function of the brain, from an evolutionary point of view, is to be the control system of the body; it is not designed to be a tool for describing and explaining reality. (Anderson 2005, Barsalou 1999, Beer et al. 1997, Clark 1997, Clark 1995, Glenberg 1997, Glenberg and Robertson 2000, Mitchell 1999, Pinker 1997, Port and van Gelder 1995) Furthermore, our cognitive system has evolved to respond to an environment in real time, and this temporal pressure, arguably, to a greater or lesser extent constrains what cognition might, biologically plausibly, be. In particular, some argue that there is a need to avoid a representational bottleneck in organisms that urgently need to act and react appropriately under significant time pressure. (See the debate pursued in Agre 1993, Beer 2000, Brooks 1991, Markman and Dietrich 2000, Vera and Simon 1993a, b.)

A simple example of this kind of representationally lightweight but fast and effective control mechanism is the system skilled catchers are said to use to catch balls (a fly ball in baseball, for example). Rather than mentally calculating the path of the projectile and on this basis predicting where it will land, a catcher need simply run towards the ball in such a way that her visual image of the ball continually travels upwards in a straight line: if she does this, she will arrive directly underneath the ball as it completes its trajectory (McBeath et al. 1995). This requires the catcher to respond very sensitively to instantaneous visual cues (there must be a tight, real-time link between the catcher's behaviour and her environment), but it does not require any complex sub-personal mental modelling of the physics of moving objects. (Furthermore, there is empirical work to suggest that predators adopt similar linear optical trajectory behaviour in order to catch their prey (Collett and Land 1975, Ghose et al. 2006, Lanchester and Mark 1975, Olberg et al. 2000, Shaffer et al. 2004).)
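
To see how little machinery such a strategy needs, here is a deliberately simplified, one-dimensional sketch (the fielder runs along the ball's line of flight; the launch speed, angle, and starting position are invented for illustration). It is closer to the "optical acceleration cancellation" variant of the idea than to McBeath et al.'s lateral linear-optical-trajectory model: the fielder simply keeps the ball's image rising at the steady rate she first observes, and never computes a landing point.

```python
# Simplified illustration of representation-light ball catching.  The fielder
# tracks only how fast the ball's image is rising (the tangent of her gaze
# elevation) and keeps it on the constant-rate schedule observed at launch.
# All numerical values are invented for illustration; no drag is modelled.
import math

def catch(ball_speed=30.0, launch_deg=55.0, fielder_start=95.0, dt=0.01, g=9.81):
    angle = math.radians(launch_deg)
    vx, vy = ball_speed * math.cos(angle), ball_speed * math.sin(angle)
    t, bx, by = dt, vx * dt, vy * dt - 0.5 * g * dt * dt   # first observation of the ball
    fx = fielder_start
    rise_rate = (by / (fx - bx)) / t    # initial rate of rise of the image
    while by > 0.0:
        # Keep the image on its initial linear schedule: tan(elevation) = rise_rate * t.
        # The landing point is never computed; only the image's motion is monitored.
        required_gap = by / (rise_rate * t)
        fx = bx + required_gap
        t += dt
        bx, by = vx * t, vy * t - 0.5 * g * t * t          # ballistic flight of the ball
    print(f"ball lands near x = {bx:5.1f} m; fielder is at x = {fx:5.1f} m")

if __name__ == "__main__":
    catch()
```

Running the sketch shows the fielder ending up (to within a time step) where the ball comes down, even though nothing in the loop represents the ball's trajectory or its landing point.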

5) Finally, dynamical systems theory provides a potential explanatory framework for psychology that competes with cognitivism: in particular, dynamical systems theory, at least on the face of it, does not allow for any clear-cut distinction between the organism and its environment, and thus none between a cognitive system and its inputs and outputs. (Beer 1995, 2000) Instead, the focus is on the unfolding trajectory of a system's state and the internal and external forces that shape this trajectory, rather than the representational content of its constituent states or the underlying physical mechanisms that instantiate the dynamics. (Beer 2000, 91)
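
This picture can be given a compact schematic form. The equations below are a paraphrase of the coupled-dynamical-systems formulation in the spirit of Beer (1995), not a quotation; the symbols are introduced here for convenience, with x_A and x_E the state vectors of agent and environment, A and E their dynamical laws, and S and M the sensory and motor couplings between them.

```latex
% Schematic agent-environment coupling, in the spirit of Beer (1995).
% x_A, x_E: states of agent and environment; S, M: sensory and motor couplings.
\begin{aligned}
  \dot{x}_A &= \mathcal{A}\bigl(x_A;\, S(x_E)\bigr) \\
  \dot{x}_E &= \mathcal{E}\bigl(x_E;\, M(x_A)\bigr)
\end{aligned}
```

Because each right-hand side depends on the other system's state, any behavioural trajectory is strictly a trajectory of the joint agent-environment system; there is no principled joint at which to cut it into input, inner processing, and output.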

This diversity of empirical influences contributes to a diversity of claims made by embodied cognition theorists. They may hew to one or more of the following theses, not all of which are prima facie mutually compatible:

1) Our mental models or maps are more partial than we suppose. Typically, on this view, the partialness of our models is supplemented (and rendered hard to notice) by our interactions with the environment (such as visual saccades, or keeping written records). It is possible to treat elements in the environment as supplementary parts of our representational models: that is, as Andy Clark and David Chalmers have influentially put it, perhaps our minds are extended outside our skulls (Clark and Chalmers 1998). The most extreme version of this view is that there are no mental models or maps at all. As roboticist Rodney Brooks famously suggested, the world is its own best model (Brooks 1991).

2) Our mental models or maps are more action-oriented than we have supposed. The idea that our models are action-oriented is something like the following: instead of creating an internal map of, say, some geographical space, we represent landmarks and tie these representations to potential actions at these landmarks (e.g. at this spot I can go left to get to the library, or right to enter the kitchen) (Clark 1997). At the extreme, there are radically enactive views of cognition that equate perceptual content with sensory-motor knowledge: as it is sometimes put, seeing is more like touching than depicting. (Merleau-Ponty; O'Regan and Noë 2001; Noë 2005)

3) Some cognitive work is off-loaded onto our environment. What counts as mental computation extends outside the skull by utilizing environmental structures (e.g. playing Tetris or Scrabble by manipulating the pieces, doing long multiplication with pencil and paper, using nautical slide rules). This has been called epistemic action. (Kirsh and Maglio 1994) At the extreme, certain ways of thinking are not possible without embedding the organism in a particular sort of environment: cognition must, sometimes or always, be understood as taking place within a unit that includes both the organism and its environment. (Kirsh 1995, Beer 1989, Thelen & Smith 1994, McClelland et al. 1986, Clark 1989, Hutchins 1995, Clark and Chalmers 1998)

4) Our mental models or maps are non-representational: they are not mirrors of nature. This view itself encompasses at least three strands:

a) Neurophenomenology. The properties and categories that our minds encode are not read off from the external environment but are constructed through an interaction between the organism and its environment. For example, colour properties, on this kind of view, are neither objective properties of external objects, nor merely subjective or purely mental properties that form a veil of appearances between us and reality. Instead, colour is (something like) a real set of categories that emerge as a joint consequence of our embodiment and the nature of our environment.

Thus our world is partially or completely a matter of codependent arising rather than of representation. At the extreme end of this spectrum is the view that there is in principle no such thing as a representation of the world as it really is and hence, perhaps, there is no mind-independent reality. (Varela, Dreyfus)

b) Neo-pragmatism. Representation does not occur in virtue of inner mental tokens standing in some intentional relation to the world: rather, representation is a matter of complex interactions between organisms and/or their social or material environments. Mentality is a matter of complex overt behaviour in a context, not inner coding. (Rorty 1995, Bickhard?, Davidson?, Brandom?)

c) Anti-cognitivism. All or much of the processing by sensory and motor systems does not involve the computational manipulation of representations, but instead consists in some sort of complex, mechanical interaction with the environment (for example, the forces acting on a set of legs). A related claim is that our cognitive architecture is, fundamentally or thoroughly, associative or pattern-completing rather than classically computational. (Brooks, Clark)

5) Our cognitive capacities and predispositions are strongly influenced by our embodiment (e.g. our relative size, our visual orientation and other facts about our sensory modalities, our bipedalism, our emotional makeup, and so on). The notion of embodiment may include cultural as well as biological factors, such as facts about the natural languages we speak or theories we have inherited. At the extreme, our cognitive capacities are unavoidably fully determined by our embodiment. (Lakoff, Johnson)

6) The distinction between the organism and its environment is a less necessary or productive dualism than we have supposed. This might be taken to be a consequence of one or more of 1) to 5), or it might be a thesis that is treated as self-standing. The causal patterns that explain behaviour are not usefully divided into inner and outer components; the isolated organism is not a useful unit of analysis.

The Embodied Self

What implications does the embodiment paradigm have for thinking about the self? As I have just shown, the embodied mind (hereafter EM) approach is not monolithic, and there are very important differences between the separate strands. Nevertheless, all the main variants of EM share a commitment to the following two key, connected consequences for the self: 1) EM de-reifies the self; 2) EM smears the virtualised self across the boundary between organism and environment.

The blurring of the distinction between sensing, thinking and acting that is characteristic of EM reduces the temptation to partition the flow from world to mind to action into three separate parts, with a central arena that is the domain of the mental as opposed to the bodily: the realm of the self, in contradistinction to the self's body or environment. Thus, EM de-emphasises the notion of some kind of centralised control of our behaviour; it denies that there is any central rational module that is ultimately responsible for what we do. And this denial extends not only to the mind, but even to the organism itself. Organisms may be treated as agents, but only in a somewhat dilute post-Cartesian sense whereby the organism's behaviour is a result of dynamic coupling between the organism and its environment (see Haugeland 1995).


Intelligent behaviour in humans is seen as continuous with, rather than importantly distinct from, the appropriate behaviour of simpler organisms and even non-biological systems in relation to their environments. And in these cases, as in Herbert Simon's well-known example of an ant traversing a terrain full of obstacles, the complexity of behaviour is often attributable to the complexity of the environment to which the organism is responding rather than to the inner complexity of the organism itself.

It is not only the notion of agency that EM deflates, but also many of the other aspects of selfhood. In particular, the various strands of EM that either externalise or eliminate mental representations act as a solvent on the scaffolding of selfhood provided by memories, stable beliefs, and personality. Andy Clark, for example, suggests that personal memories (memories of people, events and places encountered in one's own autobiography) need not be located inside the skull but might repose in notebooks, say, or computer files (Clark and Chalmers 1998). Other, even more radical proponents of EM deny that there even exist inside our skulls contentful, discrete representational states that constitute memories or beliefs (or other propositional attitude states).

To be clear, EM does not deny the existence of selves, or of memories, personalities or agency. Rather, it recasts these things in a way that tends to be ontologically deflationary, and it refuses to contain them on one side of the skin-world boundary. The tendency in EM is to see selves and their components as aspects of the ways in which complex, indissolubly interrelated organism-environment systems can be described. One's personality, one's long-term belief dispositions, are not essential properties of one's mind, or even of one's mind and body in combination; instead, they are stable patterns that emerge from the interrelations over time between you and your evolving environment.

To put it another way, the components of selfhood do not supervene on (are not fixed by) intrinsic facts about the organism, but are instead fixed by facts about the complex system composed of the organism and its environment. And they are so fixed, in general, not because this system contains those components as parts (selves, mental attitudes, agencies, even minds), but because the system as a whole can be accurately described as if such entities were discrete elements within it.

Embodied Mind and Phenomenal Consciousness

I now want to change tack and argue that the embodied cognition paradigm entails, or at least strongly suggests, a radical distinction between cognition and phenomenality. In distinguishing between cognition and phenomenality, I mean merely to draw attention to the familiar contrast between what we might call the functional or information-processing character of mental states and the what-it-is-likeness of certain conscious mental states. Thus judging that p or deciding to perform action r are cognitive states; the sensation of falling or the taste of shrimp are phenomenal states. I do not intend to prejudge any issues as to whether this distinction picks out two different classes of mental state, let alone whether it is the symptom of a deeper epistemic or metaphysical rift.

The case that the duality of cognition and the phenomenal follows from EM has two parts. First, it is always elements of cognition, not phenomenal consciousness, that are externalized or de-reified by EM; the various theses of EM apply to cognitive, not phenomenal, states. Second, some tendencies in EM, particularly enactivist kinds of neurophenomenology, actually seem to require that phenomenal states exist as states of the organism.

On the first point, EM applies only to states that modulate intelligent behaviour with respect to the (proximal or more distal) environment: states such as perceptual representations, standing beliefs, dispositions to behave in certain ways, and so on. Thus, for example, the cognitive process of planning to re-visit the Museum of Natural History may be partially extended outside the skull, utilizing notes in lieu of internalised memories and action-generating cues from the environment in lieu of a complete mental model of the relevant spaces.


The perception of a tree may arise from an interaction between the tree and the embodied and active perceiver. However, none of this goes to show that states of phenomenal consciousness (the states that, after all, good old-fashioned cognitive science has tended mostly to ignore in its explanatory projects) are outside our skulls, or mere artefacts on the surface of complex, dynamical systems, visible only from a distance.

The second point goes beyond this. There is some reason to think that certain flavours of EM presuppose the existence of inner, real, phenomenal states, in addition to the distributed, dynamic, embodied and embedded systems they describe. Consider, for example, a neurophenomenological account of colour perception, of the sort given by Varela, Thompson and Rosch in The Embodied Mind. On this kind of view, a key claim is that colour perception does not consist in the passive reception and encoding of pre-given categories from the organism's environment; that is, colour is not a real property that we attempt to mentally represent by mirroring it in the mind. On the other hand, on this kind of view, colour is not a merely mental category, a sort of veil of appearance between us and the real nature of our environment. For neurophenomenology, and related branches of EM, the point of colour perception is not to represent at all; rather, colour arises from our on-going, active interaction with our physical environment, and this interaction itself is shaped by both the physical nature of that environment and the nature of our embodiment: our capacities for acting in certain ways, and not in others, with respect to a given environment. The colour we experience, then, is quite real, but it is neither objective nor subjective: it is what is enacted when you put organisms with capacities like ours in a natural (or artificial) environment like this. Colour perception, as it were, carries information about how things change as we act in various ways (move our heads, adjust the lighting, peel the banana), but it does not form part of an inner mental model of a static, given external world.


So, let us grant all of this for the sake of argument. We grant that the colour properties we experience are not really there on the surface of objects. We also insist that colour is not a kind of grand illusion, a failed attempt to mentally model a colourless external reality. But still, although this might change quite radically our notion of the content or function of colour perception, it does not do away with the phenomenal consciousness of colour experience.

When we see a strawberry, the neurophenomenologist will agree, it looks red; indeed, it is red, really, even though this redness may correspond to no stable, mind-independent category. But it is red in virtue of giving rise to a certain kind of conscious experience under certain circumstances and not others; it is red because our active, embodied engagement with the environment includes a particular kind of co-dependently arising consciousness. This phenomenal redness is not in the world; it is in the mind.

Consciousness and the Self

The embodiment paradigm thus generates the following tension: i) the stream of consciousness is located within the boundaries of the organism; ii) the self (and its memories, beliefs, personality traits, etc.) is not located within the boundaries of the organism. We intuitively equate ourselves with our own consciousness, but this intuition cannot survive the move to embodiment. Our consciousness is not our mind, and if we are our minds we are not our consciousness. Perhaps this tension is so much the worse for EM; perhaps it is a problem for our pre-theoretical notion of the self. Perhaps neither: maybe what it means is that we should simply choose between an old view of the self as consciousness, and a newer view that sees it as an amorphous cloud more or less centred on our bodies.


Externalism about the self and externalism about non-phenomenal cognition come from a shared source: scepticism about the notion that mentality consists of objects/states plus a container. For EM, the self is not a container for thoughts and traits; cognition is not a process internal to the organism but one that loops between the organism and its environment. It is phenomenal consciousness that fits uncomfortably with this new paradigm: it is structured as states of an internal medium; colours, sounds, tastes and smells are qualitative mental states, not ways of acting or distributed quasi-representations. And so EM, though radical and self-consciously modern, reintroduces us to an old dualism recast: not the dualism between mind and body, nor that between self and world, but that between consciousness and mind.


References

Agre, P.E. (1993). The symbolic worldview: Reply to Vera and Simon. Cognitive Science 17: 61–69.
Anderson, M.L. (2005). Representation, evolution and embodiment. Theoria et Historia Scientiarum 9.1.
Ballard, D.H. (1991). Animate vision. Artificial Intelligence 48: 57–86.
Ballard, D.H. (1996). On the function of visual representation. In Perception (Vancouver Studies in Cognitive Science), Volume 2, ed. K. Akins. New York: Oxford University Press, 111–131.
Barsalou, L.W. (1999). Perceptual symbol systems. Behavioral and Brain Sciences 22: 577–660.
Beer, R.D. (1995). A dynamical systems perspective on agent-environment interaction. Artificial Intelligence 72: 173–215.
Beer, R.D. (2000). Dynamical approaches to cognitive science. Trends in Cognitive Sciences 4: 91–99.
Beer, R.D., and H.J. Chiel (1993). Simulations of cockroach locomotion and escape. In Biological Neural Networks in Invertebrate Neuroethology and Robotics, ed. R. Beer et al. Boston: Academic Press.
Beer, R.D., R.D. Quinn, H.J. Chiel, and R.E. Ritzmann (1997). Biologically inspired approaches to robotics. Communications of the ACM 40 (3): 30–38.
Blackmore, S.J., G. Brelstaff, K. Nelson, and T. Troscianko (1995). Is the richness of our visual world an illusion? Transsaccadic memory for complex scenes. Perception 24: 1075–1081.
Brooks, R.A. (1986). A robust layered control system for a mobile robot. IEEE Journal of Robotics and Automation RA-2: 14–23.
Brooks, R.A. (1991a). Intelligence without representation. Artificial Intelligence Journal 47: 139–160.


Brooks, R.A. (1991b). Intelligence without reason. In Proceedings of the 12th International Joint Conference on Artificial Intelligence. San Francisco: Morgan Kaufmann, 569–595.
Chiel, H.J., and R.D. Beer (1997). The brain has a body: Adaptive behavior emerges from interactions of nervous system, body, and environment. Trends in Neurosciences 20: 553–557.
Clark, A. (1995). Moving minds: Situating content in the service of real-world success. Philosophical Perspectives 9: 89–104.
Clark, A. (1997). Being There: Putting Brain, Body, and World Together Again. Cambridge, MA: MIT Press.
Collett, T.S., and M.F. Land (1975). Visual control of flight behavior in the hoverfly, Syritta pipiens L. Journal of Comparative Physiology A 99: 1–66.
Dennett, D.C. (1992). Filling in versus finding out: A ubiquitous confusion in cognitive science. In Cognition: Conceptual and Methodological Issues, ed. H.L. Pick, Jr., P. van den Broek, and D.C. Knill. Washington, DC: American Psychological Association, 33–49.
Ferrell, C. (1995). Global behavior via cooperative local control. Autonomous Robots 2: 105–125.
Ghose, K., T.K. Horiuchi, P.S. Krishnaprasad, and C.F. Moss (2006). Echolocating bats use a nearly time-optimal strategy to intercept prey. PLoS Biology 4: e108.
Glenberg, A.M. (1997). What memory is for. Behavioral and Brain Sciences 20: 1–55.
Glenberg, A.M., and D.A. Robertson (2000). Symbol grounding and meaning: A comparison of high-dimensional and embodied theories of meaning. Journal of Memory and Language 43: 379–401.
Grimes, J. (1996). On the failure to detect changes in scenes across saccades. In Perception (Vancouver Studies in Cognitive Science), Volume 2, ed. K. Akins. New York: Oxford University Press, 89–110.


Haugeland, J. (1995). Mind Embodied and Embedded. Acta Philosophica Fennica 58: 233–267. Reprinted in his Having Thought, Cambridge, MA: Harvard University Press, 207–237.
Kirsh, D. (1995). The intelligent use of space. Artificial Intelligence 73: 31–68.
Kirsh, D., and P. Maglio (1994). On distinguishing epistemic from pragmatic action. Cognitive Science 18: 513–549.
Lanchester, B.S., and R.F. Mark (1975). Pursuit and prediction in the tracking of moving food by a teleost fish (Acanthaluteres spilomelanurus). Journal of Experimental Biology 63: 627–645.
Mack, A., and I. Rock (1998). Inattentional Blindness. Cambridge, MA: MIT Press.
Markman, A.B., and E. Dietrich (2000). In defense of representation. Cognitive Psychology 40: 138–171.
McBeath, M.K., D.M. Shaffer, and M.K. Kaiser (1995). How baseball outfielders determine where to run to catch fly balls. Science 268: 569–573.
Mitchell, M. (1999). Can evolution explain how the mind works? A review of the evolutionary psychology debates. Complexity 4.3: 17–24.
Mitroff, S.R., D.J. Simons, and D.T. Levin (2004). Nothing compares 2 views: Change blindness can occur despite preserved access to the changed information. Perception and Psychophysics 66: 1268–1281.
Olberg, R.M., A.H. Worthington, and K.R. Venator (2000). Prey pursuit and interception in dragonflies. Journal of Comparative Physiology A 186: 155–162.
O'Regan, J.K. (1992). Solving the real mysteries of visual perception: The world as an outside memory. Canadian Journal of Psychology 46: 461–488.
O'Regan, J.K., H. Deubel, J.J. Clark, and R.A. Rensink (2000). Picture changes during blinks: Looking without seeing and seeing without looking. Visual Cognition 7: 191–212.


Pessoa, L., E. Thompson, and A. Noë (1998). Finding out about filling in: A guide to perceptual completion for visual science and the philosophy of perception. Behavioral and Brain Sciences 21: 723–802.
Pinker, S. (1997). How the Mind Works. New York: W.W. Norton.
Port, R., and T. van Gelder (1995). Mind as Motion: Explorations in the Dynamics of Cognition. Cambridge, MA: MIT Press.
Rensink, R.A. (2000). The dynamic representation of scenes. Visual Cognition 7: 17–42.
Rensink, R.A., J.K. O'Regan, and J.J. Clark (1997). To see or not to see: The need for attention to perceive changes in scenes. Psychological Science 8: 368–373.
Rensink, R.A., J.K. O'Regan, and J.J. Clark (2000). On the failure to detect changes in scenes across brief interruptions. Visual Cognition 7: 127–146.
Shaffer, D.M., S.M. Krauchunas, M. Eddy, and M.K. McBeath (2004). How dogs navigate to catch frisbees. Psychological Science 15: 437–441.
Simon, H. (1981). The Sciences of the Artificial, second edition. Cambridge, MA: MIT Press.
Simons, D.J., and C.F. Chabris (1999). Gorillas in our midst: Sustained inattentional blindness for dynamic events. Perception 28: 1059–1074.
Simons, D.J., and D.T. Levin (1997). Change blindness. Trends in Cognitive Sciences 1: 261–267.
Simons, D.J., and D.T. Levin (1998). Failure to detect changes to people in a real-world interaction. Psychonomic Bulletin and Review 5: 644–649.
Smith, L., and M. Gasser (2005). The development of embodied cognition: Six lessons from babies. Artificial Life 11: 13–29.
Thelen, E. (1986). Treadmill-elicited stepping in seven-month-old infants. Child Development 57: 1498–1506.


Thelen, E. (1995). Motor development: A new synthesis. American Psychologist 50.2: 79–95.
Thelen, E. (2000). Grounded in the world: Developmental origins of the embodied mind. Infancy 1: 3–30.
Thelen, E., G. Schöner, C. Scheier, and L.B. Smith (2001). The dynamics of embodiment: A field theory of infant perseverative reaching. Behavioral and Brain Sciences 24: 1–86.
Thelen, E., and L. Smith (1994). A Dynamic Systems Approach to the Development of Cognition and Action. Cambridge, MA: MIT Press.
van Gelder, T. (1995). What might cognition be, if not computation? Journal of Philosophy 92: 345–381.
van Gelder, T. (1998). The dynamical hypothesis in cognitive science. Behavioral and Brain Sciences 21: 615–628.
Varela, F.J., E. Thompson, and E. Rosch (1991). The Embodied Mind: Cognitive Science and Human Experience. Cambridge, MA: MIT Press.
Vera, A.H., and H.A. Simon (1993a). Situated action: A symbolic interpretation. Cognitive Science 17: 7–48.
Vera, A.H., and H.A. Simon (1993b). Situated action: Reply to reviewers. Cognitive Science 17: 77–86.
Zelazo, P.R. (1984). The development of walking: New findings and old assumptions. Journal of Motor Behavior 15: 99–137.
