
Mechanisms, Optimality and Design

(Or: Why Is the Brain in the Head?)


Sergio Barberis
University of Buenos Aires

Abstract
The new Mechanist philosophers hold that explanations in neuroscience describe mechanisms
and integrate multiple fields. The first assumption is usually articulated in terms of a strict
model-to-mechanism mapping requirement. The second assumption is articulated through the
idea of a mosaic unity of neuroscience that results from the integration of constraints from
multiple fields on the space of possible mechanisms for a target phenomenon. Together these
assumptions imply that a scientific model contributes to the explanation of a phenomenon to the
extent that it places causal/mechanical constraints on the space of possible mechanisms for that
phenomenon. I argue that this consequence is in tension with the distinctive contribution of
optimality modeling to explanations in neuroscience. I discuss the case study of neural wiring
optimization models in neuroanatomy and systems neuroscience. I argue that optimality
explanations in neuroscience, properly understood, are design explanations. Design
explanations explain why the parts, activities and organizational features of the target
mechanism are as they are on the basis of the utility of such mechanistic features given the
design constraints imposed by other features of the organism and the environment. The
distinctness of design explanations is in tension with the model-to-mechanism mapping
requirement. The functional dependencies that optimality models depict, however, surely shrink
the space of possible mechanisms for the relevant phenomena, contributing to the mosaic unity
of neuroscience.
Keywords: Mechanism, Optimality, Design Explanation, Neural Wiring Optimization, Mosaic
Unity of Neuroscience.

1. Introduction
In Explaining the Brain (2007), Carl Craver introduces three assumptions about
explanation in neuroscience: (i) explanations describe mechanisms; (ii) explanations
span multiple levels; and (iii) explanations integrate multiple fields. In this paper, I am
concerned with assumptions (i) and (iii) and their relations. Assumption (i) is usually
articulated through a model-to-mechanism mapping (3M) requirement (Kaplan 2011;
Kaplan and Craver 2011; Craver and Kaplan 2011). According to the 3M constraint, a
model explains a target phenomenon to the extent that elements in the model represent

relevant parts, activities, organizational features or causal relations in the mechanism for
that phenomenon. Assumption (iii) is articulated through the idea of a mosaic unity of
neuroscience (Craver 2007). The mosaic unity of neuroscience results from the
integration of constraints from multiple fields on the space of possible mechanisms for a
target phenomenon. The 3M requirement and the assumption of the mosaic unity
together imply that a scientific model contributes to the explanation of a phenomenon to
the extent that it places causal/mechanical constraints on the space of possible
mechanisms for that phenomenon. I argue that this consequence is in tension with the
distinctive contribution of optimality modeling to explanations in neuroscience.
Optimality models explain why a target system exhibits certain features without placing
any causal or mechanical constraint on the relevant space of possible mechanisms. The
solution I advocate for this tension is to abandon the 3M requirement on explanation.
In order to motivate this revision, I discuss the case study of neural wiring
optimization models in neuroanatomy and systems neuroscience (Cherniak et al. 2002,
Kaiser and Hilgetag 2015). These models offer optimality explanations of some general
features of neural systems. For example, the actual spatial placement of ganglia in C. elegans is explained by the fact that connections between these ganglia are expensive to build and maintain, and that the actual placement is very close to the optimal layout for those components in the worm, i.e. the layout that preserves the desired connectivity between elements using the smallest total wire length (Cherniak 1994; Chen et al.
2006). A similar explanation can be envisioned for the placement of the core sensory
areas of the macaque visual cortex (Cherniak et al. 2004). Optimality explanations are
not causal/mechanical explanations: the constraints and tradeoffs that determine the
optimum of the model are not real parts, activities or organizational features of the
target mechanism. Furthermore, optimality explanations in neuroscience should not be
confused with other kinds of non-causal explanation in the biological sciences, such as
equilibrium explanations (Rice 2012, 2013) or program explanations (Lyon 2012). I
argue that optimality explanations in neuroscience, properly understood, are design
explanations (Wouters 1999, 2007). Design explanations explain why the parts, activities and
organizational features of the target mechanism are as they are on the basis of the utility of such
mechanistic features given the design constraints imposed by other features of the organism and
the environment. The distinctness of design explanations is in tension with the 3M
requirement. The functional dependencies that optimality models depict, however,

surely shrink the space of possible mechanisms for the relevant phenomena,
contributing to the mosaic unity of neuroscience.

2. Mechanism
Craver (2007, p. 1) claims that explanations in neuroscience (i) “describe
mechanisms”, (ii) “span multiple levels” and (iii) “integrate multiple fields.” How
should we articulate assumption (i), i.e. that explanations describe mechanisms? This
assumption has descriptive and normative aspects. As a descriptive claim, it is supposed
to accurately capture the explanatory practices of contemporary neuroscience.
Neuroscientists, cognitive scientists and neurobiologists usually conceive their own
work as the search for the mechanisms of normal human and animal behavior. As a
normative claim, assumption (i) aims to make explicit the criteria that demarcate good explanations from bad ones. Explanations in neuroscience must describe
mechanisms. Notice that, in its normative aspect, assumption (i) is not a claim about the
representational format that successful explanations should take in neuroscience.
Scientific explanations may be conveyed by mechanism diagrams (Machamer, Darden
and Craver 2000), prototype vectors (Churchland 1989) or abstract mathematical
equations (Levy 2013), just to name a few representational possibilities. What good
explanations are crucially depends, instead, on what the objective structure of the world
actually is.1 In order to make the norms of explanation explicit, “one must look beyond
representational structures to the ontic structures in the world” (Craver 2015, p. 28).
Craver (2015) believes, in fact, that “if the philosophical topic of explanation is to
provide criteria of adequacy for scientific explanations, then the ontic conception is
indispensable”.
What is the ontic conception? Wesley Salmon’s “most penetrating insight”,
according to Craver (2007, p. 26), was to notice an ambiguity in the term,
“explanation”. In a sense, explanations are explanatory texts –scientific representations
of diverse formats– that function as vehicles for conveying information about an
explanandum phenomenon. Explanatory texts are the kind of entities that may be true or
false. As epistemic products, explanations are “complex representations operated upon
to generate knowledge and facilitate understanding” (Wright 2012, p. 376). In another
sense, however, explanations are worldly structures, objective features of the world.
1 Illari (2013) frames the same observation in this way: “Craver is not arguing about what an
explanation itself is, but arguing for the importance of ontic constraints in recognizing, finding
and possibly even using good explanations”.
Considered as ontic structures, explanations are not the kind of entities that may be
“good” or “bad”. They just are. The main thrust of those philosophers who endorse the ontic view is that explanatory texts are good explanations if, and only if, they exhibit ontic explanations in the world. 2
That thrust is usually complemented with the idea that “causality is intimately
involved in explanation” (Salmon 1984a, p. 296) and that the “explanation of an event
involves exhibiting that event as it is embedded in its causal network and/or displaying
its internal causal structure” (Salmon 1984a, p. 298). In this way, ontic explanations are
usually interpreted by mechanists as causal-mechanical patterns and regularities into
which events fit. It is in this sense that mechanists can take it to be uncontroversial that
“the organized activities of a system’s components explain the activities of the whole”
(Piccinini and Craver 2011, p. 284, fn. 2) or that, in the neurotransmitter release
research program, “the explanandum is the release of one or more quanta of
neurotransmitters in the synaptic cleft” and “[t]he explanans is the mechanism linking
the influx of Ca2+ into the axon terminal” (Craver 2007, p. 22). Mechanisms explain the
phenomena they are responsible for (Illari 2013). Consequently, explanatory texts in

2 Not all mechanist philosophers endorse the ontic conception of explanation. Bechtel (2008, p.
18) makes this point clear when he affirms that “[e]xplanation is fundamentally an epistemic
activity performed by scientists”. In the same vein, Wright (2012, p. 375) affirms that “[t]he
default sense of the infinitive to explain is a communicative one, pertaining to the transmission
of understanding” and that “explaining some phenomenon φ involves operating on internal
and/or external representations of φ to understand the how or the why.” For the epistemic view
of explanation, explanatory texts come first. To say that scientific explanations are mechanisms
or that mechanisms explain is merely to speak using metaphors of personification. To put it
differently, sentences like: “Mechanistic explanations involve mechanisms” are just explanatory
claims in which the representation- and model-talk has been ellipsed out of the construction.
Against the ontic view, Wright (2012) contends that “explanation” and other cognate terms are
straightforwardly unambiguous. Mechanisms explain only in a metaphorical and elliptical way.
Cell membranes, biochemical pathways, genetic cascades are simply “inapposite” candidates
for doing any explaining –just as they are inappropriate candidates for other activities, such as
watching a movie, or listening to the Beatles. Component parts and activities constitute the
mechanisms that are responsible for some explanandum phenomenon φ, but they do not explain
φ unless they are codified into scientific representations of some sort. I think Wright’s reasons
are irrelevant against the ontic conception. First, linguistic reasons based on current usage of a
word in natural language, such as Wright’s considerations, could have been useful back in 1984,
when Salmon’s linguistic deliberation took place. Now, thirty years later, it doesn’t matter
whether there is an ontic sense of “explanation” in ordinary language (Illari 2013). Craver
(2015, p. 31 fn. 2) notices that the German verb “erklären” is not ambiguous like the English
word “explanation”: it contains the idea of “making clear” which could be easily associated
with the epistemic sense of explanation. Salmon’s purported ambiguity of “explanation” may
not be replicable in languages other than English. That fact will not budge Mechanism. The
relevant tenet of mechanistic explanation is that explanatory texts in neuroscience are
acceptable if, and only if, they capture (with any degree of abstraction and/or precision) actual
mechanisms in the world. Neither Bechtel nor Wright has argued against this insight.
neuroscience are good explanations to the extent that they exhibit the relevant causal
mechanisms that produce, realize or maintain an explanandum phenomenon, i.e., some
of the “knobs and levers that potentially afford control over whether and precisely how
the phenomenon manifests itself” (Kaplan and Craver 2011, p. 602).
This new Mechanist philosophy has few, if any, resemblances to the Mechanist
philosophy of the Scientific Revolution: “[w]hat remains of mechanistic explanation
from the days of Descartes and Boyle is this: that one explains a phenomenon by
showing how it is situated in the causal structure of the world” (Kaplan and Craver
2011, p. 606). Recently, Kaplan and Craver (2011) have articulated the assumption that
genuine explanations in neuroscience describe mechanisms in a strict model-to-
mechanism-mapping (3M) requirement, according to which a model in neuroscience
successfully explains a phenomenon to the extent that there is a plausible mapping
between (at least some) elements in the model and elements in the mechanism for that
phenomenon. The 3M constraint is designed to be “the mechanist’s gauntlet” (Kaplan
and Craver 2011, p. 602), i.e. a default assumption that the phenomena targeted by the
neurosciences have mechanistic explanations.

(3M) In neuroscience, a model of a target phenomenon explains that phenomenon to the extent that (a) the variables in the model correspond to identifiable components, activities, and organizational features of the target mechanism that produces, maintains, or underlies the phenomenon, and (b) the (perhaps mathematical) dependencies posited among these (perhaps mathematical) variables in the model correspond to causal relations among the components of the target mechanism. (See Kaplan 2011, p. 347; Kaplan and Craver 2011, p. 611)

It is important to notice that the 3M constraint does not demand that every
component and every causal relation among the components of a target mechanism
should be incorporated into one complete, fully detailed mechanistic model for the
explanandum phenomenon. The 3M constraint does not imply that only the most
completely detailed models will be genuinely explanatory. Indeed, “the idea of an
ideally complete how-actually model, one that includes all of the relevant causes and
components in a given mechanism, no matter how remote, negligible, or tiny, without
abstraction or idealization, is a philosopher’s fiction” (Kaplan and Craver 2011, p. 610).

For the new Mechanist philosophy, explanation in neuroscience is local and
fragmentary, oscillating up and down in a hierarchy of mechanisms to find the causal
and/or constitutive aspects of that hierarchy that account for the explanandum
phenomenon. The 3M requirement is fully compatible with the co-existence of many
idealized, abstract or incomplete models of mechanisms contributing to the explanation
of a given phenomenon, inasmuch as those models are capable of depicting some causal
relations, properties or components of the mechanism that underlies the phenomenon.
Craver (2007) also claims that explanations in neuroscience integrate multiple
fields. According to this philosopher, “the unity of neuroscience is achieved as different
fields integrate their research by adding constraints on multilevel mechanistic
explanations” (see also Craver and Kaplan 2011; Piccinini and Craver 2011; Boone and
Piccinini 2015). Mechanistic model building proceeds by the accumulation of
constraints from different fields on the space of possible mechanisms (hereafter: SPM)
for a given phenomenon. The SPM contains all the mechanisms that could possibly
explain an explanandum phenomenon (Craver 2007, p. 247).3 Single how-possibly
explanations are represented by points in this space; classes of similar mechanisms are
regions. A constraint is a piece of evidence that shapes the boundaries of the SPM or
changes the probability distribution over that space (i.e. the probability that some region
of the space describes the actual mechanism). The scientific goal of unifying
neuroscience is realized by the accumulation of constraints from multiple fields on a
common mechanism. The products of different fields of neuroscience are used, like tiles
of a mosaic, to shape the SPM. Thus, mechanists accept that the modeling strategies
from different fields are autonomous to the extent that each one of these fields is
allowed to choose which phenomena to explain, which experimental designs to apply,
which conceptual resources to adopt, and the precise way in which they are constrained
by scientific evidence from adjacent fields (see Piccinini and Craver 2011). In fact, the
ability of scientific fields to provide novel constraints on a mechanism requires their
relative autonomy: “because different fields approach problems from different
perspectives, using different assumptions and techniques, the evidence they provide
makes mechanistic explanations robust” (Craver 2007, p. 231).4
3 The idea of the SPM is the ontic analogue of Railton’s epistemic notion of an “ideally
explanatory text”, conveying all the information relevant to the explanandum.
4 This kind of model pluralism is not mandatory for mechanists. A mechanist may hold that
explanatory texts are adequate if and only if they represent real mechanisms and maintain, at the
same time, that neuroscientific explanations bottom out in some privileged set of entities or
causal relations. I think that Bickle (2006)’s “ruthless reductionism” is a fundamentalist
If one accepts that explanations in neuroscience describe mechanisms (in the
specific sense articulated by the 3M constraint) and that explanations in neuroscience
integrate multiple fields (placing constraints on the SPM), then one has to conclude that
a scientific model in neuroscience contributes to the explanation of an explanandum
phenomenon if, and only if, it places causal/mechanical constraints on the SPM for that
phenomenon. In the next section, I will argue that this conclusion is in tension with the
contribution of optimality models to the explanation of certain properties of neural
systems. The tension between mechanism and optimality reasoning can be resolved if
we abandon the 3M requirement. Optimality models shrink the SPM for some
explanandum phenomena without placing any causal/mechanical constraint on it. The
scientific tradition of optimality reasoning is well represented in neural wiring
optimization models of neural systems in neuroanatomy and systems neuroscience.

3. Optimality
There is a scientific tradition of optimality explanation in neuroscience that has
not been correctly appraised by most philosophers of neuroscience. 5 At first sight, the
most noticeable feature of optimality models is the empirical application of
Optimization Theory, a mathematical technique that is also used outside biology and neuroscience, e.g. in economics or engineering. This technique allows us to determine
what values of some control variable(s) will optimize the value of some design
variable(s) given a set of tradeoffs and constraints. An optimality model specifies a
strategy set (the set of possible strategies), a currency (the design variable(s) to be
optimized) and an optimization criterion (what it means to optimize the design
variables), in order to describe the objective function which connects each member of
the strategy set to values of the design variable(s) to be optimized. The optimal strategy
will be the one that optimizes the criterion in light of the constraints and tradeoffs.
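
Schematically, the bare structure just described can be rendered as a short computation. The following toy sketch (my own illustration, not drawn from any of the models discussed below) enumerates a strategy set, evaluates an objective function that maps each strategy to a value of the currency, and applies a maximization criterion:

```python
# Toy sketch of the structure of an optimality model: a strategy set, a currency
# computed by an objective function, and an optimization criterion. All numbers
# are invented for illustration.

strategy_set = [0.1 * k for k in range(1, 100)]   # candidate values of a control variable

def objective(strategy):
    """Currency: benefit minus cost, with a tradeoff built into the two terms."""
    benefit = strategy ** 0.5      # diminishing returns on the control variable
    cost = 0.25 * strategy         # linear cost (e.g. material or energy)
    return benefit - cost

# Optimization criterion: maximize the currency over the strategy set.
optimal = max(strategy_set, key=objective)
print(f"optimal strategy: {optimal:.1f}")          # 4.0 for this toy tradeoff
```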
Classical examples of optimality explanations in biology are David Lack’s (1947)
mechanist approach of this kind. According to Bickle, higher levels of modeling are merely
heuristic and the real explanation is at the cellular and molecular levels. Therefore, explanatory
patterns up in the mechanistic hierarchy would not be considered of any explanatory value. This
is a minority position, however.
5 A notable exception would be Chirimuuta (2014, p. 127) who mentions that efficient coding
explanations in computational neuroscience “bear interesting similarities with evolutionary and
optimality explanations elsewhere in biology” and represent a counter-example to the 3M
constraint. She endorses, however, a causal/mechanical reading of efficient coding explanations.
I argue that optimality models in neuroanatomy and systems neuroscience do not offer
causal/mechanical explanations, although they contribute to the mosaic unity of mechanistic
explanations.
explanation of the significance of clutch-size and Holling’s (1964) geometric analysis of
the foreleg of a praying mantis. In this section, I will first introduce some examples of
optimality reasoning in neuroscience and then, in the next section, I will offer a
philosophical assessment of their explanatory credentials.
An early example of optimality explanation in neuroscience is Barlow’s (1952)
study of the size of ommatidia in compound eyes. The eyes of arthropods are composed
of units called ommatidia, each pointing in a slightly different direction from its
adjacent units. Furthermore, each ommatidium sees only a spot of light in the direction
of its own axis. Consequently, Barlow’s model of the compound eye “consists of a
number of directionally sensitive elements with a small angle between each of them,
and together covering the required field of view” (Barlow 1952, p. 668). Now imagine
an insect looking at two bright objects. These two point sources will be seen as separate
only if they can each stimulate an ommatidium, leaving an unilluminated one in
between (Barlow 1952, p. 669). This means that the angle α between the point sources must be at least twice the angle between the axes of adjacent ommatidia. Since the angle between adjacent ommatidia is 2d/D radians, where D is the diameter of the hemisphere of the compound eye and d is the diameter of the ommatidia, objects cannot be distinguished unless:

(Eq. 1) α ≥ 4d/D

According to Eq. 1, the smaller d is, the finer the detail the eye can see. However, it is commonly known that “light of wavelength λ passing through an aperture of diameter d cannot form separate images of objects separated by an angle less than 1.2λ/d radians” (McNeill Alexander 1996, p. 31). Therefore, an unstimulated ommatidium between two stimulated ones cannot exclude the light from the point sources unless:

[Eq. 2] (1/2)α ≥ 1.2λ/d

Given these equations, we can calculate the optimum ommatidial size for the eyes of bees, Apis mellifera, which have a compound eye of diameter 2.4 mm, for light of wavelength 0.5 μm. The calculation is displayed in Figure 1. The two lines show the lower limits to α given by Eq. 1 and Eq. 2.

Figure 1. Graph of the angle α between objects that can be resolved by a compound eye of diameter 2.4 mm against ommatidial diameter d. Modified from McNeill Alexander (1996, p. 32).

Barlow (1952, p. 670) asks: “how large should the ommatidia be?” If they are
too small, each one will have a poor resolving power; if they are too big, the angle
between them must be large and the acuity of the eye will be decreased. The optimum
diameter for bee ommatidia, that which would maximize visual detail, is 27 μm, where
the lines in Fig. 1 intersect. Barlow (1952) found that the diameters of bee ommatidia
were actually smaller, 21 μm. However, if we plot ommatidial diameters for different
species of Hymenoptera of different sizes (from 1 mm long to 60 mm long) against the
square root of the heights of the eyes, a significant pattern emerges. Figure 2 shows the
optimum values for a selection of bees and wasps.

Figure 2. Diameter of ommatidia in twenty-seven Hymenoptera. Modified from McNeill
Alexander (1996, p. 32).

The fact that all these insects have ommatidial diameters close to the optima for
their sizes of eye supports the suggestion that “the resolving power of the compound eye
is limited by the diameter of the ommatidium in relation to the wave-length of the
visible light”, which strongly suggests that “the wave structure of light is the limiting
factor in the design of the compound eye” (Barlow 1952, p. 672). In brief, Barlow’s
conclusion is that “[t]he rather theoretical approach which has been presented gives a
satisfactory explanation of the size of the ommatidia in eyes of different sizes” (1952, p.
673).
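
For concreteness, the optimum can be recovered from the two limits directly: the geometric limit (Eq. 1) falls as d shrinks while the diffraction limit (Eq. 2, i.e. α ≥ 2.4λ/d) rises, and the two intersect where 4d/D = 2.4λ/d, that is, at d = √(0.6λD), which for λ = 0.5 μm and D = 2.4 mm gives roughly 27 μm. A minimal numerical sketch of this calculation (my own, using only the quantities quoted above):

```python
# Check of the optimum ommatidial diameter implied by Eq. 1 and Eq. 2.
# Eq. 1 (geometry): alpha >= 4*d/D.  Eq. 2 (diffraction): alpha >= 2.4*lam/d.
# The finest resolvable angle is the larger of the two limits; the best d minimizes it.
import math

lam = 0.5e-6   # wavelength of light in metres (0.5 micrometres)
D = 2.4e-3     # diameter of the bee's compound eye in metres

def resolvable_angle(d):
    return max(4 * d / D, 2.4 * lam / d)

# Closed form: the two limits intersect where 4*d/D = 2.4*lam/d.
d_opt = math.sqrt(0.6 * lam * D)
print(f"optimum ommatidial diameter: {d_opt * 1e6:.1f} micrometres")   # ~26.8, i.e. about 27

# Sanity check: a coarse scan over candidate diameters agrees with the closed form.
candidates = [i * 1e-6 for i in range(5, 100)]
best = min(candidates, key=resolvable_angle)
print(f"best diameter from scan: {best * 1e6:.0f} micrometres")
```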
A more recent example of the tradition of optimality reasoning in neuroscience
is the explanation of the actual placement of components in diverse neural systems in
terms of neural wiring optimization (Cherniak et al. 2002, Kaiser and Hilgetag 2015).
While the transmission of chemical signals through the brain is fast and requires no
extra space or energy, the transmission of signals beyond 1 μm must go “by wire” and
is relatively expensive (Sterling and Laughlin 2015, p. 363). Therefore, as Cajal noticed
more than a century ago, it will not be surprising to find that “all of the various
conformations of the neuron and its components are simply morphological adaptations
governed by laws of conservation for time, space, and material” (Ramón y Cajal 1909;
edited for brevity by Sterling and Laughlin 2015). Ramón y Cajal investigated various
cases where neural design saves wire: axons branch at an acute angle rather than a right
angle, and they also tend to leave the parent neuron at the point nearest to their
destination, saving several hundred kilometers of brain wire (Sterling and Laughlin 2015, p. 363). Ramón y Cajal’s conservation laws, however, did not take into
consideration the metabolic cost of brain wiring. Energy is consumed for establishing
neural connections and for propagating action potentials through these connections
(Laughlin et al. 1998; Laughlin and Sejnowski 2003; Kaiser 2007; Kaiser and Hilgetag
2015). Although the wiring cost may have multiple origins, it is a general fact that the
farther apart two neurons are, the more costly is the connection between them (Chen et
al. 2006). The wiring cost must grow with the distance between connected neurons
(Chklovskii 2004, p. 2067). Several research programs in neuroscience have taken seriously the hypothesis that neural systems are optimized to reduce wiring costs.
If long-range connections in the brain are a highly constrained resource, is the
anatomy of the brain correspondingly optimized? Cherniak and colleagues hold that
combinatorial network optimization theory has developed the mathematical formalisms
required for problems of “saving wire” in a system (Cherniak 1994, 1995, 2009;
Cherniak et al. 2002; Cherniak et al. 2004). In particular, the problem of component
placement optimization (also known as the “quadratic assignment problem”) has been a
research focus for design of large-scale integrated circuits. The problem can be stated as
follows: “Given connections among a set of components, find the spatial layout of the
components that minimizes total connection costs” (Cherniak et al. 2004, p. 1081).
Consider an abstract problem in which components 1, 2 and 3 have to be placed in slots A, B and C. In Figure 3, (3A) represents one member of the set of layouts that exhibit
maximal wire length and (3B) represents one member of the set of layouts that exhibit
minimal wire length.

Figure 3. Component placement optimization. Taken from Cherniak et al. (2004), p. 76.

A more concrete problem would be this: Why is the brain in the head? The
positioning of the entire brain in the body is a one-component placement problem. The brain of vertebrates, and of most invertebrates, makes more anterior sensor-motor connections than posterior ones. Consequently, given the positions of nerve receptors and muscles, combinatorial network optimization theory predicts that the brain should be placed as far forward as possible, as is in fact the case (Cherniak 1995, p. 523). The main
caveat with this scientific approach is that, with n components, the number of alternative layouts is n!; therefore, many of the most important network optimization problems are NP-hard. However, Cherniak and colleagues have
applied this style of reasoning to the problem of the placement of ganglia in the
nematode Caenorhabditis elegans, for which a complete neuroanatomy has been available since the late seventies. The problem of ganglion-level optimization is treated as involving 11 movable components, with 11! possible layouts (39,916,800 alternatives). All these possible placements have been searched exhaustively following a brute-force method and, surprisingly, the actual placement of ganglia in C. elegans turns out to be the one with minimum total wire length (Cherniak 1994, 1995).
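
A minimal sketch of this brute-force procedure, with made-up connectivity and far fewer components than the eleven ganglia (the logic is the same; only the size of the search differs):

```python
# Brute-force component placement optimization: try every assignment of
# components to slots and keep the layout with the smallest total wire length.
# The connectivity and slot positions below are invented for illustration.
from itertools import permutations

slots = [0.0, 1.0, 2.0, 3.0, 4.0]                        # positions along the body axis
connections = [(0, 1), (1, 2), (2, 3), (3, 4), (0, 4)]   # which components are wired together

def total_wire(layout):
    """Total connection length when component c sits at slots[layout[c]]."""
    return sum(abs(slots[layout[a]] - slots[layout[b]]) for a, b in connections)

best = min(permutations(range(len(slots))), key=total_wire)
print("cheapest layout:", best, "wire length:", total_wire(best))
```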
Cherniak’s brute force method cannot be applied to individual neurons.
Chklovskii and his colleagues (Chklovskii 2004, Chen et al. 2006) have extended the
optimality approach to neural wiring and, using updated data sets and more powerful
computational tools, they have solved for the optimal layout of the entire nervous
system of C. elegans, considering its 279 nonpharyngeal neurons. In a first approximation, these researchers built an optimality model for the nervous system layout, the “Dedicated-Wire Model”. It is characterized as a network of nodes (neuronal cell bodies) and wires (synapses), with fixed positions. The total wiring cost (C_tot) is expressed as the sum of an internal cost (C_int) to connect neurons to each other and an external cost (C_ext) to attach neurons to the fixed structures (sensory endings and muscles):

[Eq. 3] C_tot = C_int + C_ext

The cost of wiring the ith and jth neurons is assumed to be proportional to some power, ς, of the distance between them (Chen et al. 2006, p. 4723). Thus, the total internal wiring cost is:

[Eq. 4] C_int = (1/2a) Σ_{i,j=1}^{V} A_ij |x_i − x_j|^ς

where x_i is the position of neuron i and A_ij is an element of the adjacency matrix A, which represents the synapses between neurons i and j.
On the other hand, the external cost of wiring neurons to sensory organs k at positions s_k and muscles l at positions m_l is given by the following equation:

[Eq. 5] C_ext = Σ_{i=1}^{V} Σ_k S_ik |x_i − s_k|^ς + (1/a) Σ_{i=1}^{V} Σ_l M_il |x_i − m_l|^ς

In Eq. 5, S_ik is the number of synapses between neuron i and sensory organ k, and M_il is the number of synapses between neuron i and muscle l (Chen et al. 2006, p. 4724).
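
As a rough illustration of how a cost of this form can be minimized when the exponent is 2 (the quadratic case the authors consider), setting the gradient of the cost to zero yields a linear system in the neuron positions. The sketch below is my own simplification, not the authors' code, and drops the constant prefactors, which do not change the location of the optimum:

```python
# Minimal sketch of minimizing a quadratic ("dedicated-wire") cost analytically.
# Assumed cost: C = 0.5*sum_ij A_ij (x_i - x_j)^2
#                 + 0.5*sum_ik S_ik (x_i - s_k)^2 + 0.5*sum_il M_il (x_i - m_l)^2.
# Setting dC/dx = 0 gives (L + D) x = S s + M m, where L is the graph Laplacian of A
# and D is a diagonal matrix of external connection counts.
import numpy as np

def optimal_layout(A, S, M, s_pos, m_pos):
    L = np.diag(A.sum(axis=1)) - A               # graph Laplacian of the neuron network
    D = np.diag(S.sum(axis=1) + M.sum(axis=1))   # strength of attachment to fixed structures
    b = S @ s_pos + M @ m_pos                    # pull exerted by sensors and muscles
    return np.linalg.solve(L + D, b)             # unique optimum if every part is anchored

# Toy example: three neurons in a chain, anchored to one sensor and one muscle.
A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
S = np.array([[1.], [0.], [0.]])   # neuron 0 contacts the sensor at position 0.0
M = np.array([[0.], [0.], [1.]])   # neuron 2 contacts the muscle at position 1.0
print(optimal_layout(A, S, M, s_pos=np.array([0.0]), m_pos=np.array([1.0])))  # [0.25 0.5 0.75]
```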
Since the quadratic cost function can be minimized analytically, these authors
were able to find the optimal neuronal placement that minimizes the wiring cost
function defined by [Eq. 3-5]. If one plots the predicted positions of individual neurons
as a function of actual positions in C. elegans, most of the neurons in the model lie
along the diagonal of the plot, where actual position and predicted position coincide
with each other. This “reasonable agreement” between predicted and actual layout indicates that the wiring-minimization approach gives “nontrivial results” and is a “meaningful description” of the relationship between connectivity and neuronal layout in C. elegans (Chen et al. 2006, p. 4725). There are some deviations, however, between
positions predicted by the model and actual neuron positions, particularly when it comes
to internal (neuron-to-neuron) connectivity. Chklovskii and colleagues notice that the
“outlier” neurons have common morphological features: most of them are located in the
tail of the worm and have long processes that span more than a quarter of the worm body (Chen et al. 2006, p. 4726). These researchers conclude that it is reasonable to
think that neural networks are not solely optimized for neural wiring; constraints other
than the wiring cost (possibly functional requirements) may be explanatorily relevant
for the deviations.6

6 They introduce these new constraints on a revised model, the “Shared-Wire Model,” that I
will not review here. See Chen et al. (2006).
Cherniak and colleagues have also applied the optimality approach to the
placement of core functional areas of cat cortex and macaque cerebral cortex (Cherniak
et al. 2004). The application of this style of reasoning to the mammalian cortex demands
some simplifications and approximations in the model. Given the impracticality of
measuring long-range wire length in the mammalian brain, these researchers have used
a network wire-minimizing heuristic –the adjacency rule– as an optimality measure.
According to the adjacency rule, if two components a and b are connected, then a and b
are adjacent, i.e. they are immediately contiguous topologically. Furthermore, given that
cortical connection and adjacency information of the mammalian cortex is not complete,
these authors introduced a size law as a working hypothesis: “If a set of connected
components is optimally placed, then the smaller a subset of the total layout, the less
optimal it will tend to be” (Cherniak et al. 2004, p. 1083). The rationale behind the size
law is that local subsystem optimality is usually sacrificed to obtain global optimality.
The most striking finding of this approach to the placement of components of the
mammalian cortex is that optimality improves exponentially with the size of the subset of visual areas. When a subset of 20 areas was considered, only three layouts of a billion
sampled were cheaper than the actual one. For a 25-area subset, “a billion-layout
random sample yielded no placements cheaper than the actual one” (Cherniak et al.
2004, p. 1084), which suggests that mammals may have evolved “the best of all
possible brains” considering wiring costs (Cherniak et al. 2004, p. 1084).
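
The sampling test behind these figures can be sketched in a few lines: draw random permutations of the components over the available slots and count how many are cheaper than the actual layout. The connectivity and distances below are invented placeholders, not Cherniak's data:

```python
# Random-sample test of layout optimality: how many random placements of a set of
# connected areas are cheaper (in total wire) than the actual one? Data are made up.
import numpy as np

rng = np.random.default_rng(0)
n_areas = 10
A = (rng.random((n_areas, n_areas)) < 0.3).astype(float)   # hypothetical connectivity
A = np.triu(A, 1) + np.triu(A, 1).T                        # symmetric, no self-connections
D = rng.random((n_areas, n_areas))                         # hypothetical slot-to-slot distances
D = (D + D.T) / 2
np.fill_diagonal(D, 0.0)

def wire_cost(assignment):
    """Total wire length when area i occupies slot assignment[i]."""
    return 0.5 * np.sum(A * D[np.ix_(assignment, assignment)])

actual_cost = wire_cost(np.arange(n_areas))     # treat the identity assignment as "actual"
n_samples = 100_000
cheaper = sum(wire_cost(rng.permutation(n_areas)) < actual_cost for _ in range(n_samples))
print(f"{cheaper} of {n_samples} sampled layouts are cheaper than the actual one")
```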
Revisiting the structural connectivity and neuronal layout of C. elegans and the macaque cerebral cortex, Kaiser and colleagues (Kaiser and Hilgetag 2006, 2015;
Chen et al. 2013) have argued that the reduction of path lengths appears at least as
important as the minimization of wire length. Long-distance connections are
metabolically expensive, but they have the benefit of reducing the number of
intermediate transmission steps in neural pathways (Kaiser and Hilgetag 2015, p. 3175).
The benefits for processing efficiency of adding long-distance projections might
outweigh the wiring costs of establishing those additional connections (Kaiser and
Hilgetag 2006). Recently, Chen et al. (2013) have explored an objective function
combining the wiring cost and processing efficiency constraints and they have
demonstrated that “several network features are related to the competition of these two
constraints,” in particular the formation of network modules (to save wiring) and hubs
(to reduce path lengths).
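
A minimal sketch of how such a combined objective can be scored for a given network, with wiring cost measured as total connection length and processing efficiency proxied by the average shortest path in hops (the graph, positions and weighting are invented for illustration; Chen et al. use their own formulation):

```python
# Score a network on the two competing constraints: total wiring cost and
# average shortest path length (a simple proxy for processing efficiency).
import numpy as np

def wiring_cost(adj, pos):
    i, j = np.triu_indices_from(adj, k=1)
    return float(np.sum(adj[i, j] * np.abs(pos[i] - pos[j])))

def avg_path_length(adj):
    n = len(adj)
    dist = np.where(adj > 0, 1.0, np.inf)
    np.fill_diagonal(dist, 0.0)
    for k in range(n):                                   # Floyd-Warshall, counting hops
        dist = np.minimum(dist, dist[:, [k]] + dist[[k], :])
    return float(dist[np.triu_indices(n, k=1)].mean())

def combined_objective(adj, pos, w=0.5):
    """Lower is better: a weighted sum of wiring cost and average path length."""
    return w * wiring_cost(adj, pos) + (1 - w) * avg_path_length(adj)

# Toy comparison: a ring of six nodes versus the same ring plus one long-range shortcut.
pos = np.arange(6, dtype=float)
ring = np.zeros((6, 6))
for a in range(6):
    ring[a, (a + 1) % 6] = ring[(a + 1) % 6, a] = 1.0
shortcut = ring.copy()
shortcut[0, 3] = shortcut[3, 0] = 1.0
print(combined_objective(ring, pos), combined_objective(shortcut, pos))
```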

The case study of neural wiring optimization models in neuroanatomy and
systems neuroscience is an example of optimality reasoning in neuroscience. There are
examples of optimality reasoning in neuroscience other than wiring optimization models
in neuroanatomy, of course. For example, Harris and Wolpert (2013) propose that
saccade trajectories follow a stereotyped sequence because signal-dependent noise imposes a compromise between the speed and the accuracy of an eye movement, and that the observed stereotyped sequence optimizes a tradeoff between the accuracy and duration of the movement. I think that this kind of optimality model contributes to the explanation of some properties of neural systems. It could be argued that they place constraints on the SPM: not every placement of ganglia in the nematode is equally probable. However, the constraints that optimality models place on the SPM are not
causal/mechanical. The tradeoff between wiring costs and processing efficiency, for
example, is not a component part, activity or organizational feature of the target
mechanism in the mammalian cortex. That tradeoff is, in a sense, more “abstract” than
the causal or componential constraints placed by more mechanistic models. In the next
section, I will show why optimality models do not offer causal/mechanical explanations
and I will characterize the distinctness of optimality explanations in neuroscience.

4. Design
What are optimality models and how do they explain? These are thorny
questions that have been the focus of considerable interest in the recent philosophical
literature.7 In this section, I argue that optimality models offer what Wouters (1999,
2007) characterizes as “design explanations”. They are not causal but appeal to
functional dependence relations between the character of an organism’s trait or behavior, on the one hand, and other traits of the organism and aspects of its environment, on the other. Since the constraints and tradeoffs mathematically represented in the model are not component parts, activities, or organizational features of the relevant
mechanism, optimality models do not count as explanatory according to the 3M
constraint. However, design explanations place constraints on the space of possible
mechanisms for the target system, so they contribute to the explanation of that system
according to the mosaic unity of neuroscience.

7 Potochnik (2007), Rice (2012, 2013) and Irvine (2014) discuss optimality explanation in
biology. Lyon (2012), Saatsi (2012), Lange (2013), Baron (2013) and Pincock (2015), among
others, discuss the distinctively mathematical character of optimality explanations.
Optimality as Equilibrium Explanations
Rice (2012, 2013) offers one of the most influential analyses of optimality
modeling in evolutionary biology. He endorses the idea that optimality models have “a
permanent explanatory role” in biology (Rice 2012, p. 686). But how do optimality
models explain? Many philosophers of biology adopt what Rice calls a censored causal
model approach to optimality explanation. A censored causal explanation is “an
explanation that purposely omits certain causal factors in order to focus on a modular
part of the causal process that led to the explanandum” (ibid). In particular, optimality
models are interpreted as causal explanations that omit certain parts of the causal
process that produce an evolutionary outcome (i.e., the genetic causes) to emphasize
how natural selection shaped the trait in question. Endorsing the censored causal
approach, Potochnik (2007, p. 688) asserts that “an optimality model highlights a
modular part of the causal process that grounds certain phenotypic generalizations”.
Optimality models are genuinely explanatory in that they capture a particular subset of
the causal factors that lead to the explanandum phenomenon (typically, natural
selection).
In contrast, Rice considers that our conception of explanation in evolutionary
biology must move beyond the causal approach. Optimality models, he thinks, do not
represent a modular part of the causal evolutionary process. The reason is that, for Rice,
optimality models are a species of equilibrium explanation, particularly one in which
the equilibrium point of the system is provided by utilizing optimization theory. Sober
(1983, p. 202) claims that

(…) where causal explanation shows how the event to be explained was in fact produced, equilibrium explanation shows how the event would have occurred regardless of which of a variety of causal scenarios actually transpired.

On Rice’s (2012, p. 698) construal of optimality models, “[t]he optimal strategy is an equilibrium point that will be arrived at regardless of the step-by-step dynamics of the system”, so what optimality models ignore is not a subset of the causal process usually
included in more dynamical models, but rather a particular type of information, namely
“all information about the step-by-step dynamics of the evolving system”.
If optimality explanations in biology are not causal, how do they explain?
According to Rice, optimality explanations rely heavily, instead, on the synchronic

representation of mathematical non-causal relationships between population-level
constraints and tradeoffs and the system’s equilibrium state. Tradeoffs are not causal
relations or processes between variables that unfold over time in the system. Tradeoffs
are non-causal counterfactual dependences between population-level variables (Rice
2012, p. 699). A model which adequately presented the actual initial conditions or the
actual causal trajectories of the target system would not convey the explanatory
information that the optimality model provides, that is, that

(…) the initial conditions and causal trajectory of the target system are not important for
understanding why the target explanandum occurred because several different causal
histories would have led to the same outcome. (Rice 2013, p. 10)

In short, optimality models aim to show how some constraints and tradeoffs guarantee
the evolution of an equilibrium state. Rice’s non-causal analysis of optimality explanation is incompatible with the 3M requirement. For any mechanist about evolutionary explanation (in fact, there are a few; see Barros 2008; Fulda 2015), Rice’s conception of optimality explanation represents a robust alternative.
I agree with Rice that the tradeoffs and constraints represented in optimality
models (both in evolutionary biology and neuroscience) are not causal components of
the target mechanism. Consider the Dedicated-Wire Model of the C. Elegans neural
system. The Dedicated-Wire Model represents the total wiring cost of the nervous
system as the sum of an internal cost to connect neurons to each other and an external
cost to attach neurons to sensory endings and muscles. The total wiring cost acts a
design variable on the placement of nonpharyngeal neurons in the worm. The wiring
cost is not, however, a component part, activity or organizational feature of the
mechanism for neuronal layout. As Rice (2012, p. 699) notices, the mathematical
representation of tradeoffs among the variables within the model is doing “the heavy
lifting” in most optimality explanations. From a wider perspective, tradeoffs are not
themselves the kind of entities that can enter into causal (or componential) relationships:
they are not events nor are they causal properties. Therefore, optimality models do not
satisfy the 3M requirement on explanation. They are explanatory, nonetheless.
The distinctness of optimality explanation cannot be captured, however, by the
concept of equilibrium explanation. Consider Sober’s (1983) example of equilibrium

explanation, i.e. Fisher’s (1931) explanation of the fact that the sex ratio is 1:1 in many
species. The insight of Fisher’s explanation is that

(…) if a population ever departs from equal numbers of males and females, there will be
a reproductive advantage favoring parental pairs that overproduce the minority sex. A
1:1 ratio will be the resulting equilibrium point. (Sober 1983, p. 201)

Sober claims that equilibrium explanations such as Fisher’s are made possible by
theories that describe the dynamics of systems in certain ways. In particular, Fisher’s
explanation is elaborated in the context of population genetics. Population genetics is
conceived as a dynamical theory that allows us to determine the impact of diverse
evolutionary forces in shaping the evolution of allele frequencies in a population. 8 Rice
(2012, p. 687) thinks that optimality explanations, just like Fisher’s explanation of sex ratio, represent the evolution of a particular phenotype assuming that “the strategy that optimizes the criterion of the model is the equilibrium point of the evolving
population”. The optimization criterion of optimality models in neuroscience, however,
need not be a measured proxy of fitness, and it is not required to assume that the
constraints and tradeoffs of the model are involved in the natural selection of the trait.
Consider the optimality explanation of the placement of ganglia of C. elegans.
According to Cherniak (1995), it is an open question whether the placement of
components in the worm’s brain is due to phylogenetic or ontogenetic processes (or
both). In fact, he advocates a “non-genomic” nativist interpretation of such wiring
optimization findings, in which instances of optimized neuroanatomy are derivable “for
free, directly from physics”, i.e. purely from simple physical energy minimization
processes (Cherniak 2009, p. 108). Some philosophers and cognitive scientists even
think that Cherniak’s findings mean that Darwin has got something terribly wrong about
evolution and represent a return of the “Laws of Form”, i.e. constraints “from above”
that “exceed the boundaries of biology are necessarily quite abstract” (Fodor and Piattelli-Palmarini 2010, p. 72). In a similar vein, Chomsky (2005, p. 5) considers that Cherniak’s findings have roots in the pre-Darwinian scientific tradition of “rational morphology” and point to principles that are organism-independent. Furthermore, if
optimality explanations in neuroscience were representing the optimal strategy as the
equilibrium point of the evolving population, then the standard idealizations of
8 Hardy-Weinberg equilibrium would be the zero-force law of population genetics (Sober
1984). See Brandon and McShea (2010) for a dissenting opinion.
population genetics would be valid for optimality models in neuroscience too, e.g., that
the population size is infinite (to eliminate drift), that organisms mate randomly or that
strategies are inherited perfectly by offspring. None of the models reviewed in section 3
assumes these idealizations.9

Optimality as Program Explanations


Another influential analysis of the explanatory status of optimality models in
science is Lyon’s (2012) idea that most optimality explanations are a kind of program
explanation in which certain mathematical properties or entities play the programming
role (see also Lyon and Colyvan 2008, Baron 2013, Pincock 2015). Lyon claims that
optimality explanations belong to a distinct kind of non-causal mathematical
explanation of empirical facts –in particular, a kind of explanation in which some
mathematical properties or entities “program” certain empirical facts. Lyon’s preferred
example of optimality explanation is Hales’ explanation of the structure of bee
honeycombs. Bees build their honeycombs out of hexagonal cells. “Why the
honeycomb is always divided up into hexagons and not some other polygons (such as
triangles or squares), or any combination of different (concave or convex) polygons?”
(Lyon and Colyvan 2008, p. 499). The explanation relies on the fact that wax is an
expensive resource for bees and on the following mathematical fact, The Honeycomb
Theorem: “A hexagonal grid is the most efficient way to divide a Euclidean plane into
regions of equal area with least total perimeter” (Hales 2001, p. 4).10 A program
explanation is one that cites a property or entity that, although not causally efficacious,
somehow ensures the instantiation of some causally efficacious properties or entities
that actually produce the explanandum, just like a computer program “ensures that
certain things will happen –things satisfying certain conditions– though all the work of

9 Here is another way to see that optimality models in neuroscience do not offer equilibrium
explanations. In evolutionary biology, the optimization criterion is usually conceived as a proxy
of fitness (or inclusive fitness). However, there is also a reverse-engineering sense of optimality
in which a design is said to be optimal “if it complies with its functional requirements as well as
possible” (Vilarroya 2002, p. 251). Optimality models in neuroscience can be seen as
embodying this reverse-engineering sense of optimality (Dennett 1995), i.e. as attempting to
analyze “an already existing intelligent artifact or system in terms of the design considerations
that must have governed its creation” (Vilarroya 2002, p. 251).
10 Lange (2013, p. 500) thinks that this is an ordinary causal explanation, one that describes
“the relevant features of the selection pressures that have historically been felt by honeybees”.
We have seen that this censored causal approach is inapposite for optimality explanation: the
constraints and tradeoffs of optimality models do not represent causal relations or components
within the target mechanism, and an optimality model itself does not represent the step-by-step
dynamics of an evolving system.
producing those things goes on at a lower, mechanical level” (Jackson and Pettit 1990,
p. 114). Thus, holding certain environmental conditions fixed, the mathematical fact expressed
by The Honeycomb Theorem “programs” hexagonal honeycombs, no matter the actual
sequence of shapes that the bees tried out in their evolutionary history (Lyon 2012).
Similarly, one could argue that the mathematical tradeoff between wiring minimization
and the reduction of path lengths “programs” a neural network architecture of modules
and hubs, no matter the mechanisms that actually produce such an outcome (Kaiser and Hilgetag 2006). A distinctive feature of higher-level program properties is that they are multiply realizable by a variety of lower-level causally efficacious mechanisms.
Attractive as it is, I think that Lyon’s idea is mistaken: optimality explanations (in
neuroscience) are not program explanations.
Saatsi (2012) has correctly articulated the most pressing problem of Lyon’s proposal. On the one hand, programming properties are clearly higher-level causal properties. Although they are not the locus of causal efficacy, they must be causally relevant for the production of the explanandum phenomenon, that is, they must somehow ensure the instantiation of a relevant “realization basis” in connection with the explanandum. On the other hand, mathematical properties are abstract entities, and it is not an easy task to see how they could necessitate any causally efficacious property at all.
“The crucial question”, Saatsi (2012, p. 582) claims, is this: “how could a mathematical
property ever thus necessitate the instantiation of this or that causally efficacious
property somehow ‘realizing’ it?” There seems to be no logical, metaphysical or natural
connection between mathematical properties and higher-order causal properties, so it is
obscure how mathematical properties could work as higher-level causally relevant
properties in a program explanation. The solution, I think, is to abandon the idea that
mathematics in optimality models is explanatory per se. Mathematics may be playing,
instead, a representational role in optimality explanations: it may be indispensable to
express or represent “non-mathematical facts which themselves do all the explanatory
heavy lifting” (Saatsi 2012, p. 579).
Might the non-mathematical facts represented in optimality models program the
explanandum phenomenon anyway? In that case, optimality models would offer
program explanations without offering, at the same time, distinctively mathematical
program explanations. I think that the constraints and tradeoffs between design variables
that appear in optimality models are not programming properties. Consider one
paradigmatic example of program explanation, i.e. Putnam’s (1975) explanation of why

a square (one inch high) peg failed to pass through a round hole (one inch around) in a
board. One possible explanation of this situation could track the micro-constituents of
the peg and the hole. The explanation preferred by Putnam (1975), however, is one that
invokes “abstract” geometrical features of the situation, i.e. the relations between
the size and shape of the elements involved in the situation. However, the size and shape
of the peg and the hole are, of course, physical properties of those elements (Klein
2013). If the peg-and-board system were considered as part of a mechanism, the size and shape of its components would be spatial organizational features of the mechanism (Craver 2007). In contrast, if the placement of ganglia in C. elegans is considered as a mechanism, the tradeoffs between design variables such as wire length and connectivity would not be a spatial organizational feature of the mechanism. In fact, such tradeoffs would explain why the system exhibits the spatial organization it does. They are, as it were, “design ideals” that non-causally constrain the SPM in which the actual
mechanism is embedded.

Optimality as Design Explanations


I think that Wouters’ (1999, 2007) approach to optimality explanation as a
variety of design explanation gets us back on the right track. Design explanations are
usually brought up in answer to explanatory demands in functional biology, i.e. that part
of biology that is concerned with the way individual organisms are built (e.g. anatomy,
morphology), the way they work (e.g. physiology) and the way they behave (e.g.
ethology). The basic idea is that design explanations purport to explain “why certain
organisms have certain traits by showing that their actual design is better than
contrasting designs” (Wouters 2007, p. 65). The hallmark of design explanations is their
concern with the utility of a certain trait, often in comparison with merely hypothetical
alternatives. In the rest of this paper, I will show how the examples of optimality
modeling in neuroscience fit the basic structure of design explanations and why they
represent a contribution to the mosaic unity of neuroscience.
According to Wouters (1999, p. 222) design explanations are answers to
questions of the following form:

[Design Explanandum]
[Q] “Why do s-organisms have / perform t1 rather than t2, t3, tn?”

Where s is a set of organisms (that might be taxonomically heterogeneous), t1 is the trait
in question (i.e. the presence or character of a certain item or behavior) of s-organisms
and t2, t3, tn are the alternative traits. Design explanations are explicitly or implicitly
contrastive: they compare real organisms to hypothetical organisms that may not possibly exist. Questions like [Q] are not questions that ask for causes at the level of individual organisms. Nor are they questions that ask for evolutionary causes at the level of the population. They are questions that ask instead for the utility of a
trait, in terms of what is needed or useful to stay alive, i.e. to maintain the organism, to
grow, to develop and to produce offspring. The core of an answer to a [Q]-question has
the following structure (Wouters 1999, p. 223):

[Design Explanans]
[C] s-organisms live in condition cu.
[U] In condition cu trait t1 is more useful than trait t2, t3, tn.

Where cu is a conjunction of one or more conditions of organisms and/or environment in


which organisms live. Statements like [C] specify conditions that apply to the relevant
organisms, and statements like [U] claim that, due to those conditions, the trait in
question is more useful to the s-organisms than the alternative traits. The utility claim
usually takes the form of a claim about the utility of performing a certain causal role 11 in
a certain way. An example of this kind of explanation in morphology is Schwenk’s
(1994) explanation of why snakes have a forked tongue (Wouters 1999, p. 229):

[Q] Why do snakes have a forked tongue rather than a blunt one?
[C1] The tongue of snakes has a causal role in trail following.
[C2] Snakes follow trails by comparing chemical stimuli simultaneously sampled
at two sides.
[U] In order to sample chemical stimuli simultaneously at two sides it is more
useful to have a forked tongue than a blunt one.

Claims about utilities may vary in strength. Simple design explanations can be
classified either as optimality or as requirement explanations depending on whether the

11 Causal role or “function” in the sense that systemic theories of function give to the term
(Cummins 1975, 1983; Craver 2001; see Wouters 2003, 2005).
utility claim is an optimality claim or a requirement claim. A requirement claim has the
following form: “in condition cu trait t1 is the only one that is useful among the
following traits: t1, t2, t3, tn”. Requirements claims assert that the trait in question is the
only one in the reference class that works. Many design explanations derive
requirements directly from the laws of physics and chemistry. In contrast, an optimality
claim is relatively weaker and has the following form: “in condition cu trait t1 is more
useful than each of the following traits: t2, t3, tn”. Optimality claims assert that the trait
in question is the best one in the reference class. Optimality explanations explain why
the optimality claim holds: they “point out that the organisms in question would have
certain disadvantages in the conditions in which they live if the trait in question were
replaced by an alternative” (Wouters 1999, p. 235). I think that these ideas can easily
accommodate the examples of optimality explanation in neuroscience reviewed in
section 3. Consider Cherniak’s (1995) explanation of why the human brain is in the
head12:

[Q] Why is the human brain in the head (rather than in some other place in the body)?
[C1] Neural wiring is metabolically expensive.
[C2] The metabolic cost of neural wiring increases with distance.
[C3] The human brain makes more anterior than posterior sensor/motor
connections.13
[U] In conditions C1–C3, placing the brain in the head (i.e. as far forward as
possible) is more useful than each of the alternative placements.

Mathematically, this explanation is represented as a one-component placement problem. But the use of mathematical representations does not imply that this is a distinctively mathematical explanation of empirical facts. Rather, it is a design explanation that connects a spatial organizational feature of a mechanism within an organism with certain functional dependence relations between that mechanism, other parts of the organism and environmental factors.

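To see the shape of the placement problem, here is a minimal sketch in Python. The positions and fiber counts are stipulated for illustration only (they are not Cherniak's data); the sketch simply assumes, with [C2], that wiring cost grows linearly with distance and, with [C3], that there are more anterior than posterior connections, and then asks which position on an idealized one-dimensional body axis minimizes total wire length.

# One-component placement, toy version: where on a 1-D body axis (0.0 = head,
# 1.0 = tail) should the brain sit so that total connection length is minimal?
# All numbers below are invented for illustration.
anterior_sites = [0.0, 0.1, 0.2]        # front-of-body connection sites
posterior_sites = [0.6, 0.8, 1.0]       # rear-of-body connection sites
anterior_fibers = 10                    # [C3]: more fibers run forward...
posterior_fibers = 3                    # ...than to the rear

def wiring_cost(x):
    # [C1]-[C2]: cost is proportional to total fiber length
    front = sum(anterior_fibers * abs(x - s) for s in anterior_sites)
    rear = sum(posterior_fibers * abs(x - s) for s in posterior_sites)
    return front + rear

candidates = [i / 10 for i in range(11)]
best = min(candidates, key=wiring_cost)
print(best)  # 0.1: the optimum lies among the anterior sites, i.e. toward the head

With these stipulated numbers the cheapest placement falls among the anterior connection sites; any rearward shift adds more anterior wire than it saves in posterior wire, which is the intuition behind the [U] claim.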

A similar reconstruction is available for Cherniak's (1994) explanation of the placement of ganglia in C. elegans:
[Q] Why are the ganglia in C. elegans' nervous system located as they are?
[C1] Neural wiring is metabolically expensive.
[C2] The metabolic cost of neural wiring increases with distance.
[C3] The actual interconnections between ganglia are [such and such].
[U] In conditions C1-C3, the worm's actual layout requires the least total length of connecting fiber of any of the millions of possible layouts.14

14 See Cherniak et al. (2002), p. 74.

Validating the optimality claim that the worm's actual layout is optimal requires a great deal of mathematical reasoning and computational simulation (a toy version of such a layout search is sketched below), but those calculations are not essential to the explanation. What is essential to the explanation is that the model represents relevant functional dependence relations at the individual level. In this case, the model picks out the relation between the actual layout of ganglia in the worm, the connectivity of those elements and the metabolic constraints on wire length.
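The following sketch, again in Python and again with invented numbers (a made-up four-component connection matrix, not the actual C. elegans wiring data), shows the kind of exhaustive search such a validation involves: every assignment of components to body positions is scored by total fiber length, and the cheapest layout is reported. Cherniak's result is that the worm's actual ganglion layout wins this sort of competition against all alternative layouts.

# Toy exhaustive component-placement search in the spirit of Cherniak's
# ganglion-layout analysis; connection counts and positions are invented.
from itertools import permutations

components = ["A", "B", "C", "D"]
slots = [0.0, 1.0, 2.0, 3.0]              # candidate positions along the body
connections = {                           # fiber counts between component pairs
    ("A", "B"): 5, ("A", "C"): 1, ("A", "D"): 1,
    ("B", "C"): 4, ("B", "D"): 1, ("C", "D"): 6,
}

def total_wire_length(layout):
    # Sum of fiber-count-weighted distances for a given component-to-slot layout
    position = dict(zip(layout, slots))
    return sum(n * abs(position[a] - position[b])
               for (a, b), n in connections.items())

best = min(permutations(components), key=total_wire_length)
print(best, total_wire_length(best))

With four components there are only 24 layouts; for the worm's ganglia the space runs into the millions, which is why the actual validation requires serious computation, even though, as noted above, those calculations are not what carries the explanatory weight.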
These two examples suffice to show that optimality reasoning in neuroscience
fits the schema of design explanation. While a causal/mechanical explanation would
explain how a certain capacity is brought about by specifying an underlying mechanism
(or how a higher-level property would program the underlying mechanism), a design
explanation would “point out why that biological role is performed the way it is, rather
than in some other conceivable way” (Wouters 2007, p. 67). Design explanations are not
causal/mechanical explanations (i.e. they do not satisfy the 3M requirement): the
functional dependence relations they exhibit are not mechanistically mediated causes of
the trait in question, but relate the trait in question to other traits of the organism and the
state of the environment “in terms of what is useful for the organism to have” (Wouters
1999, p. 238). This does not mean that design explanations are not objective, or that they "lead outside the multilevel mechanistic framework" (Boone and Piccinini 2015). Of course, the stability of a mechanism critically depends on the spatial, temporal and active organization of its concrete parts and activities. Design
explanations (optimality explanations included) explain why the target mechanism’s
parts and activities are organized as they are on the basis of the utility of such
mechanistic features given the design constraints imposed by other features of the
organism and the environment. The kinds of functional dependence relations that optimality models pick out determine which constructs are stable and, insofar as these design constraints shrink the SPM of the systems targeted for explanation, they contribute to the mosaic unity of neuroscience.

5. Conclusions
The new Mechanists hold that explanations in neuroscience describe
mechanisms and integrate multiple fields. The first idea is articulated in terms of the 3M
requirement on explanation. The second idea is articulated in terms of the mosaic unity
of neuroscience. Together they imply that a model contributes to the mosaic unity of
neuroscience to the extent that it sets causal/mechanical constraints on the SPM for the
target system. I have reviewed an influential tradition of optimality explanation in
neuroscience, i.e. neural wiring optimization models in neuroanatomy. These models do
not offer causal explanations of their explananda. The particular non-causal explanations that optimality models offer cannot be understood in terms of equilibrium explanations or program explanations. Optimality models can, however, easily be interpreted as design
explanations. Design explanations explain why an organism has a particular trait or
behavior by reference to the utility of that trait or behavior in the light of objective
functional interdependencies between such a trait or behavior, other traits or behaviors
of the organism and environmental factors. These interdependencies are not causal, but they set non-causal constraints on the SPM of the relevant mechanism. Therefore, the
distinctness of optimality explanation is compatible with the mosaic unity of
neuroscience. The 3M constraint, however, should be abandoned.

References
Barlow, H. B. (1952) “The size of ommatidia in apposition eyes”, Journal of
Experimental Biology, 29, pp. 667-674.
Baron, S. (2013) “Optimisation and mathematical explanation: doing the Lévy Walk”,
Synthese, 3(3), pp. 1-21.
Barros, B. (2008) “Natural selection as a mechanism”, Philosophy of Science, 75(3), pp.
306-322.
Bechtel, W. (2008) Mental Mechanisms: Philosophical perspectives on cognitive
neuroscience, New York: Routledge.
Bickle, J. (2006) “Reducing mind to molecular pathways: Explicating the reductionism
implicit in current cellular and molecular neuroscience”, Synthese, 151, pp. 411-434.
Boone, W. and Piccinini, G. (2015) “The cognitive neuroscience revolution”, Synthese,
online first.
Brandon, R. and McShea, D. (2010) Biology’s First Law, Chicago, University of
Chicago Press.

Ramón y Cajal, S. (1909) Histology of the Nervous System of Man and Vertebrates,
Oxford, Oxford University Press.
Chen, B., Hall, D., Chklovskii, D. (2006) “Wiring optimization can relate neuronal
structure and function”, Proceedings of the National Academy of Sciences, 103(12),
pp. 4723-4728.
Chen, Y., Wang, S., Hilgetag, C. C. and Zhou, C. (2013) "Trade-off between multiple
constraints enables simultaneous formation of modules and hubs in neural systems",
PLoS Computational Biology, 9: e1002937.
Cherniak, C. (1994) “Component placement optimization in the brain”, Journal of
Neuroscience, 14, pp. 2418-2427.
Cherniak, C. (1995) "Neural component placement", Trends in Neurosciences, 18, pp.
522-527.
Cherniak, C. (2009) “Neuroanatomy and cosmology” in J. Bickle (ed.) Oxford
Handbook of Philosophy and Neuroscience, Oxford, Oxford University Press, pp.
370-378.
Cherniak, C., Mokhtarzada, Z. and Nodelman, U. (2002) “Optimal-wiring models of
neuroanatomy”, in G. Ascoli (ed.) Computational Neuroanatomy: Principles and
methods, Totowa, Humana Press.
Cherniak, C., Mokhtarzada, Z., Rodríguez-Esteban, R., Changizi, K. (2004) “Global
optimization of cerebral cortex layout”, Proceedings of the National Academy of
Sciences, 101(4), pp. 1081-1086.
Chirimuuta, M. (2014) “Minimal Models and Canonical Neural Computations: The
Distinctness of Computational Explanation in Neuroscience”, Synthese, 191, pp. 127-
153.
Churchland, P. M. (1989) A neurocomputational perspective: The nature of mind and
the structure of science, Cambridge: MIT Press.
Chklovskii, D. (2004) “Exact Solution for the Optimal Neuronal Layout Problem”,
Neural Computation, 16, pp. 2067-2078.
Chomsky, N. (2005) “Three factors in language design”, Linguistic Inquiry, 36, pp. 1-
22.
Craver, C. (2001) “Role functions, mechanisms and hierarchy”, Philosophy of Science,
68(1), pp. 53-74.
Craver, C. (2007) Explaining the Brain: Mechanisms and the mosaic unity of
neuroscience, Oxford: Clarendon.
Craver, C. (2015) "The ontic account of scientific explanation", in M. Kaiser et al.
(eds.) Explanation in the Special Sciences: The case of biology and history,
Dordrecht: Springer.
Craver, C. and Kaplan, D. (2011) "Towards a Mechanistic Philosophy of Neuroscience",
in S. French and J. Saatsi (eds.) The Continuum Companion to the Philosophy of
Science, London: Continuum, pp. 268-292.
Cummins, R. (1975) "Functional Analysis", The Journal of Philosophy, 72(20), pp.
741-765.
Cummins, R. (1983) The Nature of Psychological Explanation, Cambridge, MIT Press.
Fisher, R. (1931) The Genetical Theory of Natural Selection, New York, Dover.
Fodor, J. and Piattelli-Palmarini, M. (2010) What Darwin got Wrong, Farrar, Straus and
Giroux.
Fulda, F. (2015) "A mechanistic framework for Darwinism or why Fodor's objection
fails", Synthese, 192(1), pp. 163-183.
Glennan, S. (2002) “Rethinking mechanistic explanation”, Philosophy of Science, 69,
pp. S342-S353.

Glennan, S. (2010) “Mechanisms, causes, and the layered model of the world”,
Philosophy and Phenomenological Research, 81, pp. 362-381.
Hales, T. (2001) “The Honeycomb Conjecture”, Discrete and Computational Geometry,
25 (1), pp. 1–22.
Harris, C. and Wolpert, D. (2013) “The main sequence of saccades optimizes speed-
accuracy trade-off”, Biological Cybernetics, 95(1), pp. 25-29.
Holling, C. S. (1964) “The Analysis of Complex Population Processes”, The Canadian
Entomologist, 96, pp. 335-347.
Illari, P. (2013) “Mechanistic explanation: Integrating the Ontic and Epistemic”,
Erkenntnis, 78(2), pp. 237-255.
Irvine, E. (2014) “Models, robustness and non-causal explanation: a foray into cognitive
science and biology”, Synthese, online first.
Jackson, F. and Pettit, P. (1990) “Program explanation: a general perspective”, Analysis,
50(2), pp. 107-117.
Kaiser, M. (2007) “Brain architecture: a design for natural computation”, Philosophical
Transactions of the Royal Society, 365, pp. 3033-3045.
Kaiser, M. and Hilgetag, C. (2006) "Nonoptimal component placement, but short
processing paths, due to long-distance projections in neural systems”, PLoS
Computational Biology 2:e95.
Kaiser, M. and Hilgetag, C. (2015) “Wiring Principles, Optimization”, in D. Jaeger and
R. Jung (eds.) Encyclopedia of Computational Neuroscience, New York, Springer,
pp. 3172-3177.
Kaplan, D. (2011) “Explanation and Description in Computational Neuroscience”,
Synthese, 183(3), pp. 339-373.
Kaplan, D. and Craver, C. (2011) “The Explanatory Force of Dynamical and
Mathematical Models in Neuroscience: A Mechanistic Perspective”, Philosophy of
Science, 78(4), pp. 601-627.
Klein, C. (2013) “Multiple Realizability and the Semantic View of Theories”,
Philosophical Studies 163(3), pp. 683-695.
Lack, D. (1947) “The significance of clutch size”, Ibis, 89, pp. 302-352.
Lange, M. (2013) “What makes a scientific explanation distinctively mathematical?”,
British Journal for the Philosophy of Science, 64, pp. 485-511.
Laughlin, S. and Sejnowski, T. (2003) "Communication in neuronal networks", Science,
301, pp. 1870-1874.
Laughlin, S., de Ruyter Van Steveninck, R., Anderson, J. (1998) “The metabolic cost of
neural information”, Nature Neuroscience, 1, pp. 36-41.
Levy, A. (2013) "What was Hodgkin and Huxley's Achievement?", British Journal for
the Philosophy of Science, 65, pp. 469-492.
Lyon, A. (2012) “Mathematical explanations of empirical facts, and mathematical
realism”, Australasian Journal of Philosophy, 90(3), pp. 559-578.
Lyon, A. and Colyvan, M. (2008) “The explanatory power of phase spaces”,
Philosophia Mathematica, 16(2), pp. 227-243.
Machamer, P., Darden, L., Craver, C. (2000) “Thinking about mechanisms”, Philosophy
of Science, 67(1), pp. 1-25.
McNeill Alexander, R. (1996) Optima for Animals, Princeton, Princeton University
Press.
Piccinini, G. (2007) “Computing Mechanisms”, Philosophy of Science, 74, pp. 501-526.
Piccinini, G. and Craver, C. (2011) “Integrating psychology and neuroscience:
Functional analyses as mechanism sketches”, Synthese, 183(3), pp. 283-311.

Pincock, C. (2015) "Abstract Explanations in Science", The British Journal for the
Philosophy of Science, online first.
Potochnik, A. (2007) "Optimality modeling and explanatory generality", Philosophy of
Science, 74(5), pp. 680-691.
Putnam, H. (1975) "Philosophy and our mental life", in Mind, Language, and Reality,
London, Cambridge University Press, pp. 291-303.
Rice, C. (2012) "Optimality Explanations: A Plea for an Alternative Approach",
Biology and Philosophy, 27(5), pp. 685-703.
Rice, C. (2013) "Moving beyond causes: Optimality models and scientific explanation",
Noûs, 49(3), pp. 589-615.
Saatsi, J. (2012) "Mathematics and Program Explanations", Australasian Journal of
Philosophy, 90(3), pp. 579-584.
Salmon, W. (1984) Scientific explanation and the causal structure of the world,
Princeton: Princeton University Press.
Schwenk, K. (1994) "Why snakes have forked tongues", Science, 263, pp. 1573-1577.
Sober, E. (1983) "Equilibrium explanation", Philosophical Studies, 43(2), pp. 201-210.
Sober, E. (1984) The Nature of Selection, Chicago, University of Chicago Press.
Sterling, P. and Laughlin, S. (2015) Principles of Neural Design, Cambridge, MIT Press.
Vilarroya, O. (2002) "'Two' Many Optimalities", Biology and Philosophy, 17, pp. 251-
270.
Wouters, A. (1999) Explanation Without A Cause, Ph.D. thesis, Utrecht University.
Wouters, A. (2003) "Four Notions of Biological Function", Studies in History and
Philosophy of Biological and Biomedical Sciences, 34(4), pp. 633-668.
Wouters, A. (2005) “The Function Debate in Philosophy”, Acta Biotheoretica, 53(2), pp.
123-151.
Wouters, A. (2007) “Design Explanation: determining the constraints on what can be
alive”, Erkenntnis, 67(1), pp. 65-80.
Wright, C. (2012) "Mechanistic explanation without the ontic conception", European
Journal for Philosophy of Science, 2(3), pp. 375-394.
