

Transparency and Self-Knowledge



Transparency and
Self-Knowledge

Alex Byrne

Great Clarendon Street, Oxford, OX2 6DP,
United Kingdom
Oxford University Press is a department of the University of Oxford.
It furthers the University’s objective of excellence in research, scholarship,
and education by publishing worldwide. Oxford is a registered trade mark of
Oxford University Press in the UK and in certain other countries
© Alex Byrne 2018
The moral rights of the author have been asserted
First Edition published in 2018
Impression: 1
All rights reserved. No part of this publication may be reproduced, stored in
a retrieval system, or transmitted, in any form or by any means, without the
prior permission in writing of Oxford University Press, or as expressly permitted
by law, by licence or under terms agreed with the appropriate reprographics
rights organization. Enquiries concerning reproduction outside the scope of the
above should be sent to the Rights Department, Oxford University Press, at the
address above
You must not circulate this work in any other form
and you must impose this same condition on any acquirer
Published in the United States of America by Oxford University Press
198 Madison Avenue, New York, NY 10016, United States of America
British Library Cataloguing in Publication Data
Data available
Library of Congress Control Number: 2017955477
ISBN 978–0–19–882161–8
Printed and bound by
CPI Group (UK) Ltd, Croydon, CR0 4YY
Links to third party websites are provided by Oxford in good faith and
for information only. Oxford disclaims any responsibility for the materials
contained in any third party website referenced in this work.

For C and G

Preface

This book sets out and defends a theory of how one knows about one’s mental
life—a theory of self-knowledge, as philosophers use that term. The basic idea is
that one comes to know that one is in a mental state M by an inference from a
certain worldly or environmental premise to the conclusion that one is in
M. (Typically the worldly premise will not be about anything mental.) Mental
states are in this sense transparent: self-knowledge is achieved by attending to a
corresponding tract of the world, not by taking an inward glance at one’s own
mind. Although written primarily for a philosophical audience, it is as much
an exercise in theoretical psychology as in philosophy. After an introduction in
Chapter 1, rival approaches to self-knowledge are critically examined in Chapters
2 and 3. Any kind of transparency approach faces a seemingly intractable obstacle,
the puzzle of transparency, which is the topic of Chapter 4. The positive account
gets going in Chapter 5, and is extended to a wide variety of mental states in the
remaining three chapters. Readers familiar with the issues may wish to start with
the second half, but the book has been designed as a long continuous argument.
Some of what follows has been reworked from previous papers: “Introspec-
tion,” Philosophical Topics 33 (2005); “Perception, recollection, imagination,”
Philosophical Studies 148 (2010); “Knowing that I am thinking,” Self-Knowledge,
ed. A. Hatzimoysis, Oxford University Press (2011); “Knowing what I want,”
Consciousness and the Self: New Essays, ed. J. Liu and J. Perry, Cambridge University
Press (2011); “Transparency, belief, intention,” Proceedings of the Aristotelian
Society Supplementary Volume (2011); “Review essay of Dorit Bar-On’s Speaking
My Mind,” Philosophy and Phenomenological Research 83 (2011); and “Knowing
what I see,” Introspection and Consciousness, ed. D. Smithies and D. Stoljar, Oxford
University Press (2012). I thank the publishers for permission to use this material.
In this book, single quotation marks (and sometimes italics) are used to
mention expressions (‘MIT’ contains three letters); double quotation marks are
used for quoted material and as “scare” quotes.
Philosophy is a collaborative enterprise, and many friends and colleagues
contributed to the theory set out here, mostly by vigorously objecting to it with
scant regard for my feelings. They are to blame for the absurd length of time it has
taken me to finish this book. Still, as Bob Stalnaker is fond of saying (quoting Iris
Murdoch), in philosophy if you aren’t moving at a snail’s pace you aren’t moving
at all. In random order, thanks are due to Louise Antony, Ralph Wedgwood,
Dick Moran, Josh Dever, Frank Hofmann, Chris Hill, Bertil Strömberg, Declan
Smithies, Kati Farkas, Ole Koksvik, JeeLoo Liu, Frank Jackson, Heather Logue,
Lloyd Humberstone, Sally Haslanger, Hilary Kornblith, Nicholas Asher, Andrew
McGonigal, Jonathan Schaffer, André Gallois, John Broome, Mohan Matthen,
Ned Hall, Brendan Balcerak-Jackson, Susanna Schellenberg, John Sutton, Judith
Thomson, Mark Sainsbury, Eric Schwitzgebel, John Hawthorne, Brit Brogaard,
Caspar Hare, Sarah McGrath, Richard Holton, Julia Markovits, Steve Yablo, Rae
Langton, Bob Stalnaker, Nico Silins, Becko Copenhaver, David Sosa, David
Hilbert, Jay Shaw, Cian Dorr, Susanna Siegel, Jennifer Nagel, Mark Johnston,
Jim van Cleve, David Chalmers, Sylvain Bromberger, Benj Hellie, Mike Roche,
Alvin Goldman, Tim Crane, Martin Davies, Russ Hurlburt, Peter Pagin, Michael
Tye, Fiona Macpherson, Ed Minar, Ned Block, Julia Markovits, Pär Sundström,
Nick Shea, Martine Nida-Rümelin, Houston Smit, Daniel Stoljar, Michael Brat-
man, Magdalena Balcerak-Jackson, Tom Kelly, Ed Mares, Thomas Nagel, David
Eng, François Recanati, Tim Williamson, Josh Schecter, Amy Kind, Sandra
Woloshuck, Fred Dretske, Jeremy Goodman, Adam Leite, Brie Gertler, Chris
Peacocke, John Schwenkler, Kieran Setiya, Sarah Paul, and Lauren Ashwell.
I admit to having a strong feeling of knowing that this list is incomplete; my
thanks and apologies to those omitted. I am also grateful to the National
Endowment for the Humanities, the Australian National University, the Univer-
sity of Konstanz, and (in particular) MIT for invaluable research support.
Three insightful and conscientious readers for OUP—one of whom turned out
to be Brie Gertler—effected many improvements to the penultimate manuscript.
I am grateful to them, to Sally Evans-Darby for copyediting, to Ari Koslow for
preparing the index, and (of course) to Peter Momtchiloff. My brother Felix,
whose breadth of talent never ceases to amaze, supplied the cover art.
I received a different sort of help, essential to the existence of the final product,
from many dedicated and caring professionals at the Massachusetts General
Hospital, Brigham and Women’s Hospital, and the Dana-Farber Cancer Insti-
tute. I am especially indebted to Matt Kulke, Keith Lillemoe, Peter Mueller, and
Ralph Weissleder.
Finally, we get to the obligatory part where the author’s family are singled out
for their love and forbearance. Only this time, I really mean it.

Contents

1. Problems of Self-Knowledge 1
1.1 Self-knowledge 1
1.2 Transparency 2
1.3 Privileged and peculiar access 4
1.3.1 McKinsey and Ryle 4
1.3.2 Privileged access 5
1.3.3 Peculiar access 8
1.3.4 The independence of privileged and peculiar access 9
1.3.5 Peculiar access and McKinsey’s puzzle 10
1.3.6 Empirical work 11
1.4 Economy, inference, detection, unification 14
1.5 Self-knowledge as a philosophical problem 16
1.6 Preview 22
2. Inner Sense 24
2.1 Introduction 24
2.2 Against inner sense 26
2.2.1 The object perception model and the broad perceptual model 26
2.2.2 Objection 1: inner sense can’t detect extrinsic properties
(Boghossian) 29
2.2.3 Objection 2: inner sense is like clairvoyance (Cassam) 31
2.2.4 Objection 3: inner sense is incompatible with infallibility 33
2.2.5 Objection 4: inner sense is incompatible with self-intimation 37
2.2.6 Objection 5: inner sense leads to alienated self-knowledge (Moran) 38
2.2.7 Objection 6: inner sense cannot explain first-person authority
(Finkelstein) 40
2.2.8 Objection 7: the deliverances of inner sense are not baseless
(McDowell) 42
2.2.9 Objection 8: inner sense implies possibility of self-blindness
(Shoemaker) 43
2.3 Residual puzzles for inner sense 48
3. Some Recent Approaches 50
3.1 Introduction 50
3.2 Davidson on first-person authority 50
3.3 Moran on self-constitution and rational agency 57
3.4 Bar-On’s neo-expressivism 62
3.4.1 Simple expressivism 63
3.4.2 Two questions, one answer 64
3.4.3 Immunity to error through misidentification and misascription 66
3.4.4 Neo-expressivism and the asymmetric presumption of truth 70
4. The Puzzle of Transparency 74
4.1 Introduction 74
4.2 Gallois on the puzzle 77
4.3 Moran on the puzzle 79
4.4 Dretske on the puzzle 83
4.5 The puzzle of transparency for sensations 87
4.6 Kripke’s Wittgenstein on other minds 93
4.7 Hume on the self 94
4.8 Introspective psychology 96
5. Belief 99
5.1 Introduction 99
5.2 The puzzle of transparency revisited 99
5.2.1 Epistemic rules and BEL 100
5.2.2 Evans 103
5.2.3 First variant: reliability 103
5.2.4 Second variant: inadequate evidence 105
5.2.5 Third variant: reasoning through a false step 106
5.3 Peculiar and privileged access explained 108
5.3.1 Peculiar access 108
5.3.2 Privileged access 109
5.4 Economy and detection 112
5.5 Extensions 116
5.5.1 Knowing that one knows 116
5.5.2 Knowing that one does not believe 117
5.5.3 Knowing that one confidently believes 119
5.6 Objections 121
5.6.1 The inference is mad (Boyle) 122
5.6.2 There is no inference (Bar-On) 124
5.6.3 The account conflates do believe and should believe (Bar-On) 125
5.6.4 The account fails when one lacks a belief (Gertler) 126
6. Perception and Sensation 128
6.1 Introduction 128
6.2 Perception 129
6.2.1 The amodal problem 130
6.2.2 Alternatives to transparency 131
6.2.3 Option 1: non-observational knowledge 132
6.2.4 Option 2, first pass: visual sensations 134
6.2.5 Option 2, second pass: visual experiences of an F 135
6.2.6 Back to transparency: SEE 138
6.2.7 The memory objection 140
6.2.8 Evans again, and the known-illusion problem 142
6.2.9 Evans’ proposal 142
6.2.10 Belief-independence 143
6.3 Sensation 147
6.3.1 Pain perception 147
6.3.2 PAIN and the world of pain 149
6.3.3 Perceptual theorists on the objects of pain perception 151
6.3.4 Back to naiveté 153
7. Desire, Intention, and Emotion 156
7.1 Introduction 156
7.2 The case for uniformity 157
7.3 Desire and DES 158
7.3.1 Circularity 162
7.3.2 Defeasibility 164
7.3.3 Connections 167
7.4 Intention and INT 167
7.4.1 Overgeneration problems 170
7.5 Emotion 172
7.5.1 Disgust and the disgusting 173
7.5.2 DIS and transparency 176
7.5.3 Circularity 178
7.6 Summary: privileged and peculiar access, economy and detectivism 181
8. Memory, Imagination, and Thought 183
8.1 Introduction 183
8.2 Memory 184
8.2.1 The visual world and the visualized world 185
8.2.2 Episodic recollection and transparency 189
8.2.3 Knowing that I am recollecting, first pass 189
8.2.4 First problem: putting the past into the antecedent 191
8.2.5 Second and third problems: belief in images, but not ducks 193
8.2.6 Second pass: MEM-DUCK 194
8.3 Imagination and IMAG-DUCK 195
8.4 Thought 198
8.4.1 Outer and inner speech, and THINK 199
8.4.2 Privileged and peculiar access 202
8.4.3 Extensions: pictorial and propositional thinking 204
8.4.4 Inner speech and imagined speech 205
8.4.5 Unsymbolized thinking and imageless thought 207
8.5 Finis 208

Bibliography 209
Index 223

1
Problems of Self-Knowledge

How do I know my own mental acts? How do I know what I just decided;
how do I know what I believe, what I suspect, what I intend to do? These are
one and all silly questions.
Vendler, Res Cogitans

1.1 Self-knowledge
This book is about knowledge of one’s mental states; self-knowledge, as it is called
in the philosophical literature.1 (The phrase means something quite different in
the self-help literature.2) Since knowledge of one’s weight or height is not “self-
knowledge” in the intended sense, the phrase has a Cartesian flavor. It suggests
that one’s “self” is a non-physical entity, to be distinguished from a certain
human animal, 6’ 2” tall and weighing 170 pounds. And as will be soon apparent,
from the anti-Cartesian perspective of this book, the phrase is singularly inapt.
However, since the terminology is entrenched, it will not be discarded.
As is common, the focus here will be on knowledge of one’s present mental
states. Examples include: knowing that I believe it’s raining, that I want a beer,
that I intend to go for a walk, that I feel itchy, that I am imagining a purple
kangaroo, that I feel afraid, that I am thinking about water. Other examples
include: knowing that I know that it’s raining; that I remember being at the pub;
that I see a kangaroo; that this paper looks white to me. These last are somewhat

1. A rough equivalent in the psychological literature is ‘metacognition’ (see, e.g., Dunlosky and Metcalfe 2009: 2–3); ‘self-knowledge’ is typically used more broadly (see, e.g., Vazire and Wilson 2012).
2. Namely, knowledge of who one “really is”: one’s personality, “deepest” desires and fears, etc. The inscription ‘Know Thyself’ on the ancient Greek Temple of Apollo at Delphi may have been the start of the self-help movement. Relatedly, in psychology ‘self-knowledge’ is sometimes used for knowledge (or belief) about one’s traits; see, e.g., Kunda 2002: ch. 10. This book’s comparative neglect of what Cassam 2014 calls ‘substantive self-knowledge’ puts it squarely within contemporary philosophical work on self-knowledge, which Cassam himself thinks has the emphasis in the wrong place.

controversial examples of knowledge of one’s mental states, because they are
either factive or object-entailing: if I know that it’s raining, it is raining; if
I remember being at the pub then I was at the pub; if I see a kangaroo then
there is a kangaroo I see, and similarly if this paper looks white. It will be assumed
throughout that these are genuine examples of mental states. One reason for this
assumption will be defended in the course of this book: the right account of self-
knowledge makes no important distinction between the controversial and uncon-
troversial examples.3
Although the ‘self’ in ‘self-knowledge’ can mislead, ‘knowledge’ is le mot juste.
The topic is knowledge of one’s mental states; weaker epistemological notions like
justified belief will hardly be mentioned at all. Sometimes the notion of evidence
will be used. Following Williamson (2000: ch. 9), it will be assumed that one’s
evidence is one’s knowledge: P is part of one’s body of evidence iff one knows
P (“E=K”). There is a lot to be said for E=K, but it is disputable; those who dispute
it may be reassured that it plays a minor role in the argument to come.

1.2 Transparency
This book is also about “transparency,” which can be traced to two much-quoted
passages from G. E. Moore’s “The Refutation of Idealism”:
And, in general, that which makes the sensation of blue a mental fact seems to escape us: it
seems, if I may use a metaphor, to be transparent—we look through it and see nothing but
the blue; we may be convinced that there is something but what it is no philosopher,
I think, has yet clearly recognised. (1903: 446)

And:
[T]he moment we try to fix our attention upon consciousness and to see what, distinctly,
it is, it seems to vanish: it seems as if we had before us a mere emptiness. When we try to
introspect the sensation of blue, all we can see is the blue: the other element is as if it were
diaphanous. (450)

Wittgenstein made a similar observation:

Look at the blue of the sky and say to yourself “How blue the sky is!”—When you do
it spontaneously—without philosophical intentions—the idea never crosses your mind
that this impression of colour belongs only to you . . . you have not the feeling of pointing-
in-to-yourself, which often accompanies ‘naming the sensation’ when one is thinking
about ‘private language’. (1958: §275)

3. For an argument that knowing is a mental state, see Williamson 2000: ch. 1.

The importance of Moore’s and Wittgenstein’s remarks was pointed out much
later by Shoemaker (1990: 101). Suppose one sees a blue mug. If one tries to
“introspect” one’s perceptual experience of the mug, or “sensation of blue,” one
apparently comes up empty-handed. The only objects and properties available
for awareness are the mug and its (apparent) properties, such as blueness. As
Moore says, “we . . . see nothing but the blue.”4
On the face of it, we are not aware of our perceptual experiences or their
properties, at least in nothing like the way we are aware of mugs and their
properties. Yet, of course, we do know that we have perceptual experiences
(at least in a philosophically innocuous sense of ‘perceptual experiences’): one
might know, for instance, that one sees a blue mug, or that the mug looks blue to
one. It is a short step from this to another claim that often goes under the name of
‘transparency,’ namely that one knows that one sees a blue mug, or that the mug
looks blue to one, by attending to the mug. To find out that one sees a blue mug,
one does not turn one’s attention inward to the contents of one’s own mind—
Moore’s and Wittgenstein’s remarks suggest either that there is no such proced-
ure or, if there is, it is not necessary. Rather, one turns one’s attention outward, to
the mug in one’s environment. This insight—if that is what it is—was first clearly
expressed by Evans:

[A] subject can gain knowledge of his internal informational states [his “perceptual
experiences”] in a very simple way: by re-using precisely those skills of conceptualization
that he uses to make judgements about the world. Here is how he can do it. He goes
through exactly the same procedure as he would go through if he were trying to make a
judgement about how it is at this place now . . . he may prefix this result with the operator
‘It seems to me as though . . . ’ (1982: 227)5

Or, Evans might have added, the subject may, after looking at the scene before his
eyes, prefix an appropriate phrase (‘a blue mug’) with ‘I see.’
As Evans also noted, a similar point also seems plausible for belief:
[I]n making a self-ascription of belief, one’s eyes are, so to speak, or occasionally literally,
directed outward—upon the world. If someone asks me “Do you think there is going to be
a third world war?,” I must attend, in answering him, to precisely the same outward
phenomena as I would attend to if I were answering the question “Will there be a third
world war?” (1982: 225)6

4. See also the epigraph to Chapter 6 (a quotation from Ryle). The use of ‘transparency’ in this book is fairly standard in the self-knowledge literature, but sometimes the term is used differently, for example Shoemaker 1994: 224–5, Wright 2000: 15, Bilgrami 2006: 31, and Carruthers 2011: 8.
5. The elision conceals an important qualification, which is discussed later (section 6.2.9).
6. See also Dretske 1994, 1995. A similar view can be found in Husserl (Thomasson 2003, 2005). See also section 1.5.

This sort of “transparent” procedure for gaining knowledge of one’s mental states
is, apparently, strictly limited to belief, knowledge, and perception. For example,
I do not usually discover that I want a beer by realizing that I don’t have one. But
one might think that Evans has at least exhibited the source of some of our self-
knowledge.7
However, the Evans-style procedure is, in Moran’s phrase, “terribly problem-
atic” (2003: 404). Suppose a certain coffee mug in one’s environment is blue—how
is that relevant to the hypothesis that one sees a blue mug? That this mug is blue is
a non-psychological fact that is exceptionally feeble evidence for the proposition
that one sees a blue mug. The mug would be blue whether or not one sees it. Yet—
or so it appears—one needs no other evidence to know that one sees a blue mug.8
Similarly, one can know that one believes (or knows) that it’s raining by appealing
to the rain. But surely meteorology sheds little light on psychology!
On the one hand, there is the attractive idea that we follow a “transparent”
procedure when seeking to discover our beliefs and perceptions. On the other
hand, this procedure seems worse than useless for gaining self-knowledge.
A dilemma threatens: either we must reject Evans’ claim that we gain knowledge
of our mental lives by (“so to speak”) directing our eyes outward, or accept that a
significant portion of our “self-knowledge” is nothing of the kind, because based
on wholly inadequate evidence. This is the puzzle of transparency. Solving it is the
key to the overall argument of this book. The overall argument, and the role of
the puzzle of transparency, will be outlined at the end of this chapter (section 1.6),
after a short discussion of the place of self-knowledge in the history of philosophy
(section 1.5). But first, various distinctions need to be explained.

1.3 Privileged and peculiar access


1.3.1 McKinsey and Ryle
In his classic paper “Anti-Individualism and Privileged Access,” McKinsey starts
by saying that
[i]t has been a philosophical commonplace, at least since Descartes, to hold that each of us
can know the existence and content of his own mental states in a privileged way that is
available to no one else. (McKinsey 1991, emphasis added)

7. See also Edgeley 1969: 90 (acknowledged in Gallois 1996: 6, n. 13 and Moran 2001: 61) and Gordon 1996.
8. Moore, in fact, did not see much of a problem here. The “diaphanous” passage quoted above continues: “Yet it [the ‘other element’] can be distinguished if we look enough, and if we know that there is something to look for” (1903: 450). For discussion, see Hellie 2007b.

This “philosophical commonplace” consists of two distinct claims. The first is
that we have privileged access to the existence and content of our mental states.
Privileged access is a comparative notion, and comes in degrees. For our pur-
poses, the following rough characterization will serve: beliefs about one’s mental
states acquired through the usual route are more likely to amount to knowledge
than beliefs about others’ mental states (and, more generally, corresponding
beliefs about one’s environment). The second part of McKinsey’s “philosophical
commonplace” is that we have peculiar access to our mental states: as McKinsey
says, we know about them “in a . . . way that is available to no one else.”
The first piece of jargon (‘privileged access’) is due to Ryle and is often used in
the literature on self-knowledge, although with no standard meaning. On the
view that Ryle is concerned to attack, “[a] mind has a twofold Privileged Access to
its own doings” (Ryle 1949: 148). The first kind of privileged access is that

(1) . . . a mind cannot help being constantly aware of all the supposed occupants of its
private stage, and (2) . . . it can also deliberately scrutinize by a species of non-sensuous
perception at least some of its own states and operations. (148)

And the second kind is that


both this constant awareness (generally called ‘consciousness’), and this non-sensuous
inner perception (generally called ‘introspection’) [are] exempt from error. (149)

The first kind of privileged access is a specific version of (what is here called)
peculiar access. As Ryle says, his opponent supposes that “I cannot introspect-
ively observe, or be conscious of, the workings of your mind” (149). Ryle’s second
kind of privileged access concerns the epistemic security of beliefs about one’s
mental states; as noted, ‘privileged access’ in this book labels a relatively weak
form of epistemic security.
McKinsey, presumably following Ryle, seems to use ‘privileged access’ for what
is described in our preferred terminology as privileged and peculiar access (other
examples are Alston 1971, Moran 2001: 12–13, and Fernández 2013: 7).9 What-
ever the labels, as will be argued in sections 1.3.2 and 1.3.3, it is important to keep
the two sorts of access separate.
1.3.2 Privileged access
Consider Jim, sitting in his office cubicle. Jim believes that his pen looks black to
him; that he wants a cup of tea; that he feels a dull pain in his knee; that he

9. As a further illustration of the confusing variation in usage, Carruthers 2011: 14 uses ‘privileged access’ to label what is here called ‘peculiar access.’ Fernández labels close approximations to privileged and peculiar access ‘strong access’ and ‘special access,’ respectively (2013: 5–6).

intends to answer all his emails today; that he is thinking about paperclips; that he
believes that it is raining. Jim also has various equally humdrum beliefs about his
environment: that it is raining, that his pen is black, and so on. Furthermore, he
has some opinions about the psychology of his officemate Pam. He believes that
her pen looks green to her; that she wants a cup of coffee; that her elbow feels
itchy; that she is thinking about him; that she believes that it is raining.
In an ordinary situation of this kind, it is natural to think that Jim’s beliefs
about his current mental states are, by and large, more epistemically secure than
his corresponding beliefs about his officemate Pam and his corresponding beliefs
about his environment. Take Jim’s belief that he believes that it is raining, for
example. It is easy to add details to the story so that Jim fails to know that it is
raining; it is not so clear how to add details so that Jim fails to know that he
believes that it is raining. Perhaps Jim believes that it is raining because Pam came
in carrying a wet umbrella, but the rain stopped an hour ago. Jim is wrong about
the rain, but he still knows that he believes that it is raining—this knowledge will
be manifest from what he says and does.
Now contrast Jim’s belief that he believes that it is raining with his belief that
Pam believes that it is raining. Again, it is easy to add details to the story so
that Jim fails to know that Pam believes that it is raining. Perhaps Jim believes
that Pam believes that it is raining because he entered the office wearing a visibly
wet raincoat. Yet Pam might well not have noticed that the raincoat was wet, or
she might have noticed it but failed to draw the obvious conclusion.
Similar remarks go for Jim’s belief that his pen looks black (to him, now),
which we can contrast with Jim’s belief that his pen is black, and his belief that
Pam’s mug looks green to her. Of course, there is one glaring difference between
this example and the previous one, which is that Jim might well be wrong in
taking the item he is holding to be his pen—maybe it’s Pam’s pen, or Jim’s pencil.
But keeping his pen (and Pam’s mug) constant, Jim’s belief that his pen looks
black to him is more likely to be in good shape than his belief that his pen is black
(perhaps it’s blue, but just looks black under the office lighting), and his belief
that Pam’s mug looks green to her (perhaps, despite facing the mug, Pam’s gaze is
directed elsewhere).
Take one more example: Jim’s belief that he wants a cup of tea, which can be
contrasted with Jim’s belief that Pam wants a cup of coffee. Now it may well be
that, in general, beliefs about one’s own desires are somewhat less secure than
beliefs about one’s own beliefs, or beliefs about how things look. This is more
plausible with other examples; say, Jim’s belief that he wants to be the CEO of
Dunder Mifflin Paper Company, Inc., or wants to forever remain single—it
would not be particularly unusual to question whether Jim really has these
particular ambitions. And perhaps, in the ordinary circumstances of the office,
Jim might even be wrong about his desire for tea. Still, Jim’s claim that he wants
tea would usually be treated as pretty much unimpeachable, whereas his claim
that Pam wants coffee is obviously fallible. (Jim’s evidence points in that direc-
tion: Pam normally has coffee at this time, and is heading to the office kitchen.
However, she drank her coffee earlier, and now wants a chocolate biscuit.) And
treating Jim as authoritative about his own desires has nothing, or not much, to
do with politeness or convention. Jim earns his authority by his subsequent
behavior: Jim will drink an available cup of tea and be visibly satisfied.
The extent and strength of privileged access is disputable; the fact of it can
hardly be denied.10
If one has privileged access to the fact that one is in mental state M, this does
not imply that one’s belief that one is in M is completely immune from error, or is
guaranteed to amount to knowledge. Privileged access, then, is a considerably
watered-down version of what is sometimes called infallible, incorrigible, or
indubitable access:
IN Necessarily, if S believes she is in M, then she is in M/knows that she is in M.
What is sometimes called self-intimating access is a near-converse of IN:
S-I Necessarily, if S is in M, she knows/is in a position to know that she is in M.11
Privileged access should not be confused with S-I and its watered-down variants,
which have little to recommend them, at least unless highly qualified and
restricted. Post-Freud, the idea that there are subterranean mental currents not
readily accessible to the subject is unexceptionable, even if Freud’s own account
of them is not. Here is a perfectly ordinary prima facie example of believing that
p (and possessing the relevant concepts) without knowing, or even being in a
position to know, that one believes that p. Pam is now not in a position to retrieve
the name of her new officemate Holly (it is on the “tip of her tongue”) and so is
not in a position to verbally self-ascribe the belief that her officemate’s name is

10. Schwitzgebel (2008, 2011) argues that there is much about our own mental lives that we don’t know, or that is difficult for us to find out, for instance the vividness of one’s mental imagery, whether one has sexist attitudes, and so forth. His treatment of individual examples may be questioned (on mental imagery see Peels 2016), but his overall argument is an important corrective to the tendency to think of the mind as an internal stage entirely open to the subject’s view. However, too much emphasis on this point can lead to the opposite vice, of thinking that self-knowledge poses no especially challenging set of epistemological problems.
11. In the terminology of Williamson 2000: ch. 4, S-I is the thesis that the condition that one is in M is “luminous.” For versions of both IN and S-I that employ belief instead of knowledge, see Armstrong 1968: 101; as Armstrong notes, the term ‘self-intimation’ is due to Ryle.

‘Holly.’ Nothing else in her behavior, we may suppose, indicates that she believes
that she has this belief. But she nonetheless does believe now that her officemate’s
name is ‘Holly,’ because otherwise there would be no explanation of why she
recalls the name later when taking the train home.12
Some sort of self-intimating access is more plausible for perceptual states and
bodily sensations than for attitudes like belief, desire, and intention. This will be
briefly discussed in Chapter 2 (section 2.2.5), but for the most part the emphasis
will be on privileged access.
1.3.3 Peculiar access
Turn now to the second part of McKinsey’s “philosophical commonplace,” that
one can come to know about one’s mental life “in a way that is available to no one
else.” Admittedly, there is much to be said for the importance of behavioral
evidence to self-knowledge, but it is clear that one does not rely on such sources
alone.13 Quietly sitting in his cubicle, Jim can know that he believes that it’s raining
and that he wants a cup of tea. No third-person or behavioral evidence is needed.
To know that Pam wants a coffee requires a different sort of investigation—asking
her, observing what she does, and so forth. One has peculiar access to one’s mental
states: a special method or way of knowing that one believes that the cat is indoors,
that one sees the cat, that one intends to put the cat out, and so on, which one
cannot use to discover that someone else is in the same mental state.14
Our access to others’ minds is similar to our access to the non-psychological
aspects of our environment. Jim knows that his pen is black by seeing it; Pam could
know the same thing by the same method. Likewise, Jim knows that Pam wants a
coffee by observing her behavior. Anyone else—including Pam—could know the
same thing by the same method. Our peculiar access to our own minds is not like
this: one can come to know that one wants a coffee without observing oneself at all.
It is often claimed that one knows one’s mind “directly,” or “without evidence.”
(For the former see, e.g., Ayer 1959: 58; for the latter, see, e.g., Davidson 1991b: 205.)

12. And if Pam does believe that she believes that her officemate’s name is ‘Holly,’ the question whether she believes that arises, and so on through progressively more iterations. This regress stops somewhere, presumably.
13. Cf. Davidson 1987: 152, Boghossian 1989: 7–8.
14. Farkas proposes that the mental can be defined in terms of peculiar access (“special access,” in her terminology): “the mental realm is . . . the area that is known by me in a way that is known by no one else” (2008: 22). She then notes the objection that “one has special access” to non-mental features of one’s body, “say to one’s stomach fluttering” (33). She replies that the difference is that the special access claim for the mental is necessary, whereas the special access claim for the non-mental is merely contingent or “practical” (35). For the purposes of this book it is better to proceed in a more neutral fashion, without attempting the ambitious project of defining the mental.

If that is right, and if one knows others’ minds “indirectly,” or “with evidence,” then
this is what peculiar access consists in—at least in part. But investigation of this is
best left for later chapters.
Sometimes peculiar access is glossed by saying that self-knowledge is “a priori”
(see, e.g., McKinsey 1991, Boghossian 1997). This should be resisted. One leading
theory of self-knowledge classifies it as a variety of perceptual knowledge, in many
respects like our perceptual knowledge of our environment. “The Perception of
the Operations of our own Minds within us,” according to Locke, “is very like [the
perception of ‘External Material things’], and might properly enough be call’d
internal Sense” (Locke 1689/1975: 105). Armstrong holds such an inner-sense
theory:

Kant suggested the correct way of thinking about introspection when he spoke of the
awareness of our own mental states as the operation of ‘inner sense’. He took sense
perception as the model of introspection. By sense-perception we become aware of
current physical happenings in our environment and our body. By inner sense we become
aware of current happenings in our own mind. (Armstrong 1968: 95)

On the inner-sense theory, we have an internal “scanner” specialized for the
detection of our mental states. No doubt the hypothesized inner sense is not
exactly like our outer senses—Ryle, in the quotation in section 1.3.1, characterizes
it as “non-sensuous perception”—but it is surely unhelpful to classify its deliver-
ances with our knowledge of mathematics and logic.15
1.3.4 The independence of privileged and peculiar access
It is important to distinguish privileged from peculiar access because they can come
apart in both directions. Consider Ryle, who can be read as holding that we have
access to our own minds in the same way that we have access to others’ minds—by
observing behavior—thus denying that we have peculiar access. “The sorts of things
that I can find out about myself are the same as the sorts of things that I can find
out about other people, and the methods of finding them out are much the same”
(Ryle 1949: 155, emphasis added). Yet Ryle thinks that we have privileged access
to (some of) our mental states, because we (sometimes) have better behavioral
evidence about ourselves—greater “supplies of the requisite data” (155):

The superiority of the speaker’s knowledge of what he is doing over that of the listener
does not indicate that he has Privileged Access to facts of a type inevitably inaccessible to

15. See McFetridge 1990: 221–2, Davies 2000b: 323. The classification of (much) self-knowledge as a priori has its roots in Kant’s definition of a priori knowledge as “knowledge absolutely independent of all experience” (Kant 1787/1933: B3; see McGinn 1975/6: 203 and McFetridge 1990: 225).

the listener, but only that he is in a very good position to know what the listener is in a
very poor position to know. The turns taken by a man’s conversation do not startle or
perplex his wife as much as they had surprised and puzzled his fiancée, nor do close
colleagues have to explain themselves to each other as much as they have to explain
themselves to their new pupils. (179)

This Rylean position shows why privileged access need not be peculiar. To see
why peculiar access need not be privileged, imagine a proponent of inner sense
who holds that one’s “inner eye” is very unreliable by comparison with one’s
outer eyes. The psychologist Karl Lashley likened introspection to astigmatic
vision, claiming that “[t]he subjective view is a partial and distorted analysis”
(1923: 338).16 On this account, we have peculiar but underprivileged access.
We need not resort to hypothetical examples to illustrate peculiar without
privileged access. For instance, the epistemic security of self-ascriptions of certain
emotions or moods is nothing to write home about. I may have peculiar access to
the fact that I am depressed or anxious, but here the behaviorist greeting—
‘You’re fine! How am I?’—is not much of a joke, being closer to ordinary wisdom.
Factive mental states, like knowing that Ford directed The Searchers and
remembering that the Orpheum closed down last week, provide further examples.
Since knowing that Ford directed The Searchers entails that Ford directed The
Searchers, but not conversely, it is easier to know the latter fact than to know
that one knows it.17 The belief that I know that Ford directed The Searchers is
less likely to amount to knowledge than the belief that Ford directed The
Searchers. Yet I have peculiar access to the fact that I know that Ford directed
The Searchers, just as I have peculiar access to the fact that I believe this
proposition. Jim knows that Pam knows that Ford directed The Searchers because
(say) he knows she is a movie buff and such people generally know basic facts
about John Ford. But to know that he knows that Ford directed The Searchers,
Jim need not appeal to this kind of evidence about himself.

1.3.5 Peculiar access and McKinsey’s puzzle


According to externalism about mental content, what someone thinks and
believes is an extrinsic matter, depending (in part) on her environment. The
apparent problem this poses for self-knowledge can be brought out by the

16. This quotation, together with Lashley’s comparison with astigmatism, appears in Lyons 1986: 29. See also Freud 1938: 542. Ryle’s characterization of “inner perception” denies that there are “any counterparts to deafness, astigmatism” (1949: 164).
17. With the assumption that one can know that p without knowing that one knows that p. (In other words, with the assumption that a strong version of the “KK principle” is false; see also section 5.5.1.)

following argument (McKinsey 1991, Boghossian 1997). According to one
version of externalism, I can only think that water quenches thirst if my envir-
onment contains water—more weakly, if water exists at some time or another.
Further, supposedly I may know this fact a priori, on the basis of “twin earth”
thought experiments. Here am I, sitting in my armchair. I have peculiar access to
the fact that I am thinking that water quenches thirst—I know that I am thinking
that water quenches thirst without rising from the armchair and conducting an
empirical investigation of my behavior or my environment. I know, a priori, that
if I am thinking that water quenches thirst then water exists. Hence, putting the
two pieces of knowledge together, I can discover from my armchair that water
exists, which seems absurd.
As this argument brings out, the alleged conflict between self-knowledge and
externalism has its primary source in peculiar access, not privileged access. Unless
privileged access is taken to be something like infallible access, there is not even
the appearance of conflict with externalism. If I have privileged access to the
fact that I am thinking that water quenches thirst, then my belief that I am
thinking this is more likely to amount to knowledge than my belief that someone
else is thinking that water quenches thirst, or my belief that water does quench
thirst. Since the existence of water is a priori implied by the fact that I am
thinking that water quenches thirst, we have the following result: my belief
that water exists is more likely to amount to knowledge than my belief that
someone else is thinking that water quenches thirst, or my belief that water does
quench thirst. And that is not especially alarming. Peculiar access is the culprit:
McKinsey’s argument gives rise, as Davies (2000a) says, to the problem of
armchair knowledge.

1.3.6 Empirical work


Two central working assumptions of this book are that we have peculiar access to
our mental states, and that we have some degree of privileged access to many
mental states. Psychology, it might be thought, has shown these assumptions to
be dubious at best. “Any people who still believe that they know what they want,
feel, or think, should read this fascinating book,” proclaims one piece of blurb on
the back of the psychologist Timothy Wilson’s Strangers to Ourselves (2002). (This
advice has the interesting characteristic that if its presupposition is correct, and
one does not know what one thinks or believes, then it is impossible to follow.)
As noted in section 1.3.4, Ryle seems to question the first assumption of
peculiar access; one of his psychologist counterparts is Daryl Bem, whose “self-
perception theory” asserts that “Individuals come to ‘know’ their own attitudes,
emotions, and other internal states partially by inferring them from observations
of their own overt behavior and/or the circumstances in which this behavior
occurs” (1972: 5).
In fact, Ryle’s position is not as straightforward as some quotations from The
Concept of Mind suggest. Admittedly, he does say that “in principle, as distinct
from practice, John Doe’s way of finding out about John Doe are the same as John
Doe’s ways of finding out about Richard Roe” (1949: 149), a view we can call
Ryleanism. But a few pages later we find him casually mentioning that “a person
may pay sharp heed to very faint sensations” (151), and a little further on he
remarks that “I can catch myself daydreaming [and] catch myself engaged in a
piece of silent soliloquy” (160). Ryle does not even attempt to exhibit the third-
person method by which I can pay sharp heed to very faint sensations, or by
which I can catch myself daydreaming; he seems not to recognize that these
examples pose a problem. And perhaps they don’t—the quotation about John
Doe might well exaggerate Ryle’s considered view. Ryle may not be a Rylean.
Removing Ryle’s rhetorical excesses brings him very close to Bem, who does
not deny peculiar access, claiming instead that it is not the whole epistemological
story: “To the extent that internal cues are weak, ambiguous, or uninterpretable,
the individual is functionally in the same position as an outside observer, an
observer who must necessarily rely upon those same external cues to infer the
individual’s inner states” (1972: 5). And there are many “internal cues”: “All of us
have approximately 3–4ft of potential stimuli inside of us which are unavailable
to others but which are available to us for self-attributions” (40). And Wilson
himself, with the benefit of thirty years’ more research, is at pains to emphasize
that peculiar access is not being denied: “I can bring to mind a great deal of
information that is inaccessible to anyone but me. Unless you can read my mind,
there is no way you could know that a specific memory just came to mind” (2002:
105; see also 204–6).
So peculiar access is not in serious dispute, although there is certainly room for
debate about the importance of third-person (“behavioral”) access to one’s
mental states.18 What about privileged access?
As also noted in section 1.3.4, Ryle did think that we have privileged access to
our mental states. But he does make some qualifications: the differences between
self- and other-knowledge “are not all in favour of self-knowledge” (1949: 149).

18. If Ryle’s remarks about paying heed to sensations and catching oneself engaged in silent soliloquy are taken seriously, the result is neo-Ryleanism: self-knowledge is mostly the result of turning one’s third-person mindreading faculty on to oneself, with an exception for certain “internal cues” like sensations and inner speech, which form an important part of one’s evidential base. (Some other account needs to be given for our knowledge of the “internal cues.”) Carruthers 2011 is a recent defense of neo-Ryleanism; for critical discussion, see Byrne 2012, Bermudez 2013.

And infallible access is rejected across the board: people are not “exempt from
error” about their “organic sensations” (151); “[t]hey mistakenly suppose them-
selves to know things which are actually false; they deceive themselves about their
own motives” (155). Perhaps Ryle does not go far enough. Does empirical
research show not just that mistakes are made, but that privileged access is slight
to non-existent?
Here the inevitable citation is Nisbett and Wilson’s classic “Telling more than
we can know” (1977b), which concluded that “people sometimes make assertions
about mental events to which they may have no access and these assertions may
bear little resemblance to the actual events. The research reviewed is . . . consistent
with the most pessimistic view concerning people’s ability to report accurately
about their cognitive processes” (247).
Nisbett and Wilson reviewed a number of studies from the dissonance and
attribution literature, and summarily described some of their own. One was the
famous “stocking” experiment, in which subjects confabulated plausible-
sounding reasons why they chose one of four pairs of identical stockings as
being the best quality (1977b: 243–4). In another experiment, subjects mistakenly
credited their negative attitude toward a college teacher to his appearance,
mannerisms, and accent, and failed to realize that the true cause was his forbid-
ding personality (244–5). And in another, subjects mistakenly claimed that a
distracting noise had affected their ratings of a film (245–6).
As these examples and the subtitle of the paper (‘Verbal reports on mental
processes’) suggest, if this experimental work is correct, it chiefly impugns a subject’s
explanations for her attitudes, beliefs, or behavior, not her attributions of mental
states.19 Indeed, Nisbett and Wilson took for granted that subjects can answer
questions about their attitudes and beliefs accurately. For instance, in the second
experiment above, subjects were asked, “How much do you think you would like this
teacher?,” and their responses were assumed to be correct (Nisbett and Wilson
1977a: 253). Nisbett and Wilson themselves were quite explicit on this point:
[W]e do indeed have direct access to a great storehouse of private knowledge . . . The
individual knows the focus of his attention at any given point in time; he knows what his
current sensations are and has what almost all psychologists and philosophers would
assert to be ‘knowledge’ at least quantitatively superior to that of observers concerning his
emotions, evaluations, and plans . . . The only mystery is why people are so poor at telling
the difference between private facts that can be known with near certainty and mental
processes to which there may be no access at all. (1977b: 255)

19. Perhaps not surprisingly, the experimental work faces some serious challenges (see, e.g., Newell and Shanks 2014).

Although there is certainly room for debate about the precise extent and degree of
privileged access, it is not in serious dispute either.

1.4 Economy, inference, detection, unification


On one view of our knowledge of language, it requires nothing more than a
general-purpose learning mechanism. On the alternative Chomskean picture, it
requires a dedicated faculty, a “language organ.” The first view is an economical
account of our linguistic knowledge: no special-purpose epistemic capacities are
required. The Chomskean view, on the other hand, is extravagant: given the
meager input—the “poverty of the stimulus”—the general-purpose mechanism is
supposed incapable of generating the required torrential output.
On one view of our knowledge of metaphysical modality, it requires a special
epistemic capacity of modal intuition. On the alternative Williamsonian picture,
it requires nothing more than our “general cognitive ability to handle counter-
factual conditionals” (Williamson 2004: 13), such as “If it had rained I would
have taken my umbrella.” The Williamsonian view is an economical account of
our knowledge of metaphysical modality: all it takes are epistemic capacities
required for other domains. The first view, on the other hand, is extravagant:
knowledge of metaphysical modality needs something extra.
A similar “economical-extravagant” distinction can be drawn for self-
knowledge. Let us say that a theory of self-knowledge is economical just in case
it explains self-knowledge solely in terms of epistemic capacities and abilities that
are needed for knowledge of other subject matters; otherwise it is extravagant.20
Ryleanism is economical: the capacities for self-knowledge are precisely the
capacities for knowledge of the minds of others. The theory defended in
Shoemaker 1994 is also economical: here the relevant capacities are “normal
intelligence, rationality, and conceptual capacity” (236). The inner-sense theory,
on the other hand, is extravagant: the organs of outer perception, our general
rational capacity, and so forth do not account for all our self-knowledge—for
that, an additional mechanism, an “inner eye,” is needed.
Accounts of self-knowledge may be classified in other ways. According to
inferential accounts, self-knowledge is the result of inference or (theoretical)

20. A qualification: knowledge of the “other subject matters” should not itself require mental evidence. On one traditional view—discussed further in section 1.5—all empirical knowledge is founded on mental evidence about perceptual appearances. On that view the correct theory of self-knowledge is extravagant, not economical.

reasoning, which will be assumed to involve causal transitions between belief
states.21 Thus if one reasons from P to Q (or, equivalently, infers Q from P),
one’s belief in P causes one’s belief in Q.22 As Harman 1986 emphasizes, the theory
of reasoning should not be confused with logic, the theory of entailment, or
implication. Bearing that cautionary note in mind, mixing the terminology of
reasoning and logic will be convenient: if one reasons solely from P to Q, one will
be said to reason in accordance with an argument whose sole premise is P and
whose conclusion is Q.
The transparency procedure mentioned in section 1.2 is naturally understood
as inferential: one (allegedly) comes to know that one believes that p by inference
from the premise that p. The puzzle of transparency can then be simply put: the
argument p, so I believe that p is obviously invalid and does not represent a
pattern of reasoning that can confer knowledge on the conclusion.
As usually understood, Ryleanism is also an inferential account, while the
inner-sense theory is not. If knowledge via inner sense involves an inference, then
it presumably is one from the “appearances” of mental states to their presence.
Yet—a point often made by opponents of the inner-sense theory—there are no
such appearances.
Detectivist accounts liken self-knowledge to ordinary empirical knowledge in
the following two abstract respects. First, causal mechanisms play an essential
role in the acquisition of such knowledge, linking one’s knowledge with its
subject matter. Second, the known facts are not dependent in any exciting
sense on the availability of methods for detecting them, or on the knowledge of
them—in particular, they could have obtained forever unknown.23

21. There is a broader (and less common) use of ‘inference’ on which certain transitions between perceptual states (conceived as completely distinct from beliefs) and beliefs are inferences. That usage is not followed in this book.
Suppositional reasoning is often taken to involve causal transitions between mental states of supposing, in addition to states of believing. An alternative view is suggested by the (frequently noted) close connection between supposing and conditionals. On this alternative, there is no mental state of supposing. So-called suppositional reasoning is simply reasoning with conditionals: to conclude that q on the basis of supposing that p is (to a first approximation) to prime oneself with the proposition that p and thereby come to believe that if p then q. In any event, in this book suppositional reasoning will be set aside.
22. “Deviant causal chains” (see, e.g., Peacocke 1979: ch. 2) show that the converse is false.
23. ‘Detectivism’ is borrowed from Finkelstein 2003; however, his corresponding explanation is rather different: “A detectivist is someone who believes that a person’s ability to speak about her own states of mind as easily, accurately, and authoritatively as she does may be explained by a process by which she finds out about them” (9). Inner-sense theories are supposed to be paradigmatic examples. Finkelstein argues in his first chapter that detectivism (in his sense) is incorrect. (‘Detectivism’ is originally a (not unconnected) term of Wright’s, used to mark one reading of “response-dependent” biconditionals: see Wright 1992: 108 and cf. Finkelstein 2003: 28–9, n. 1.)

Ryleanism and the inner-sense theory are detectivist accounts. Shoemaker’s
theory is not, because it fails the second condition: “there is a conceptual,
constitutive connection between the existence of certain sorts of mental entities
and their introspective accessibility” (Shoemaker 1994: 225). If detectivism is
false, self-knowledge is indeed strange. Yet suspicion of detectivism is quite
widespread.24
Ryleanism is a unified theory of self-knowledge. For any mental state M, the
account of how I know I am in M is broadly the same: by observing my behavior.
A simple inner-sense theory is also unified: I know that I am in M by focusing my
“inner eye.” But some theorists adopt a divide-and-conquer strategy. For
instance, Davidson (1984a) and Moran (2001), whose views are examined in
Chapter 3, offer accounts primarily of our knowledge of the propositional atti-
tudes, in particular belief. Knowledge of one’s sensations, on the other hand, is
taken to require a quite different theory, which neither of them pretends to
supply. “[T]he case of sensations,” Moran writes, “raises issues for self-knowledge
quite different from the case of attitudes of various kinds” (10).25 Similar divisions,
although less sharply emphasized, are present in the theories of self-knowledge
defended in Goldman 1993 and Nichols and Stich 2003. And perhaps Ryle’s more
considered view (see section 1.3.4) is only partly unified: I discover that I expect
rain by observing my behavior, but discover that I am daydreaming by another
(presumably extravagant) method entirely.26

1.5 Self-knowledge as a philosophical problem


The central problem of self-knowledge is to explain (or explain away) the
privileged and peculiar access we enjoy to our mental states. It is a striking fact
that recognition that this is a problem came very late in the history of philosophy.
Of course, knowledge or awareness of one’s self—how or whether we know the
self is immortal, immaterial, etc.—has been extensively examined since Plato.
Likewise, perceptual knowledge has been a fixture on the philosophical agenda.
But considerably fewer pages have been devoted to one’s knowledge that one sees

24 Apart from Shoemaker, philosophers who are (at a minimum) unsympathetic with detectivism
include Davidson (1984a, 1987), Bar-On (2000, 2004), Falvey (2000), Wright (2000), Moran (2001),
Finkelstein (2003), Bilgrami (2006), and McDowell (2011).
25 For proposals intended to apply only to the case of sensations, both appealing to (something
like) Russell’s notion of “acquaintance,” see Chalmers 2003 and Gertler 2012.
26 In other words, neo-Ryleanism (fn. 18) is only partly unified; since the neo-Rylean denies that
our capacity for third-person mindreading accounts for all self-knowledge, the theory may not be
entirely economical.

a horse, believes that the grapes have ripened, wants to eat a fig, intends to repair
one’s sandals, and so forth.
More accurately, self-knowledge as a philosophical problem made a late
appearance in the history of modern philosophy. Ancient and medieval philoso-
phy are not so blinkered. Aristotle, in particular, got things off to a promising
start. At the start of chapter 2, book 3 of De Anima he appears to argue that it is
“by sight that one perceives that one sees” (Hamlyn 1968: 47)27—perhaps an
anticipation of the transparency procedure mentioned above in section 1.2.28
Whether or not this was Aristotle’s intent, Aquinas—whose Aristotle commentar-
ies include one on De Anima—certainly anticipated it. For example, in Quaestiones
Disputatae de Veritate he writes:
The power of every capacity of the soul is fixed on its object, and so its action first and
principally tends toward its object. But it can [be directed] at the things directing it toward
its object only through a kind of return. In this way we see that sight is first directed at
color, and is not directed at its act of vision except through a kind of return, when by
seeing color it sees that it sees. (Quoted in Pasnau 2002: 342)29

To offer this kind of account is implicitly to recognize that self-knowledge is a
puzzling phenomenon that stands in need of explanation. Unfortunately that
insight was all but lost by the time of Descartes. After the First Meditation has left
Descartes’ perceptual beliefs in doubt, he claws himself back from the abyss in
the Second by arguing from the premise that he thinks to the conclusion that he
exists. And, as Descartes explains, the reason why the cogito (‘I think therefore
I am’) is considerably better than the ambulo (‘I walk therefore I am’) is that he is
“wholly certain” that he thinks, but entirely doubtful that he walks (Descartes
1642/1984a: 244).30 For Descartes, self-knowledge is not a problem—rather, it is
the solution.
And Descartes’ successors, however unCartesian they may have been in other
respects, tended to join him in taking self-knowledge for granted. Take Hume, for
example. In the section of the Treatise on “scepticism with regard to the senses,”

27 However, in De Somno et Vigilia he denies this (see Everson 1997: 143, fn. 7).
28 Or perhaps, connectedly, an anticipation of Moore’s point about transparency. See Kosman
1975, which discusses the De Anima argument and reproduces the “diaphanous” passage from
Moore quoted above in section 1.2. Kosman takes Moore to be making, or at least approaching,
Sartre’s rather elusive claim that “every positional consciousness of an object is at the same time a
non-positional consciousness of itself” (Sartre 1966: 13).
29 Pasnau explicitly draws the comparison with contemporary discussions of transparency,
quoting Moore (347). Interestingly, according to Pasnau, Aquinas’s followers were “unwilling to
take his account of self-knowledge at face value” (348).
30 For a compelling defense of the interpretation of the cogito as an inference from I think to I
exist, see Wilson 1982: ch. 2.

Hume tells us that “’tis in vain to ask, Whether there be body or not?” (Hume
1740/1978: 187). No good reasons or evidence can be produced for the hypoth-
esis that there are bodies, and we must content ourselves with a psychological
explanation of why we are compelled to believe it. However, there is no corres-
ponding section on “scepticism with regard to introspection,” and that is because
the Treatise begins with Hume’s introspective “accurate survey” (3) of the mind,
as comprising “impressions” and “ideas.” Belief, for instance, is—at any rate in
one of Hume’s formulations—supposed to be a kind of “lively idea” (96), and
both ideas and impressions are “felt by the mind, such as they really are” (189).31
Many assumptions of the vulgar are examined throughout the Treatise: the vulgar
assumption that we have privileged and peculiar access to our beliefs is not one
of them.
At least Kant, like Locke, and unlike Hume and Descartes, had something to
say about the means by which we attain self-knowledge. It is, he thinks, “inner
sense . . . the intuition of ourselves and of our inner state” (Kant 1787/1933: A33/
B50). (Recall the quotations from Locke and Armstrong in section 1.3.3.) Kant
contrasts inner sense with “outer sense” (A22/B37), which is the intuition of
ordinary objects like cats and dogs, and their properties (A19/B33); that is, the
perception of such things by vision, audition, and so on. Apart from adding some
distinctively Kantian twists, he says little more about how inner sense is supposed
to work.32 That he does not regard it as problematic is shown by the fact that he
takes rival idealist views extremely seriously. In particular, he argues against
“material idealism,” “which declares the existence of objects in space outside us
either to be merely doubtful and indemonstrable or to be false and impossible”
(B274). To take material idealism as a live option, as Kant does, is tacitly to agree
that the existence of things inside us is not at all doubtful. That is, it is to hold the
epistemic credentials of inner sense pretty much beyond reproach.33
The general post-Cartesian insouciance about self-knowledge is closely inter-
twined with a certain view about the epistemology of perception, which began to
be dominant in the seventeenth century, and remained so until quite recently.
Consider A. J. Ayer’s The Central Questions of Philosophy, published in 1973.

31 Hume is officially speaking here of “sensations” (perceptual impressions), but he clearly
thinks the same goes for impressions and ideas across the board. For more discussion of Hume,
see section 4.7.
32 The principal additions are that the objects of inner sense appear ordered in time (“time is the
form of inner sense” (A33/B49)), and that inner sense, like outer sense, does not reveal things as they
are in themselves (A38/B55, B67–9).
33 For more discussion of Kant and a comparison with Nietzsche, see Katsafanas 2015.

The central epistemological question of philosophy, Ayer explains, concerns
our knowledge of the external world; also important, although less so, are our
knowledge of others’ minds and our knowledge of past or future events (the
problem of induction). Knowledge of our own minds is not at all central—indeed,
Ayer never bothers to mention the issue. Ayer’s reason for this neglect is easy to
see. Philosophical resources need to be focused on the external world, others’
minds, and past or future events because of the threat of skepticism. What makes
knowledge of these subject matters of special interest is that its very existence is a
matter of some considerable doubt.
What is the skeptical argument that, according to Ayer, “sets the stage for the
theory of knowledge” (1973: 68)? Its first step is “always to assume that the
evidence [falls] short of its conclusion”:

The aim of the sceptic is to demonstrate the existence of an unbridgeable gap between the
conclusions which we desire to reach and the premisses from which we set out. Thus, in
the case of our belief in the existence of physical objects, he will claim that the only
premisses which are supplied to us are propositions which relate exclusively to our sense-
impressions. (63)

Once we have accepted that the premises from which we must draw conclusions
about physical objects exclusively concern our “sense-impressions,” the skeptic is
off and running. The hypothesis that the cat is on the mat, for instance, is not
entailed by my evidence about my sense-impressions. And neither is it the best
explanation of my evidence—Descartes’ evil demon hypothesis, for instance, is
arguably no worse. Since there are no other ways of supporting the hypothesis
that the cat is on the mat, it is not supported: I do not know that the cat is on the
mat. (And similarly with knowledge of others’ minds and the past and future.
Here the role of our sense-impressions is played by, respectively, others’ behavior
and present events.)
Someone who concedes the skeptic’s first step is very likely to find self-
knowledge—or, at least, knowledge of one’s sense-impressions—relatively
unproblematic. When one sort of (putative) knowledge is inferred from know-
ledge of another sort, one may question whether the inference is warranted. No
empirical speculations are necessary: whether E is strong evidence for H can often
be reasonably questioned without rising very far from the armchair. And that, of
course, is exactly what the skeptic goes on to do. But if one sort of (putative)
knowledge is declared to be not inferential, it is much harder for the skeptic to
gain traction. This is not because non-inferential knowledge is especially secure,
but because no positive characterization has been given of why it is knowledge.
In the absence of more details, there is no soft spot for the skeptic to attack. The
skeptic’s only recourse is to try to shift the burden of proof by lamely insisting
that the so-called knowledge remains guilty until proven innocent.34
This picture, of all our empirical knowledge resting on a foundation of mental
evidence, owes a lot of its popularity to Descartes.35 For instance, in the Sixth
Meditation he puts the skeptical case as follows:
[E]very sensory experience I have ever thought I was having while awake I can also think
of myself as sometimes having while asleep; and since I do not believe that what I seem to
perceive in sleep comes from things located outside me, I did not see why I should be any
more inclined to believe this of what I think I perceive while awake.
(Descartes 1642/1984b: 53)

Grant that Descartes can have the same “sensory experiences” when asleep in his
bed as he has when awake and sitting by the fire. How is this even relevant to the
question of whether Descartes knows that he is sitting by the fire? After all,
Descartes can have the same digestive rumblings when asleep as he has when
awake; that is not relevant at all to the epistemological question. Even the fact that
the same neural firings can occur in his brain is not relevant—or not clearly so.
The obvious answer is that facts about Descartes’ “sensory experiences,” unlike
facts about his digestive rumblings and neural firings, are his foundational layer
of evidence for the hypothesis that he is sitting by the fire.
The skeptical argument set out by Ayer also appears in the Fourth Paralogism
of the first Critique:
1. “[A]ll outer appearances are of such a nature that their existence is not
immediately perceived, and that we can only infer them as the cause of
given perceptions.”
2. “[T]he existence of [that] which can only be inferred as a cause of given
perceptions, has a merely doubtful existence.”
3. “Therefore the existence of all objects of the outer senses is doubtful.”
(Kant 1787/1933: A366/7)
Kant’s response to this argument is to deny the first premise: “in order to arrive
at the reality of outer objects I have just as little need to resort to inference as I have
in regard to the reality of the object of my inner sense, that is, in regard to the
reality of my thoughts” (A371), which might sound as if he is rejecting Descartes’

34 Knowledge of abstract objects (numbers, e.g.) is an exception. The skeptic may argue that the
causally inert subject matter makes our beliefs about abstracta suspect, whether or not they are the
result of inference. But this skeptical move is not available in the present case.
35 Arguably one can find it in ancient skepticism, although this is disputed. For discussion, see
Burnyeat 1982 and Fine 2000.

epistemology. But Kant isn’t, because he thinks that “external objects (bodies) . . .
are mere appearances, and are therefore nothing but species of my representa-
tions” (A370). That is, the skeptical argument is blocked, not by rejecting
the claim about mental evidence, but by claiming that the mental evidence
entails the existence of the cat. This is idealism, but of the “transcendental” sort,
because Kant holds that in addition to empirical objects like cats and mats, there
are “things-in-themselves, which exist independently of us and of our sensibility”
(A371).36
One lesson of the epistemology and philosophy of perception of the last
century is that this Cartesian epistemology of perception is quite misguided.
(Admittedly, this is not a truth universally acknowledged.) Building knowledge of
one’s environment on a slab of mental evidence, assuming for the sake of the
argument that such a construction is possible, is unnecessary, undesirable, and
phenomenologically and biologically implausible. And perhaps more fundamen-
tally, the alleged mental evidence is either obscure or else its very existence is
doubtful. Facts about one’s sense data are the classic example of the second sort of
evidence: it is clear enough what sense data are supposed to be, but equally clear
that there aren’t any. And the obscure first sort of evidence often concerns items
called “experiences,” with little or no accompanying explanation.37
With the demise of Cartesian epistemology, the threat of external-world
skepticism is considerably reduced, if not eliminated. Ordinary environmental
knowledge—knowing that the cat is on the mat, say—thus becomes no more
puzzling than ordinary self-knowledge. Further, once it is accepted that envir-
onmental knowledge is not in general based on self-knowledge, one is free to
contemplate the possibility that the direction of inference is the other way
around. Both Ryle and Evans, in their very different ways, did exactly that.
Evans clearly has the better idea, but now the skeptical argument returns with a
vengeance, in the guise of skepticism about the internal world. On Evans’
account, there would appear to be, in Ayer’s words, “an unbridgeable gap

36 This “two-world” interpretation of Kant, according to which cats and mats are mind-
dependent objects, distinct from mind-independent things-in-themselves, is now somewhat out
of favor. (Langton 2001 and Allison 2004 are two notable examples.) But for present purposes the
fact that this interpretation was historically standard is more relevant than the issue of Kant’s
actual view.
37 Thus Stroud, for example, expounding Descartes’ dreaming argument: “[Descartes] realizes
that his having the sensory experiences he is now having is compatible with his merely dreaming
that he is sitting by the fire with a piece of paper in his hand. So he does not know on the basis of the
sensory experiences he is having at the moment that he is sitting by the fire” (1984: 16). Stroud does
not explain what a “sensory experience” is, or why one’s sensory experience when one sees a piece of
paper may be had when dreaming.

between the conclusions which we desire to reach” (say, that I believe that the cat
is on the mat) “and the premisses from which we set out” (namely, that the cat is
on the mat). The conclusion is about my mental states, and the premise is neither
about myself nor about anything mental.38 So with only mild hyperbole Ayer
may be accused of getting his priorities precisely backwards: the central epis-
temological question of philosophy concerns knowledge of our own minds.

1.6 Preview
A good place to begin our examination of self-knowledge is with the inner-sense
theory, and that is the topic of Chapter 2. Despite the fact that much recent work
on self-knowledge has been devoted to overturning it, in broad outline the inner-
sense theory can seem inevitable. We learn about the external world by special-
ized detection mechanisms; what could be more natural than the suggestion that
this is how we learn about the internal world? On the credit side of the ledger,
numerous objections against the inner-sense theory in the literature are argued to
fail. On the debit side, the theory has no good explanation of privileged access,
among other perplexities. Although the inner-sense theory has not been refuted,
contemporary approaches to self-knowledge typically take a radically different
tack; Chapter 3 examines three prominent examples, due to Davidson, Moran,
and Bar-On, and raises a variety of objections.
Chapter 4 takes up the puzzle of transparency, mentioned above in section 1.2.
The puzzle is argued to apply to sensation, as well as to belief and perception. The
positive account of self-knowledge begins in Chapter 5, which concerns belief.
A solution is offered to the puzzle of transparency for belief; that clears the way to
accept an Evans-style transparency account of how one knows what one believes.
That account is argued to explain both privileged and peculiar access, and to be
economical. It is also detectivist; however, since it is economical (and inferential),
it is not a version of the inner-sense theory. Chapter 6 turns to perception and
sensation, with a similar upshot: an economical, inferential, and detectivist
account that explains privileged and peculiar access.
At this point the issue of unification becomes pressing. Granting that the
transparency account applies to belief, perception, and sensation, what about
the remaining large tracts of the mind? They might seem to need another sort of
epistemological treatment entirely. The remaining chapters argue that this is

38 Of course Ryle also faces an unbridgeable gap, because often one ascribes a mental state to
oneself while having little if any relevant behavioral evidence. (See also section 4.4.)

not so: contrary to first impressions, all mental states are transparent. Chapter 7
concerns desire, intention, and emotion; Chapter 8, memory, imagination, and
thought. The overall account is unified in three respects. It is uniformly detectivist,
inferential, and economical, and the direction of inference is always from world
to mind.

2 Inner Sense

The word introspection need hardly be defined—it means, of course, the
looking into our own minds and reporting what we there discover.
James, The Principles of Psychology

‘Introspection’ is a term of art and one for which little use is found in the
self-descriptions of untheoretical people.
Ryle, The Concept of Mind

2.1 Introduction
I know various contingent truths about my environment by perception. For
example, by looking, I know that there is a computer before me; by listening,
I know that someone is talking in the corridor; by tasting, I know that the coffee
has no sugar. I know these things because I have some built-in mechanisms
specialized for detecting the state of my environment. One of these mechanisms
is presently transducing electromagnetic radiation (in a narrow band of wave-
lengths) coming from the computer and the desk on which it sits. How that
mechanism works is a complicated story—to put it mildly—and of course much
remains unknown. But we can at least produce more-or-less plausible sketches of
how the mechanism can start from retinal irradiation, and go on to deliver
knowledge of my surroundings. Moreover, in the sort of world we inhabit,
specialized detection mechanisms that are causally affected by the things they
detect have no serious competition—seeing the computer by seeing an idea of the
computer in the divine mind, for example, is not a feasible alternative.
In addition to these contingent truths about my environment, I know various
contingent truths about my psychology. For example, I know that I see a
computer, that I believe that there is someone in the corridor, that I want a cup
of coffee. How do I know these things? Well, unless it’s magic, I must have some
sort of mechanism (perhaps more than one) specialized for detecting my own
mental states—something rather like my visual, auditory, and gustatory systems,
although directed to my mental life. That is, I have knowledge of contingent
truths about my psychology by a special kind of perception, or,
a little more cautiously, . . . something that resembles perception. But unlike sense-perception,
it is not directed towards our current environment and/or our current bodily state. It is
perception of the mental. Such “inner” perception is traditionally called introspection, or
introspective awareness. (Armstrong 1981a: 60)

This inner-sense theory sounds like scientifically enlightened common sense, and
that is how Paul Churchland presents it. After remarking that self-knowledge
requires that “one must apprehend [one’s mental states] within some conceptual
framework or other that catalogs the various different types,” and that “presum-
ably one’s ability to discriminate subtly different types of mental states improves
with practice and increasing experience,” he writes:
A novelist or psychologist may have a running awareness of her emotional states that is
far more penetrating than the rest of us enjoy. A logician may have a more detailed
consciousness of the continuing evolution of his beliefs . . .
In these respects, one’s introspective consciousness of oneself appears very similar to
one’s perceptual consciousness of the external world. The difference is that, in the former
case, whatever mechanisms of discrimination are at work are keyed to internal states
instead of to external ones. The mechanisms themselves are presumably innate, but one
must learn to use them: to make useful discriminations and to prompt insightful judg-
ments. Learned perceptual skills are familiar in the case of external perception.
A symphony conductor can hear the clarinet’s contribution to what is a seamless
orchestral sound to a child. An astronomer can recognize the planets, and nebulae, and
red giant stars among what are just specks in the night sky to others . . . And so forth. It is
evident that perception, whether inner or outer, is substantially a learned skill. Most of
that learning takes place in our early childhood, of course: what is perceptually obvious to
us now was a subtle discrimination at age two. But there is always room to learn more.
In summary, self-consciousness, on this contemporary view, is just a species of
perception: self-perception. It is not perception of one’s foot with one’s eyes, for example,
but is rather the perception of one’s internal states with what we may call (largely in
ignorance) one’s faculty of introspection. Self-consciousness is thus no more (and no less)
mysterious than perception generally. It is just directed internally rather than externally.
(Churchland 2013: 120–2)1

As Churchland says, the mechanism (or mechanisms) by which we perceive
our inner states is unknown. Given the present state of ignorance, there would
appear to be little that the philosophical proponents of inner sense can contribute

1 See also Russell 1912/98: 28, Freud 1938: 544, Armstrong 1968: ch. 15, 1981b, Lycan 1987: ch. 6,
1996: ch. 2, Nichols and Stich 2003: 160–4, Ten Elshof 2005, Goldman 2006: ch. 9.

to our understanding of self-knowledge beyond a few pages motivating their
theory, and some general discussion of the epistemology of perception. Philo-
sophers might have something more specific to say once the neural mechanisms
of self-knowledge come into view, but the first investigators on the scene should
be the psychologists and neuroscientists. Not surprisingly, the inner-sense theory
has attracted few book-length defenses.2
As mentioned at the end of Chapter 1, this book is not one of those few: unlike
the inner-sense theory, the account of self-knowledge to be defended here is
economical and inferential. However, like the inner-sense theory, it is detectivist:
causal mechanisms give us access to an independently existing mental realm (see
section 1.4). Since most of the standard objections to the inner-sense theory also
(in effect) target detectivism, that is all the more reason for making an early
assessment of it.

2.2 Against inner sense


The above quotation from Churchland illustrates why the inner-sense theory is of
considerable initial appeal. As Shoemaker remarks, it “can seem a truism” (1994:
223). However, it is not infrequently taken to be a crass mistake. For instance,
according to Wright:
The privileged observation explanation [of “first-third-person asymmetries in ordinary
psychological discourse”] is unquestionably a neat one. What it does need philosophy to
teach is its utter hopelessness. (Wright 2000: 24)3

Pace Wright, this chapter argues that the leading objections leave the inner-sense
theory pretty much unscathed. However, as explained at the end, there are some
residual puzzles.
2.2.1 The object perception model and the broad perceptual model
Let us start with something that is best thought of as some useful ground-
clearing, rather than an objection. Shoemaker (1994) presents a comprehensive
tabulation of the disanalogies between inner sense and paradigms of “outer

2 A short book defending the inner-sense theory is Ten Elshof 2005; significantly, it is mostly
devoted to replies to objections.
3 Wright is actually talking about a “full-blown Cartesian” version of the inner-sense theory, but
it is clear that he thinks that the sensible-sounding contemporary version propounded by Armstrong
and Churchland is equally hopeless. For similar dismissals of the inner-sense theory see the
quotations from Davidson and Burge in Boghossian 1989: 17.

sense.” According to the object perception model, as Shoemaker calls it, inner
sense is like ordinary (visual) perception in the following four respects. It:
(a) “provides one with awareness of facts . . . by means of awareness of
objects.” In the visual case: “I am aware (I perceive) that there is a book
before me by perceiving the book”;
(b) “affords ‘identification information’ about the object of perception. When
one perceives one is able to pick out one object from others, distinguishing
it from the others by information, provided by the perception, about both
its relational and nonrelational properties”;
(c) “standardly involves perception of . . . intrinsic, nonrelational properties.”
In the visual case: “[t]o perceive that this book is to the right of that one
I must perceive, or at least seem to perceive, intrinsic properties of the two
books, e.g. their colors and shapes”;
(d) allows its objects to be “objects of attention. Without changing what one
perceives, one can shift one’s attention from one perceived object to
another, thereby enhancing one’s ability to gain information about it”
(1994: 205–6).
Shoemaker offers a battery of related objections against the object perception
model, and concludes that it is thoroughly mistaken. However, as he goes on to
emphasize, the failure of the object perception model does not dispose of the idea
that we detect our mental states by means of some kind of causal mechanism.
More specifically, the objections do not impugn what Shoemaker calls the broad perceptual
model, the “core stereotype” of which

consist[s] of two conditions, one of them (call it the causal condition) saying that our
beliefs about our mental states are caused by those mental states, via a reliable belief-
producing mechanism, thereby qualifying as knowledge of those states and events, and
the other (call it the independence condition) saying that the existence of these states and
events is independent of their being known in this way, and even of there existing the
mechanisms that make such knowledge possible. (1994: 224–5)

The broad perceptual model is basically equivalent to what we have been calling
detectivism (section 1.4). Shoemaker takes Armstrong to simply be defending the
broad perceptual model, rather than pressing any especially close comparison with
senses like vision. Why insist on the label ‘perceptual,’ though? Shoemaker observes
that the object perception model does not fit many paradigm cases of perception:

The sense of smell, for example, does not ordinarily put one in an epistemic relation to
particular objects about which it gives identification information. Smelling a skunk does
not put one in a position to make demonstrative reference to a particular skunk, and there
is no good sense in which it is by smelling a particular skunk that one gets the information
that there is, or has been, a skunk around (for one thing, nothing in what one smells tells
one that one skunk rather than several is responsible for the smell). Even in the case of
normal human vision, my conditions (a)–(d), those that are distinctive of the object
perception model, do not always hold. I may see motion in the periphery of my field of
vision without perceiving any of the intrinsic features of the moving object, and without
gaining any “identification information” about it. Moreover, in applying the notion of
perception to animals of other species we seem willing to count as perception a means of
obtaining information about the environment that is not keyed to particular items in the
environment—e.g., a detector in fish that is sensitive to the oxygen level in the water, or
the ability to sense that there are predators of some sort about.
(1994: 222–3, conditions relettered)

For the reasons Shoemaker gives, it is not necessary that the inner-sense theory
conform to the object perception model. But it does not follow that conforming
to the “broad perceptual model” is sufficient for being an inner-sense theory. And
it isn’t: section 1.4 pointed out that Ryleanism is a detectivist theory—that is, it
conforms to the broad perceptual model.4 According to the Rylean, my know-
ledge of my own mind is obtained by directing my faculty for third-person
mindreading onto myself. I find out that you believe that it’s raining by observing
what you say and do, and I find out that I believe that it’s raining by the same
method. My belief that you believe that it’s raining is caused by the fact that you
believe that it’s raining, via your sayings and doings; similarly, my belief that
I believe that it’s raining is caused by the fact that I believe that it’s raining, so
Shoemaker’s causal condition is met. And (we may suppose) my believing that
it’s raining is not dependent in any exciting way on my possessing a third-person
mindreading faculty, so the independence condition is met also.
The causal and independence conditions are thus too weak: they are jointly
insufficient for the inner-sense theory. At least one missing condition is extrava-
gance, which automatically comes with the idea that there is a distinctive per-
ceptual faculty responsible for self-knowledge. Ryleanism is economical, not
extravagant, because its method of self-knowledge is redeployed from another
domain, namely the minds of others (section 1.4). So a better label than ‘the
broad perceptual model’ is extravagant detectivism. Since Shoemaker clearly did
not intend Ryleanism to be a version of the inner-sense theory, let us make this
adjustment henceforth: the broad perceptual model is extravagant detectivism.
The mechanism of inner sense is specialized for self-knowledge—it is not
deployed elsewhere.
Now on to eight objections.

4 As discussed in section 1.3.6, Ryle himself may well not be a Rylean.

2.2.2 Objection 1: inner sense can’t detect extrinsic properties (Boghossian)


In “Content and Self-Knowledge” (1989), Boghossian argues that the “apparently
inevitable” thesis of externalism about mental content leads to the absurd con-
clusion that “we could not know our own minds” (5), thus presenting a paradox,
which he leaves unresolved.5 Part of Boghossian’s case involves ruling out inner
sense (or “inner observation”) as a source of self-knowledge. And “inner obser-
vation” is construed quite loosely, along the lines of Shoemaker’s “broad percep-
tual model”:
It makes no difference to the argument of this paper if you think of inner observation as
amounting to traditional introspection, or if you think of it as amounting to the operation
of some Armstrong-style ‘brain scanner’. What is crucial to inner observation models of
self-knowledge is the claim that beliefs about one’s own thoughts are justified by the
deliverances of some internal monitoring capacity, much like beliefs about the external
environment are justified by the deliverances of an external monitoring capacity (percep-
tion). (1989: 23–4, n. 1)

According to externalism, the property of believing that p (for many fillings for ‘p’)
is extrinsic or, in Boghossian’s terminology, relational. For instance, recalling
Putnam’s Twin Earth thought experiment (Putnam 1975), two individuals may
be intrinsically or internally just alike, with only one having the property of
believing that water is wet. According to Boghossian, externalism and the inner-
sense theory are incompatible because
you cannot tell by mere inspection of an object that it has a given relational or extrinsic
property. This principle is backed up by appeal to the following two claims, both of which
strike me as incontestable. That you cannot know that an object has a given relational
property merely by knowing about its intrinsic properties. And that mere inspection of an
object gives you at most knowledge of its intrinsic properties. (1989: 15–16)

To this it might be objected that one can tell by “mere inspection” that a dime one
is holding has the extrinsic property of being worth ten cents. Boghossian replies
that this is not mere inspection, because
the process by which we know the coin’s value is . . . inference: you have to deduce that the
coin is worth ten cents from your knowledge of its intrinsic properties plus your
knowledge of how those intrinsic properties are correlated with possession of monetary
value. And our knowledge of thought is not like that. (1989: 16–17)

5 In fact, Boghossian’s “apparently inevitable” thesis is weaker (1989: 14–15), but that raises
complications that can be ignored here.

Boghossian’s principle, then, is that if one perceives (only) an object o, and has no
relevant background information, one cannot thereby come to know that o is F,
where Fness is an extrinsic property of o. That principle may well be questioned,
but since the more instructive objections are elsewhere, let us grant it.6
Boghossian mostly discusses “thoughts,” but the paradox is supposed to cover
“standing states” like belief (21); it is simpler just to consider this case. Suppose a
person S believes that p. We are trying to use Boghossian’s principle to show that,
given the inner-sense theory and externalism, S cannot know (by using her inner
sense) that she believes that p. What is the object o, and the extrinsic property
Fness, to which we can apply Boghossian’s principle? Since the principle delivers
the result that S cannot know that o is F, the proposition that o is F should be
equivalent to the proposition that S believes that p, or else entailed by it. Given
this constraint, there are only two serious pairs of candidates. The first pair is the
obvious one: o=S, and Fness=believing that p. The second identifies o with “S’s
belief that p,” taken to be a psychological particular (as opposed to the state/
property of believing that p, or the fact that S believes that p), and Fness with
the property of having the content that p (or, better, being a belief with the
content that p). Let us consider this second pair first.
Granting for the sake of the argument that there are such particulars as “S’s
belief that p” (what some philosophers would call ‘a token belief ’), and that we
can become introspectively aware of them, they do not present themselves as
anything other than the beliefs that they are.7 Gazing into my own mind, I do not
espy an item—which is in fact my belief that p—that I might misidentify as
another belief, let alone as a hope, intention, or pang of jealousy. As Shoemaker
puts it (writing before the Putin era):
I am aware that I believe that Boris Yeltsin is President of Russia. It seems clear that it
would be utterly wrong to characterize this awareness by saying that at some point
I became aware of an entity and identified it, that entity, as a belief that Boris Yeltsin
holds that office. (1994: 213)

6 Presumably “perceiving (only) an object o,” for the purposes of the principle, includes cases
which would be naturally classified as those of “seeing a single object”; for example, seeing a dime
against a uniformly colored background. But in such a case one can (apparently) come to know,
without appeal to supplementary premises, that the dime has a variety of extrinsic properties—for
example, that it is moving/partly in shadow/tilted to the left. More importantly, the science of
perception gives us no reason to suppose that Boghossian’s principle (at least in the unqualified
version stated in the text) is correct.
7 If S believes that p, there are three (relatively) uncontroversial relevant entities: S, the state/
property of believing that p, and the fact that p (which is assumed in this book to be the true
proposition that p). What is unclear is whether we need to recognize an additional entity, “S’s belief
that p.” It may yet prove to be the fact that S believes that p in disguise, although a defense of this
view has to engage arguments to the contrary (for instance, those in Moltmann 2013: ch. 4).

Of course, this does not mean that I cannot mistakenly take myself to believe
that Yeltsin is President, just that this cannot take the form of misidentifying, say,
my belief that Gorbachev is President or my hope that Yeltsin is President for
my belief that Yeltsin is President.
Now objects perceptible by vision—moreover, by one’s outer senses in general—
invariably admit of misidentification: looking at a dime, I might misidentify it as a
penny, or as a silver ellipse, or whatever. And Boghossian’s principle is apparently
motivated by considering humdrum cases of vision. So the right conclusion to
draw is not that S’s inner sense fails to deliver knowledge that a mental particular
has the extrinsic property of having the content that p, but that so-called inner
sense is quite unlike vision, and indeed quite unlike any outer sense. In particular, it
does not admit of misidentification (at least when trained on one’s beliefs); given
this difference, there is no evident reason to take Boghossian’s principle to apply to
inner sense.
Boghossian’s objection fails, then, if o is taken to be S’s belief that p. What
about the first pairing: o=S, and Fness=believing that p? Here at least there is no
doubt about the existence of the object but, just like a “token belief,” it seems
equally invisible to anything resembling ordinary external perception. As Hume
famously pointed out, “when I enter most intimately into what I call myself,”
looking for the subject of my mental states, I don’t find him (1740/1978: 252). Put
more cautiously, if inner sense affords me awareness of myself, it is quite unlike
any outer sense. The earlier point about the absence of misidentification applies
even more clearly here. As Shoemaker puts it, “if I have my usual access to my
hunger, there is no room for the thought ‘Someone is hungry all right, but is it
me?’” (1994: 211).
Boghossian’s objection also fails for the first pairing, and so fails to under-
mine the inner-sense theory.

2.2.3 Objection 2: inner sense is like clairvoyance (Cassam)


Cassam sets out Boghossian’s paradox in the form of a trilemma:
[K]nowledge of our own attitudes can only be:
1. Based on inference.
2. Based on inner observation.
3. Based on nothing. (Cassam 2014: 141)

According to Boghossian, there are serious objections to each of the three
options, hence the paradox. Cassam, however, sees no paradox. He argues against
(2) and (3), concluding that “inferentialism is the only game in town” (141).

Cassam’s case against (2) starts by acknowledging that Armstrongian self-
scanning is “by far the best bet” (132) for the inner-sense theorist. But the
problem is that Armstrong’s view

makes introspection out to be fundamentally no different, epistemologically or phenom-
enologically, from clairvoyance. What I mean by clairvoyance is the kind of thing
Laurence BonJour has talked about over the years. For example, there is the case of
Norman who, for no apparent reason, finds himself with beliefs about the President’s
whereabouts; Norman has neither seen nor heard that the President is in New York but
believes that the President is in New York, and his belief is reliable. Even if it’s right to
describe Norman as ‘knowing’ that the President is in New York his knowledge is very
different from ordinary perceptual knowledge. When you know that the President is in
New York by seeing him in New York you are aware of the President, and your knowing
requires a degree of cognitive effort, even if it’s only the minimal effort of looking and
paying attention to where the President is. In contrast, Norman is not aware of the
President and his belief is not the result of any cognitive effort on his part. The belief that
the President is in New York simply comes to him; he has no justification for believing
that the President is in New York and no idea why he believes this or how he could
possibly know where the President is.
. . . perceptual knowledge is strikingly different from clairvoyant ‘knowledge’ so it
doesn’t exactly help the perceptual model to say that self-knowledge is like clairvoyance.
If perceptual knowledge that P requires you to be conscious that P as well as a degree of
cognitive effort then the net effect of admitting that Norman’s knowledge that he believes
the President is in New York is epistemologically and phenomenologically on a par with
his knowledge that the President is in New York is to suggest that his self-knowledge is not
perceptual. It lacks some key features of ordinary perceptual knowledge and only comes
out as ‘perceptual’ on an impoverished view of perceptual knowledge which has very little
going for it. (2014: 134–6; footnote omitted)

As Shoemaker observed, Armstrongian self-scanning doesn’t fit the stereotype of
perception—in particular, it’s not a version of the object perception model.
Cassam is in effect reinforcing one of the morals of the previous discussion of
Boghossian, that the label ‘broad perceptual model’ is not particularly apposite
either. Admittedly, Armstrongian self-scanning has one significant commonality
with vision, olfaction, and the rest—namely, it is an extravagant detection
mechanism—but in many other respects it is so unlike standard perceptual faculties
that it is doubtfully classified as a form of perception at all. Armstrongian
self-scanning, let us grant, is not a form of “inner observation,” taking that phrase
as seriously as Cassam intends. We may also assume it does not fall under (1) or
(3). So Cassam has, in effect, shown that another option needs to be added to
the list:

4. Based on Armstrongian self-scanning.


Although Cassam floats the possibility that Armstrongian self-scanning can’t
deliver knowledge, he does not pursue this line of objection, leaving little reason
to reject (4).8 So the inner-sense theory—at least in its Armstrongian version—
survives.
2.2.4 Objection 3: inner sense is incompatible with infallibility
An avowal, in the terminology of Wright 2000, is a “non-inferential self-ascription”;
phenomenal avowals “comprise examples like ‘I have a headache’, ‘My feet are sore’,
‘I’m tired’, ‘I feel elated’, ‘My vision is blurred’, ‘My ears are ringing’, ‘I feel sick’, and
so on” (14).9 According to Wright, these self-ascriptions have three distinctive
marks, the first10 of which is that
they are strongly authoritative. If somebody understands such a claim, and is disposed
sincerely to make it about themselves, that is a guarantee of the truth of what they say.
A doubt about such a claim has to be a doubt about the sincerity or the understanding of
the one making it. Since we standardly credit any interlocutor, in the absence of evidence
to the contrary, with sincerity and understanding, it follows that a subject’s actually
making such a claim about themselves is a criterion for the correctness of the corres-
ponding third-personal claim made by someone else: my avowal that I’m in pain must be
accepted by others, on penalty of incompetence, as a ground for the belief that I am.
(2000: 14–15)

Wright certainly has a point. If I go to the doctor complaining of ringing in the
ears, the doctor may well correct me on my diagnosis of excessive earwax, but she is
most unlikely to question my symptoms. And if it is impossible to falsely believe
that my ears are ringing, then the inner-sense theory—at least as applied to
phenomenal states, the conditions phenomenal avowals ascribe—cannot be right.
Detection mechanisms, no matter how well constructed, are invariably subject to false
positives: smoke detectors can squawk ‘Smoke!’ in the absence of smoke, thermostats
can misread the temperature, visual systems can get colors, shapes, and motions
wrong. Someone who relies on a smoke detector, or a thermometer, or her visual
system cannot be completely immune from false beliefs about smoke, temperature, or
the colors, shapes, and motions of objects in her environment.11

8 For discussion of BonJour’s clairvoyance example (and related examples), which could be
co-opted by a defender of Armstrongian self-scanning, see Lyons 2009: ch. 5.
9 Wright’s official explanation of avowals has them also being “authoritative” (14). He does
indeed think they are authoritative (see Wright 2000 quoted in the main text), but it is clear that the
definition of ‘avowal’ is intended to be neutral on this point.
10 The three marks will be discussed in a different order than Wright’s.
11 This point—that there is a conflict between the inner-sense theory and a claim like Wright’s—
was recognized very early on (e.g. Armstrong 1963: 419); Wright does not press the present
objection, however.

The issue is the status of infallible access, briefly mentioned in Chapter 1
(section 1.3.2), appropriately restricted to avowals. Wright’s claim of strong
authority can be put as the following version of infallible access:
INA Necessarily, if S avows, with sincerity and understanding, ‘I am in
phenomenal state M,’ then she is in M.
If INA is true, the inner-sense theory is false. So, is it true? One qualification
should be made at the outset: as Wright notes (2000: 15), there are situations in
which my claim to have sore feet is false. Suppose I am waking up after surgery
that (unbeknown to me) amputated my frostbitten feet. I feel soreness in
my “phantom feet,” and so mistakenly claim to have sore feet. Clearly this only
affects the letter and not the spirit of INA, since my revised claim to feel soreness
in my apparent feet would not be questioned. Hence cases like this, where the
claim turns out to be false because a body part is misidentified, can be excluded.
Despite the initial appeal of INA, it does not hold up under closer scrutiny.
First, there are everyday counterexamples. Someone who suffers from ringing in
the ears (tinnitus) might be cured of her condition. Fearful that it might recur,
when she hears buzzings and hissings in her external environment, which are in
fact readily discriminable from the apparent buzzing and hissing “in the ear”
distinctive of tinnitus, she is disposed to judge that her condition has recurred,
and that her ears are again ringing. When she realizes that the buzzings and
hissings are produced by a nest of bees and a knot of snakes, she withdraws her
judgment and admits to an error. Reflecting on my recent trip on a rollercoaster,
I may conclude that my shouted avowal of elation while rattling down a steep
incline was actually a response to fear; doubting my previous judgment is hardly a
“sign of incompetence.” Indeed, my conclusion might well be right, although of
course in practice the issue is very hard to decide. Yet again, a hypochondriac
who constantly (and sincerely) complains of sore feet could be reasonably
suspected of unwittingly exaggerating her ordinary sensations of pressure.
Second, there are pathological cases. Naively, one would not have thought it
possible for someone to believe that he is dead, or that what is manifestly his own
hand is the hand of someone else, or (of particular relevance to the present issue)
that he can see even though he is in fact blind, or hear even though he is deaf. But
such cases are actual.12 There appear to be very few limits on the absurd things
people can believe, given the right sort of neurological damage.

12 The belief that one is dead is associated with Cotard’s syndrome (Debruyne et al. 2009). Denial
of ownership of part of one’s body is somatoparaphrenia (Ardila 2016). Blindness denial is Anton’s
syndrome, which also comes in an auditory version (Boylan et al. 2006).

A third and final objection to INA appeals to the sort of examples given by
Burge in “Individualism and the Mental” (1979), which he used to argue for a
social version of content externalism, on which a person’s “mental contents
[may] differ, while his entire physical and non-intentional mental histories,
considered in isolation from their social context, remain the same” (106). Burge’s
most famous example concerns ‘arthritis’ but, as he notes, words of other kinds
would have done just as well, for instance color terms:

People sometimes make mistakes about color ranges. They may correctly apply a color
term to a certain color, but also mistakenly apply it to shades of a neighboring color.
When asked to explain the color term, they cite the standard cases (for ‘red’, the color of
blood, fire engines, and so forth). But they apply the term somewhat beyond its conven-
tionally established range—beyond the reach of its vague borders. They think that fire
engines, including that one, are red. They observe that red roses are covering the trellis.
But they also think that those things are a shade of red (whereas they are not). Second
looks do not change their opinion. But they give in when other speakers confidently
correct them in unison. (1979: 100)

Imagine someone, Scarlett, who misapplies ‘red’ as Burge describes. Scarlett says
‘That is red,’ looking at and demonstrating a reddish-orange carrot in good light.
Her utterance expresses her belief that the carrot is red; the carrot is orange, not
red, so she speaks and believes falsely. She also says ‘That looks red to me,’
looking at and demonstrating the carrot. Her utterance expresses her belief that
the carrot looks red to her; the carrot does not look red to her, but rather reddish-
orange, so she speaks and believes falsely. Since ‘That looks red to me’ should
evidently be included in Wright’s list of phenomenal avowals, this is a counter-
example to this instance of INA:
INA(RED) Necessarily, if S avows, sincerely and with understanding, ‘x looks
red to me,’ then x does look red to her.
Scarlett avows, sincerely and with understanding, that the carrot looks red to her,
but it doesn’t.
It might be replied that this is not a counterexample, on the grounds that
Scarlett does not avow ‘That looks red to me’ with “understanding.” After all,
Burge himself comments that this style of thought experiment crucially relies on
the possibility of “attribut[ing] a mental state or event whose content involves a
notion that the subject incompletely understands” (1979: 107). In an alternative
formulation, the thought experiment relies on “someone’s having a propositional
attitude despite an incomplete mastery of some notion in its content” (111).
However, as these quotations indicate, Burge’s “incomplete understanding” is
not, in the first instance, incomplete understanding of a word. Burge glosses his
usage as follows: “Understanding the notion of [redness] comes roughly to
knowing what [redness] is” (102).13 Now there is an intuitive sense in which
Scarlett “doesn’t know what redness is,” but it seems compatible with her
understanding the word ‘red,’ and it is this familiar kind of linguistic under-
standing that figures in INA(RED). By everyday standards, Scarlett does understand
‘red’—if she didn’t, there would be little motivation to agree (with Burge) that her
utterance of ‘Fire engines are red’ expresses her belief that fire engines are red.
Does she, perhaps, only “incompletely” understand ‘red’? It is hard to make that
charge stick. Scarlett knows that ‘Fire engines are red’ means that fire engines are
red, that ‘red’ applies to an object iff it’s red, and so forth. We may even imagine
that Scarlett is a professional translator of books on color written in Chinese into
English, renowned for her sensitivity to the nuances of English color terminology.
It is somewhat strained to insist that she nevertheless understands ‘red’ only
“incompletely.”
Admittedly, Scarlett is wrong about certain visually presented shades of reddish-
orange—she thinks they are shades of red. Put in the material mode, she doesn’t
know what redness is; put in the formal mode, she mistakenly applies ‘is a shade of
red’ to some reddish-orange shades. But of course misapplications of the word ‘X’
stemming from failure to know what X is do not thereby disqualify one from
knowing what ‘X’ means, on pain of word-understanding being vanishingly rare.
One can confusedly think that water is an element and yet understand ‘water’
perfectly well.
Trying to effect a repair by qualifying ‘understanding’ with ‘complete’ is
ineffective, and the counterexample to INA(RED) (and so to INA) stands. We should
seek another replacement for INA(RED), one also incompatible with the inner-
sense theory, but this time immune to Scarlett-type counterexamples.
In fact, the example of Scarlett immediately points to such a replacement.
Scarlett’s word ‘red’ only denotes the property redness because Scarlett is embed-
ded in a linguistic community which includes people who do not “make mistakes
about color ranges.” That, of course, is the very feature of Burge’s examples that
enables him to draw the externalist conclusion: keeping the subject internally
constant, we can change her linguistic community and thereby change what she
believes.
Now those who are not mistaken about color ranges are precisely those
who “fully understand” the notion red. That is, they are those who “know what
redness is.” Let us, then, simply amend INA(RED) by adding that S knows what
redness is:
INA(red)* Necessarily, if S knows what redness is, and avows, sincerely and
with understanding, ‘x looks red to me,’ then x does look red to her.

13 The quotation actually concerns a different example, “the notion of contract,” but Burge is illustrating a general claim.

Clearly INA(red)* is immune to Scarlett-type counterexamples, since such people
falsify the antecedent. It is also incompatible with the inner-sense theory. None-
theless, it is false, or at any rate there is no obvious reason to suppose it true.
I may know that water is H2O (and so in that sense know what water is), but that
is compatible with my making a mistake, and saying sincerely and with under-
standing, ‘Water contains nitrogen.’ (Thinking back on what I said, I may well
realize that I blundered.) Similarly, those who know what redness is, and who
typically refuse to apply ‘is a shade of red’ to reddish-orange shades, are not
thereby absolutely incapable of error. They may occasionally become confused,
and misapply ‘is a shade of red,’ just like Scarlett. (Unlike Scarlett, they need not
seek out others to correct their mistake.) And this is apparently as good as it is
going to get: there is no need for Scarlett’s linguistic community to contain
anyone who is infallible for that community’s word ‘red’ to have its usual English
meaning. The fallible users, who know what redness is, are enough.
The avowal ‘x looks red to me’ is nothing special—it is just a convenient
example. There is nothing in the vicinity of strong authority to threaten the
inner-sense theory.

2.2.5 Objection 4: inner sense is incompatible with self-intimation


The near-converse of infallibility is self-intimation (section 1.3.2). A version of
self-intimation restricted to phenomenal avowals is:
S-IA Necessarily, if S is in phenomenal state M, then S believes/knows that she
is in M (and so, assuming the appropriate linguistic capacities, is disposed
to avow ‘I am in M’).
This is (close enough to) another of Wright’s “three marks” of avowals. Like INA,
S-IA is incompatible with the inner-sense theory. Detection mechanisms are
invariably subject to false negatives: smoke and low temperatures may go
undetected by smoke detectors and thermostats, and colors, shapes, and motions
of objects may go undetected by visual systems.
If S-IA is true, the inner-sense theory is false. So, is it true? First, there seem to
be everyday counterexamples. It is far from absurd to suppose that I failed to
notice ringing in my ears because I was distracted by a charging bull, or that
I failed to notice that something looked green to me because I was attending to
the shapes of the objects before my eyes.14 Take a red playing card and slowly
move it to the periphery of your visual field, keeping your fixation point fixed.
Color vision becomes progressively worse the more peripheral the stimulus, and
eventually gives out altogether. So when did the card stop looking red? It’s
surprisingly hard to say.15
Second, there are pathological cases, notably blindsight. Someone with blindsight
denies that he has visual experience (usually in a certain region of his visual
field). He is not disposed to avow ‘x looks red to me,’ even though he has a
residual visual capacity, which in some cases includes color discrimination. It is at
least arguable that something may look red to him despite his refusal to avow that
it does. Of course, blindsight is a complex and controversial topic, but the point is
simply that pathological cases cannot be dismissed from the armchair.16
Third, the knowledge version of S-IA (obtained by deleting ‘believes’ from
‘believes/knows’) is tantamount to the claim that phenomenal states are (in the
terminology of Williamson 2000: ch. 4) “luminous,” against which Williamson
mounts some compelling arguments. Williamson’s criticisms depend on a feature
of knowledge that is not shared by belief, so a retreat to the belief version of S-IA
is formally on the cards. But it is not very well motivated. To the extent that S-IA is
tempting, it is in its stronger knowledge version (which is the one Wright himself
endorses). To see this, note that the initially tempting thought can be put this
way. If I am in phenomenal state M then my avowal ‘I am in M’ is not merely
true: rather, it is also permissible for me to avow ‘I am in M,’ thereby asserting that
I am in M. But, given the widely accepted thesis that it is permissible to assert
P only if one knows P,17 the initially tempting thought amounts to the knowledge
version of S-IA.
In short: the objection from self-intimation is no better than the objection
from infallibility.
2.2.6 Objection 5: inner sense leads to alienated self-knowledge (Moran)
The previous two objections are hardy perennials, now quite dated; a more recent
objection, due to Moran, is this. The inner-sense theory offers “a picture of
self-knowledge as a kind of mind-reading applied to oneself, a faculty that
happens to be aimed in one direction rather than another.” However, “our
ordinary self-knowledge [is] different from this sort of self-telepathy” (Moran
2001: 91). In particular:

[I]n ordinary circumstances a claim concerning one’s attitudes counts as a claim about
their objects, about the world one’s attitudes are directed on . . . the expression of one’s
belief carries a commitment to its truth. (92)

14 Retreating from ‘knows’ to ‘is in a position to know’ doesn’t help much: the bull might well be so distracting that I am not even in a position to know that my ears are ringing.
15 See Dennett 1991: 53–4, Hansen et al. 2009.
16 For an overview, see Overgaard and Mogensen 2015.
17 See Williamson 2000: ch. 11.

That is, if in ordinary circumstances I say ‘I believe that it’s raining,’ I am not
disinterestedly reporting on the mental states of someone who happens to be me.
I am also committed to the meteorological hypothesis that it’s raining. To put the
point in terms familiar from Moore’s paradox, I am not prepared to follow up my
psychological report with ‘ . . . but it isn’t raining.’ As Moran notes, in special
circumstances I can do that—perhaps one sunny day my therapist convinces me
that my obsessive umbrella-carrying is best explained by the hypothesis that
I believe that it’s raining.18 At the therapist’s, I may say ‘I believe that it’s raining’
without “avow[ing] the embedded proposition . . . itself” (85). But this is quite
atypical: a worry suggested by Moran’s remarks seems to be that all knowledge
of one’s beliefs would be of this “noncommittal” sort, if the inner-sense theory
were correct.
This worry, at least, is readily defused. Let us say that S’s belief that p is
alienated just in case the belief is to a significant extent inferentially isolated—
in particular, it is not expressible by S in unembedded speech—and that this
inexpressibility would persist even if S is convinced that she does believe that
p (perhaps by a persuasive therapist). Thus S might assert that she believes that p
without “avowing the embedded proposition itself.” Suppose now that S’s belief
that it’s raining is unalienated—in particular, S asserts that it’s raining when
queried, and the belief functions in the usual way to guide her present action and
future planning. Further suppose that the inner-sense theory is true, and that S’s
faculty of self-telepathy delivers the verdict that she believes that it’s raining.
Precisely because the belief detected is unalienated, her claim that she has this
belief will carry “a commitment to its truth”—she will not say ‘Fancy that,
I believe that it’s raining! I wonder if that belief of mine is true?’19
A less straightforwardly answered worry, perhaps closer to what Moran has in
mind, is that if the inner-sense theory is correct, one could sometimes discover
one had the (alienated) belief that it’s raining by using one’s inner eye, and so
without appeal to behavioral (or imagistic or affective) clues. Indeed, presumably
one could sometimes discover this while also having the unalienated belief that it
isn’t raining.

18 Another—especially cute—example is in Crimmins 1992.
19 Cf. Cassam 2014: 157.

In fact, this kind of thing never seems to happen. If someone cannot
see any apparent signs suggesting that she believes that p, we will not find her
saying ‘I believe that p but not-p.’ So the inner-sense theorist owes us an
explanation of why not. Presumably the explanation lies in the large functional
difference between alienated and unalienated beliefs, but the details will have to
wait until more is known about the mechanism of inner sense. The inner-sense
theorist has to issue a promissory note here, but there is no obvious reason to
suppose that it cannot be redeemed.

2.2.7 Objection 6: inner sense cannot explain first-person authority (Finkelstein)
Finkelstein (2003) gives an argument against the “new detectivists”—inner-sense
theorists like Armstrong and Churchland—that has some similarity to Moran’s
complaint. Finkelstein mentions belief, but his chief example is anger. He begins
by drawing a distinction between “conscious” and “unconscious” anger (20–2).
Finkelstein is not completely explicit about what this distinction comes to, but
a clear case of the former is when Jim, after discovering that Pam stole his
paper clips, feels angry: he is flushed, his heart rate rises, the unjust theft of his
paper clips occupies his attention. A clear case of the latter is when Jim, on the
therapist’s couch, realizes that his pattern of behavior toward Pam over the last
few months is best explained by the hypothesis that he is angry at her for getting
engaged to someone else. (The conscious/unconscious distinction is thus close to
the unalienated/alienated distinction just discussed.)
Next, Finkelstein says the following about “first-person authority”:

There’s an asymmetry between speaking about someone else’s anger and speaking about
one’s own. I am able to ascribe mental states to myself responsibly without being able to
cite evidence in support of the ascriptions. This is a central feature of first-person
authority. (2003: 21)

The mechanism of inner sense, as naturally understood, simply delivers knowledge
of one’s mental life—it does not deliver evidence about some other subject
matter, on which knowledge of one’s mental life could be based. (What could that
other subject matter possibly be?) The inner-sense theory is admirably placed,
then, to explain Finkelstein’s “central feature” of first-person authority. Not
according to Finkelstein, however:

In order for me to speak with first-person authority about some mental state, it’s not
enough that I know about it; it must be conscious. What you [the inner-sense theorist] can
explain, however, simply by positing a mechanism that enables us to detect our own states
of mind is, at most, our knowledge of them. You cannot thereby explain how it is that we
come to be consciously angry (or afraid, or intending to visit Paris . . . ). And since merely
knowing one’s own state of mind is compatible with not having first-person authority
about it, you cannot explain first-person authority either. (2003: 22)

(For convenience we may take Finkelstein’s central feature—inability to cite
evidence—to be equivalent to first-person authority.) The argument in this
passage can be set out as follows:
P1. One has first-person authority about one’s anger only if one’s anger is
conscious.
P2. The inner-sense theory cannot explain why one’s anger is conscious.
C. The inner-sense theory cannot explain why one has first-person author-
ity about one’s anger.
A true theory may explain less than hoped, so could the inner-sense theorist
simply accept C? No, because (as noted above) the inner-sense theory, if true,
does seem to explain first-person authority. So the inner-sense theory is in
difficulty if the argument establishes C. Moreover, P1 is at least defensible,
and P2 should be uncontroversial. The inner-sense theorist is not in the business
of explaining why some states are conscious, just as she is not in the business of
explaining why some states are unalienated. Fortunately for the inner-sense
theorist, however, the conclusion has no support from the premises.
Consider the following analogy. Sometimes one can ascribe colors to nearby
tomatoes responsibly without being able to cite evidence in support of the
ascriptions. As we can put it, sometimes one has “third-person authority”
about the color of tomatoes. Can the “theory of vision” explain this? Here is an
argument, parallel to Finkelstein’s, for the conclusion that it can’t:
P1*. One has third-person authority about the color of a tomato only if the
tomato is illuminated.
P2*. The theory of vision cannot explain why the tomato is illuminated.
C*. The theory of vision cannot explain why one has third-person authority
about the color of a tomato.
At least for something that deserves to be called ‘the theory of vision,’ the
conclusion is false and the premises are true. The argument is therefore invalid
and, further, the premises don’t even suggest that the conclusion is true.
In general, explaining X is one thing; explaining Y, a necessary condition of X,
is quite another: H may explain X without explaining Y. Finkelstein’s argument
is equally defective, and so fails to threaten the inner-sense theory.
2.2.8 Objection 7: the deliverances of inner sense are not baseless (McDowell)
Finkelstein’s “central feature” of first-person authority is similar to the last of
Wright’s three distinctive marks of phenomenal avowals:
[T]hey are groundless. The demand that somebody produce reasons or corroborating
evidence for such a claim about themselves—‘How can you tell?’—is always inappropri-
ate. There is nothing they might reasonably be expected to be able to say. In that sense,
there is nothing upon which such claims are based. (2000: 14)

One of the attractions of the inner-sense theory, Wright says, is that it seems to
explain (inter alia) groundlessness:
As an analogy, imagine somebody looking into a kaleidoscope and reporting on
what he sees. No one else can look in, of course, at least while he is taking his turn. If
we assume our hero to be perceptually competent, and appropriately attentive, his claims
about the patterns of shape and colour within will exhibit [an analog of this mark of]
phenomenal avowals:
(i) The demand that he produces reasons or corroborating evidence for his claims will
be misplaced—the most he will be able to say is that he is the only one in a position
to see, and that is how things strike him. (2000: 22–3)20

In a few places (16, 23) Wright suggests—or can be read as suggesting—that


the groundlessness of avowals comes to the same thing as their being “non-
inferential.” McDowell complains, in effect, that Wright’s “groundlessness”
conflates what McDowell (2000) thinks are two distinct claims: that avowals
are non-inferential, and that they are (in McDowell’s terminology) “baseless.”
Once these two claims are separated, McDowell argues, it is clear that the inner-
sense theory is not as initially attractive as Wright makes out.21 Specifically, it
gets the non-inferential nature of self-knowledge right, but fails to accommo-
date its baselessness:
[T]he authority of observations is indeed non-inferential. But it is precisely not baseless.
The question ‘How can you tell?’ is precisely not excluded as inappropriate. Wright says of
his hero, alone in looking into his kaleidoscope, that ‘the most he will be able to say is that
he is the only one in a position to see, and that is how things strike him’. But if he can say
that much, he can say too much for the supposed explanation of the epistemic asymmetry
even to seem to be any good. (2000: 48)

20 See also Wright 2001a. In the quoted passage, Wright is talking about the industrial-strength Cartesian version of the inner-sense theory, which endorses infallibility and self-intimation. But it is clear that he thinks that the Armstrong/Churchland version (i.e. what this chapter calls ‘the inner-sense theory’) would also explain groundlessness.
21 Following Wright, McDowell is talking about “the ‘Cartesian’ conception [of inner sense] attacked by Wittgenstein,” but the point applies equally to the Armstrong/Churchland conception. (See footnote 20.)

McDowell’s argument is disarmingly simple. If one knows something by perception,
the question ‘How can you tell?’ (that is, ‘By what means or method can you
tell?’) is easily answered. I know that the sock has a hole in it by feeling the hole,
I know that the wine is corked by tasting the wine, and so on. Contrast, however,
the question ‘How can you tell you have a headache?’, asked in a typical situation
in which I am gulping down some aspirin. To that, there appears no obvious
answer. Hence I do not know that I have a headache by any kind of perception.
Now it is not right that the question ‘How can you tell?’, asked about an
avowal, is always “excluded as inappropriate.” Even in the headache example,
I might reply (albeit rather unhelpfully) ‘By feeling a throbbing pain in my
forehead.’ And, to take another of Wright’s examples of phenomenal avowals,
one might say that one knows one’s vision is blurry by looking at the newspaper.
Similar points hold for what Wright calls “attitudinal avowals” (2000: 15) like
‘I am frightened of that dog’ and ‘I am thinking of my mother,’ which McDowell
implicitly suggests are likewise groundless. ‘I know I am frightened of that dog by
feeling my pounding heart and clammy palms’ and ‘I can tell I am thinking about
my mother because I am imagining her starched pinafore’ are, although rather
unusual and no doubt debatable, perfectly in order.
Still, none of this is much comfort to the inner-sense theorist. These answers
do not employ any verb of inner sense, because there isn’t one. No “untheoretical
person,” in Ryle’s phrase, ever says that she knows that the newspaper looks out
of focus, or that she believes it’s raining, or that she wants a beer, by introspecting,
apperceiving, perceiving, sensing, or observing these mental states.
However, once we bear in mind Shoemaker’s point that the inner-sense theorist
could and should insist that inner sense is quite unlike paradigmatic outer senses
(section 2.2.1), McDowell’s argument basically amounts to a reminder of this fact,
and so is not a refutation of the inner-sense theory. The ordinary person is in a
position to know that she has specialized organs for the detection of, respectively,
light and sound: she needs to cock her head to hear more, squint to avoid the sun’s
glare, and so on. In contrast, if there are any mechanisms for the detection of time
or the position of one’s limbs, they are hidden.
2.2.9 Objection 8: inner sense implies possibility of self-blindness (Shoemaker)
In a number of papers, Shoemaker has developed an argument against the inner-sense
theory that simultaneously serves as an argument for his own view. According
to Shoemaker, “there is a conceptual, constitutive connection between the existence
of certain sorts of mental entities and their introspective accessibility” (1994:
225). This is “a version of the view that certain mental facts are ‘self-intimating’
or ‘self-presenting’, but a much weaker version” than the strong view associated
with Descartes.
One way in which it is weaker is that Shoemaker does not think that believing
that p entails believing that one believes that p. Taken out of context, he can be
read that way (“it is of the essence of many kinds of mental states and phenomena
to reveal themselves to introspection” (242)22), but his view is that the entailment
only goes through if other conditions are added. As he puts it, “believing that one
believes that p can be just believing that p plus having a certain level of rationality,
intelligence, and so on” (244).
Shoemaker’s argument against the inner-sense theory is that it predicts the
possibility of a condition analogous to ordinary blindness, deafness, ageusia (loss
of taste), and so on, which Shoemaker calls self-blindness:
To be self-blind with respect to a certain kind of mental fact or phenomenon, a creature
must have the ability to conceive of those facts and phenomena (just as the person who is
literally blind will be able to conceive of those states of affairs she is unable to learn about
visually) . . . And it is only introspective access to those phenomena that the creature is
supposed to lack; it is not precluded that she should learn of them in the way others might
learn of them, i.e., by observing her own behavior, or by discovering facts about her own
neurophysiological states. (1994: 226)23

The blind are as rational, intelligent, and conceptually competent as the rest of
us—they merely lack a particular mechanism specialized for detecting states of
their environment. If the inner-sense theory is right, then the mechanism spe-
cialized for detecting one’s mental states could be absent or inoperative, while
sparing the subject’s rationality, intelligence, and conceptual competence. But,
according to Shoemaker, such self-blindness is not possible.24
Following Shoemaker, let us say that a rational agent is a “person with normal
intelligence, rationality, and conceptual capacity” (236). To say that self-
blindness is impossible is to say that, necessarily, any rational agent has the
sort of privileged and peculiar access to her mental states that we typically enjoy.

22 Shoemaker is of course using ‘introspection’ broadly here, to denote the special method (perceptual or not) we have of finding out about our own mental states.
23 Self-blindness is anticipated in Geach 1957: 109.
24 It might be argued that someone without an inner sense would lack the concept of belief, as it might be argued that the blind lack color concepts (cf. Peacocke 1992: 151–62, Shoemaker 1994: 236, fn. 3). But this highly controversial claim can be set aside here.

Concentrating on beliefs, Shoemaker’s basic strategy is this:


What I shall be arguing, in the first instance, is that if someone is equal in intelligence,
rationality, and conceptual capacity to a normal person, she will, in consequence of
that, behave in ways that provide the best possible evidence that she is aware of her
own beliefs . . . to the same extent as a normal person would be, and so is not self-blind.
(1994: 236)

Suppose that rational agent George is self-blind. One might think that George’s
condition could be easily diagnosed, because he will sometimes say ‘It’s raining
but I don’t believe it is,’ or the like. But, Shoemaker argues, he will not: George’s
rational agency “will be enough to make [him] appreciate the logical impropriety
of affirming something while denying that one believes it” (237). George, then,
will not betray his self-blindness in this way. Might he betray it in some other
way? For instance, wouldn’t George be flummoxed if asked ‘Do you believe that
it’s raining?’ That takes us to step C, below, of Shoemaker’s attempt to reduce to
absurdity the hypothesis that George is self-blind (Shoemaker 1988: 34–45):
A. Self-blind speaker George will recognize the paradoxical character of ‘p but
I don’t believe that p.’25
B. Since George is a rational agent, this recognition will lead him to avoid
Moore-paradoxical sentences.
C. Further, George will recognize that he should give the same answer to ‘Do
you believe that p?’ and ‘p?’
D. Continuing this line of argument: plausibly, there is “nothing in his
behavior, verbal or otherwise, that would give away the fact that he lacks
self-acquaintance” (i.e. the ordinary kind of self-knowledge of one’s beliefs)
(36).
E. If George really is self-blind, “how can we be sure . . . that self-blindness is
not the normal condition of mankind?” (36).
F. “[I]t seems better to take the considerations [above] as a reductio ad
absurdum of the view that self-blindness [with respect to beliefs] is a
possibility” (36).
Shoemaker then briefly argues that this sort of argument can be extended (with
qualifications) to other states (45–8; see also Shoemaker 1994: 237).
As Shoemaker’s intricate discussion amply illustrates, the argument to step
D raises some very difficult and complicated issues, and one might well take it
to founder somewhere along the way. For instance, consider the following
objection:

[T]here are conceivable circumstances in which the total evidence available to a man
supports the proposition that it is raining while the total third-person evidence supports
the proposition that he does not believe that it is raining . . . If George is self-blind, then in
the envisaged circumstance he is going to be very puzzled. He knows that Moore-
paradoxical sentences are to be avoided. Yet it will seem to him that such an utterance
is warranted by the evidence . . . And now there will be something—namely his expression
of puzzlement—that distinguishes him from the normal person. (Shoemaker 1988: 42)

25 This sort of “omissive” Moore-sentence should be distinguished from the “commissive” sort that figured in section 2.2.6, namely ‘p but I believe that not-p.’ As noted, the latter sort of sentence is sometimes assertable.

Shoemaker replies that this case “is not really conceivable”:

There is a contradiction involved in the idea that the total evidence available to someone
might unambiguously support the proposition that it is raining and that the total third-
person evidence might unambiguously support the proposition that the person does not
believe that it is raining. For the total third-person evidence concerning what someone
believes about the weather should include what evidence he has about the weather—and if
it includes the fact that his total evidence about the weather points unambiguously toward
the conclusion that it is raining, then it cannot point unambiguously toward the conclu-
sion that he doesn’t believe that it is raining. (1988: 43)

However, Shoemaker’s reply is incorrect. Suppose I am self-blind, and my
evidence is this: (a) the cat has come indoors soaking wet; (b) the weather forecast
is for rain; (c) I am going out without my umbrella, carrying important papers
that will spoil if it’s raining. This evidence “points unambiguously toward the
conclusion that it is raining”; it also points unambiguously toward the conclusion
that I don’t believe that it is raining—if I knew someone else behaved in this way,
I would reasonably conclude that she does not believe that it’s raining. (Assume
that, somehow, I have determined that I dislike getting wet and ruining import-
ant papers.)
According to Shoemaker, this reasoning goes wrong because the evidence cited
is not my total evidence: “the total third-person evidence concerning what
someone believes about the weather should include what evidence he has about
the weather.” Thus, I have another item of evidence to be weighed in with the rest,
namely that my evidence about the weather is that the cat came in soaking wet,
etc. And adding that item of evidence does indeed undercut the conclusion that
I lack the belief that it’s raining—if I knew that someone else had evidence that
pointed unambiguously toward the conclusion that it’s raining, then even if she
walks out without an umbrella, that would not show that she doesn’t believe that
it’s raining. Rather, it would suggest other hypotheses—perhaps that she doesn’t
know where her umbrella is.
But what is it to “have evidence” about the weather? Given the “E=K” assump-
tion made in section 1.1, it is to know facts relevant to meteorological hypotheses.
Here we can just use something weaker: it is (at least) to believe facts that confirm or
disconfirm meteorological hypotheses. Shoemaker himself must think this, otherwise
the objection he is trying to rebut would be a non-starter. If I don’t believe that the
cat came in soaking wet, etc., it will not “seem to me” that the Moore-paradoxical
sentence ‘It’s raining but I don’t believe that it’s raining’ is true, and so there will be
no “expression of puzzlement” that distinguishes me from the normal person.
Thus, the fact that my evidence includes the fact that the cat came in soaking
wet entails that I believe that the cat came in soaking wet. Hence, to insist that my
total evidence should include facts about what evidence I have about the weather
is tantamount to assuming that I have knowledge (or true beliefs) about what
I believe. But—since I am supposed to be self-blind—this is exactly what cannot
be assumed.
Even if we waive these difficulties in reaching step D, the rest of the argument
is hardly plain sailing. Suppose that step D is secured: George, our allegedly
self-blind man, behaves in every way like a man who has the ordinary sort
of self-knowledge. Why are we supposed to agree that George really does have
self-knowledge? Why hasn’t Shoemaker just outlined a strategy for faking or
confabulating self-knowledge? (Shoemaker himself, of course, is no behaviorist.)
Further, even if we grant every step of the argument, and agree that George does
have self-knowledge, that doesn’t obviously show anything about us. In particu-
lar, it doesn’t show that we have no faculty of inner sense. Admittedly, if George
has self-knowledge, then we—at least, those of us who are “rational agents”—
could come by self-knowledge without deploying inner sense. But—given the
sophistication of George’s reasoning—why doesn’t this simply show that rational
agents have a backup to their faculty of inner sense? An analogy: imagine that, by
exploiting various subtle nonvisual cues (auditory, olfactory, etc.), a suitably
clever person could have the normal sort of knowledge of her environment but
without opening her eyes. When facing a strawberry, for instance, she immedi-
ately identifies it as red (suppose it gives off a distinctive odor). This would not
indicate that the visual system is a myth.26

26 This objection is also made in Kind 2003. Shoemaker’s reply (extrapolating back from his 1994) is that (a) the argument against the inner-sense theory does not assume that George’s reasoning goes on in us (“obviously it doesn’t” (1994: 239)) and that (b) Mother Nature would not have taken the trouble to install “an additional mechanism . . . whose impact on behavior is completely redundant” (240). But if we do not in fact run through George’s reasoning, how can the “availability of the reasoning” (240) explain our behavior? After all, going by Shoemaker’s own description in his 1988, George’s behavior is not explained by the mere availability of the reasoning.

2.3 Residual puzzles for inner sense


Understood merely as extravagant detectivism, the inner-sense theory survives
the previous eight objections. But some of the objections do highlight one
important point, that the metaphor of the “inner eye” is quite inapt. The alleged
faculty of inner sense is so unlike other certified perceptual faculties that it seems
to be a distinct genus of detection, rather than a species of perception. And this is
actually quite puzzling, since the motivation for the theory given by Churchland
(section 2.1) would lead us to expect that it is just as perceptual as other mechan-
isms specialized for detecting conditions of oneself, for instance proprioception.
Proprioception is unlike vision or audition in that there is no proprioceptive organ,
but like it in other respects, in particular that there are proprioceptive appearances,
and consequently proprioceptive illusions.27 In contrast, as Shoemaker points out:

It seems widely agreed that introspection does not have this feature, and this is perhaps
the most commonly given reason for denying that it should count as perception. No one
thinks that in being aware of a sensation or sensory experience, one has yet another
sensation or experience that is “of” the first one, and constitutes its appearing to one in a
particular way. No one thinks that one is aware of beliefs and thoughts by having
sensations or quasi-sense-experiences of them. And no one thinks that there is such a
thing as an introspective sense-experience of oneself, an introspective appearance of
oneself that relates to one’s beliefs about oneself as the visual experiences of things one
sees relate to one’s beliefs about those things. (1994: 207)28

Despite acknowledging that “this is an important difference between introspection
and sense-perception as it actually is,” Shoemaker “refrain[s] from declaring
it fatal to the perceptual model,” but that has the effect of making the puzzle
harder to see. Churchland’s reasonable-sounding armchair speculations would
lead us to expect inner sense to have many marks of perception that it manifestly
doesn’t have. Why doesn’t it?29
On the credit side, the inner-sense theory offers a nice explanation of peculiar
access. For obvious architectural reasons, the (presumably neural) mechanism of
inner sense is only sensitive to the subject’s own mental states. In exactly the same
style, our faculty of proprioception explains the “peculiar access” we have to
the position of our own limbs.

27 For instance, the Pinocchio illusion, where one’s touched nose appears to grow (due to stimulation of one’s wrist tendons).
28 See also Geach 1957: 107, Moran 2001: 14, Rosenthal 2005: 5, Fernández 2013: 31–2.
29 Here one might appeal to the non-perceptual inner-sense theory of Nichols and Stich 2003 (see section 5.4).

But what about privileged access? Can the inner-sense theory explain that? One
simple suggestion starts by noting that the (neural) causal chain from one’s first-
order mental state M to one’s belief that one is in M is shorter than the causal
chain starting from another’s mental state M to one’s belief that she is in M. The
former chain is entirely within the head; the latter is partly so, but extends
some distance outside. All else equal, the more distance traveled, the greater
the number of obstacles and sources of interference to be negotiated, with a
consequent increase in the possibilities for error. Is this why the deliverances of
inner sense are more likely to amount to knowledge?
At this level of abstraction the question cannot be answered: everything turns
on the (unknown) details. Compare visually diagnosing that another person
suffers from jaundice (an excess of bilirubin in the blood) with a self-diagnosis,
on the basis of the “internal symptom” of a headache—no looking in the mirror.
Yellow skin is a reliable although not infallible sign of jaundice; a headache is
considerably less probative. Although the causal chain in self-diagnosis does not
extend outside the head, more knowledge-conducive access is obtained by third-
person methods. For all that’s been said, inner sense might be as epistemologic-
ally unimpressive as self-diagnosing jaundice.
Finally, although Shoemaker does not succeed in showing self-blindness to be
impossible, it does not appear to actually occur. Blindness and similar perceptual
deficits are not merely hypothetical conditions, so why is self-blindness different?30
There may be no knock-down refutation of the inner-sense theory, but there
are at least grounds for dissatisfaction. It is time to examine some leading
alternatives.

30 This is taken up later, in section 7.2.

3
Some Recent Approaches

I am by no means satisfied with my explanation of first person authority.


Davidson, “Reply to Bernhard Thöle”

3.1 Introduction
As Chapter 2 argued, the failings of the inner-sense theory are often more
apparent than real. In any event, recent approaches to self-knowledge are usually
advertised as taking a radically different route. This chapter surveys three prominent
examples, due to Davidson, Moran, and Bar-On. Others could have been chosen,1
but these three illustrate how radically different accounts of self-knowledge can
be, despite having some overlapping themes. All three philosophers emphasize the
linguistic expression of self-knowledge. Moran and Bar-On both think the main
problems are in important respects not epistemological. Davidson and Moran
concentrate on the propositional attitudes, belief in particular, and suggest that
another approach entirely will be required for knowledge of one’s sensations.

3.2 Davidson on first-person authority


Davidson begins his paper “First Person Authority” as follows:
When a person avers that he has a belief, hope, desire, or intention, there is a presumption
that he is not mistaken, a presumption that does not attach to his ascriptions of similar
mental states to others. Why should there be this asymmetry between attributions of
attitudes to our present selves and attributions of the same attitudes to other selves? What
accounts for the authority accorded first person present tense claims of this sort, and
denied second or third person claims? (1984a: 3)

1 E.g., Carruthers 2011, Fernández 2013, Cassam 2014. The account developed in this book is closer to these three than to Davidson, Moran, and Bar-On, which is one reason for focusing on the latter. Like the present account, the theories of Fernández, Moran, and (to some extent) Carruthers find inspiration in Evans’ remarks about transparency (see section 1.2).

According to Davidson, there is a “presumption” that present-tense first-person
attributions of belief, hope, and so on are correct, which distinguishes them
from third-person attributions. It is clear that Davidson would also accept that
there is a similar presumption that such first-person attributions amount to
knowledge. In terminology introduced in Chapter 1, the first sentence of the
quotation accordingly amounts to this: we take privileged access for granted.
That is not to say, of course, that we are right: perhaps this presumption is
mistaken. However, it is evident that Davidson thinks this presumption is correct
(see, e.g., 5); that is, Davidson thinks we have privileged access to our mental
states, in particular to the propositional attitudes. In Davidson’s terminology
(with complications to be noted), we enjoy “first person authority,” and it is this
that he attempts to explain.
What about peculiar access? After remarking that “our claims about our own
attitudes” can sometimes be mistaken, Davidson says:

It comes closer to characterizing first person authority to note that the self-attributer does
not normally base his claims on evidence or observation, nor does it normally make sense
to ask the self-attributer why he believes he has the beliefs, desires, or intentions he claims
to have. (1984a: 4)

Here Davidson is drawing a contrast between the way in which we know our own
minds and the way in which we know the minds of others: unlike the attributions
of mental states to others, “self-attributions are not based on evidence” (5). That
is, self-knowledge is (typically) not the result of reasoning from adequate evidence;
it is, we can say, (typically) unsupported.2 This amounts to giving a particular
gloss on peculiar access: access to our own minds is peculiar because it is
unsupported.3 In the next paragraph, lack of support is called a “feature of first
person authority” (5). So although Davidson clearly separates privileged and
peculiar access, his terminology has the undesirable effect of gluing them back
together (cf. the discussion of McKinsey in section 1.3).4

2 Cf. the (related but different) notions of “groundlessness” and “baselessness” in section 2.2.8.
3 Even if our self-knowledge is unsupported, this might not be a point of asymmetry. According to some philosophers, we can simply perceive that another person wants a drink, say, without this perceptual knowledge being based on any (distinct) evidence, for example on evidence concerning the person’s bodily movements (see, in particular, McDowell 1982). Bar-On defends a view of this kind (2004: 264–84); Davidson (as one might expect) rejects it (1991b: 205).
4 The adhesion explains why Davidson has Ryle denying that we have first-person authority (5–6). (Cf. the discussion of Ryle in section 1.1.3.)

“First person authority” in the eponymous paper seems to be a combination of
privileged access and lack of support (or, less committally, peculiar access);
however, in “Knowing One’s Own Mind” Davidson characterizes it solely as
peculiar access:

[T]he problem I have set is how to explain [first-person authority,] the asymmetry
between the way in which a person knows about his contemporary mental states and
the way in which others know about them. (1987: 24)

And in “Epistemology Externalized,” first-person authority is, specifically, lack of
support:

[T]he fact that each person generally knows what he thinks without appeal or recourse to
evidence, and thus knows what is in his own mind in a way that no one can know what is
in the mind of another person. (1991a: 197)

As argued in Chapter 1, privileged and peculiar access are independent. However,
perhaps there is a connection between privileged access and the particular form
of peculiar access that Davidson endorses, namely lack of support. Perhaps
knowledge “without appeal or recourse to evidence” is especially secure. If so,
that would explain why Davidson sometimes omits mention of privileged access
when characterizing first-person authority: given peculiar access (specifically,
lack of support), privileged access comes along for free.
And that would appear to be Davidson’s view, at least in “Knowing One’s Own
Mind,” where privileged access is derived from lack of support:

Because we usually know what we believe (and desire and doubt and intend) without
needing or using evidence (even when it is available), our sincere avowals concerning our
present states of mind are not subject to the failings of conclusions based on evidence.
Thus sincere first person present tense claims about thoughts, while neither infallible nor
incorrigible, have an authority no second or third person claim, or first person other-tense
claim, can have. To recognise this fact, however, is not to explain it. (1987: 16, emphasis
added)

However, in “First Person Authority,” he seems to resist exactly this step:

[T]he chief reason first person authority isn’t explained by the fact that self-attributions
are not based on evidence . . . is simply that claims that are not based on evidence do not in
general carry more authority than claims that are based on evidence, nor are they more
apt to be correct. (1984a: 5)

In fact, there is truth to both quotations. Suppose one takes some ostensible fact
E as evidence for H, and so infers H from E. If H is known on the basis of E, then
E must be known too. But E may be known even though H is not. First, one might
be mistaken in taking known E as evidence for H. Second, if known E is evidence
for H, H might still be unknown—indeed, it may even be false. Hence the
probability that one’s belief in E amounts to knowledge is greater (perhaps not by
much) than the probability that one’s belief in H amounts to knowledge, and so
in this sense one has a kind of privileged access to E. A special case of this result is
when one’s belief in E itself is not based on further evidence.
That is the (modest amount of) truth in the first quotation. But it is the second
quotation that is exactly on point. Specifically, if one’s belief in E is not based on
evidence and H is inferred from some other claim E*, nothing follows about the
relative likelihood that one’s belief in E and one’s belief in H amount to know-
ledge. And this is the relevant sort of case, since facts about one’s own mental
states are usually not part of one’s evidence for the mental states of others, still
less are they the sole evidence. Hence the alleged fact that self-knowledge is
unsupported does not explain privileged access.5
In any event, when Davidson gets around to explaining “first person author-
ity,” he does primarily attempt to explain privileged access, not peculiar access
(and certainly not lack of support).6 He concentrates on belief, and on the special
case when belief is linguistically expressed. Davidson’s first step is to specify
evidence for the hypothesis that I hold a certain belief. Suppose—to take Davidson’s
example—that I utter the sentence ‘Wagner died happy.’ According to Davidson,
“if you or I or anyone knows”:
1. I hold ‘Wagner died happy’ true on this occasion,
and:
2. What I meant by ‘Wagner died happy’ on this occasion was that Wagner
died happy,
“then she knows what I believe” (1984a: 11)—namely that Wagner died happy—
or at least is in a position to know this. Davidson appears to be assuming, then,
that (1) and (2) entail that I believe that Wagner died happy. (To simplify the
discussion, and as a concession to Davidson, we may ignore the difference
between expression meaning and speaker meaning.7)
Davidson’s second step is to argue that speakers are generally in a better
position to “know what their words mean” than their interpreters (12–14).

5 Nowadays, a common and plausible view in the epistemology of perception is that some perceptual knowledge of one’s immediate environment is unsupported; in particular, it is not based on evidence about “appearances” (recall the discussion in section 1.5). Not surprisingly (see fn. 3), McDowell agrees.
6 In fact, an explanation of peculiar access does drop out of Davidson’s explanation of privileged access. See fn. 8 below.
7 Davidson thinks the two are connected to this extent: “what [a speaker’s] words mean is (generally) what he intends them to mean” (1984a: 14). See also Davidson 1987, Davidson 1993: 250.

Therefore I have privileged access to (2). The third step combines the first two to
yield an explanation of privileged access to what I believe.
Although Davidson is not completely explicit about the third step, presumably
the explanation runs as follows. You and I both have the same sort of evidence for
the hypothesis that I believe that Wagner died happy, namely that I hold ‘Wagner
died happy’ true, and mean that Wagner died happy by that sentence. But my
access to this evidence is privileged, because I have privileged access to the second
component—what my sentence means—and (a further necessary assumption)
you do not have privileged access to the first component—what sentences I hold
true. Hence, since I have privileged access to the evidence, I have privileged access
to what it supports, namely that I believe that Wagner died happy.8
This interpretation is standard (see, e.g., Thöle 1993: 239, Lepore and Ludwig
2005: ch. 20); but it does jar with the text at one obvious point. The explanation of
privileged access assumes that my knowledge of what I believe is “based on
evidence”—that, contrary to what Davidson repeatedly claims, it is not unsup-
ported. However, it is hard to see what else Davidson could have had in mind.9
Commentators have devoted much space to discussing the second step in
Davidson’s argument, where he tries to establish that “the speaker [not the
hearer] usually knows what he [the speaker] means” (1984a: 14), and the general
consensus is that this second step fails.10 (For helpful discussions see Lepore and
Ludwig 2005: 352–69 and Child 2007.) However, since the conclusion of the
second step is plausible, even if Davidson’s argument for it is not, it is worth
exploring two problems with the first step.11
One problem concerns (1), the first component of evidence for the hypothesis
that I believe that Wagner died happy, namely that I hold ‘Wagner died happy’
(as uttered on this occasion) true. ‘Holding true’ is Davidsonian jargon, of course;
what does it mean?

8 Davidson also thinks I know what my words mean in a way that others cannot (1984a: 13), which gives him an explanation of my peculiar access to what I believe.
9 Davidson might reply by claiming only to show how privileged access is possible: evidence is available that would give us privileged access, if we were to take advantage of it. Cf. the opening remarks of “Radical Interpretation” (Davidson 1973: 125).
10 Davidson speaks of a “presumption” that the speaker usually knows what he means, but it is clear that Davidson thinks this presumption is true.
11 One apparent limitation with the whole argument (not discussed here) is that it officially applies only to beliefs that are linguistically expressed. Without supplementation it does not cover someone who knows she believes that it’s raining, but does not bother to assert that it’s raining. A harder example is someone who does not speak a language at all—say, a young child or a chimpanzee. Clearly Davidson’s basic strategy of argument cannot be extended to explain how the languageless enjoy privileged access to their beliefs (if indeed they do). But that would not trouble Davidson, since he denies they have any (Davidson 1975).

As one might have guessed, to hold a sentence true is
to believe that it is true: holding true is “one special kind of belief, the belief
that a sentence is true” (1974: 149–50). So Davidson is not just representing
my knowledge of my beliefs as not unsupported, but specifically based on
knowledge of other beliefs of mine. Specifically, my knowledge that I believe
that Wagner died happy is based on my knowledge that I believe that ‘Wagner
died happy’ is true. And how do I know that I have this metalinguistic belief? It
is easy to see that a regress looms if the same account is supposed to apply here.12
Unpacking (1) as the proposition that I believe that ‘Wagner died happy’ is
true points to the second problem. Recall that (1) together with (2)—that by
uttering ‘Wagner died happy’ I meant that Wagner died happy—are supposed to
entail that I believe that Wagner died happy. Is this correct?
Although speakers mostly have true beliefs about the meanings of their
utterances, they sometimes do not. Suppose my utterance does mean that
Wagner (the German composer) died happy. If both Richard Wagner and the
lesser-known Austrian composer Josef Wagner are salient, I might well be unsure
whether I meant that Richard Wagner died happy or that Josef Wagner died
happy. Similarly, I might be unsure whether my utterance is true iff Richard
Wagner died happy or true iff Josef Wagner died happy. And, in fact, Davidson
himself agrees, endorsing the stronger claim that speakers can be in error: “The
speaker can be wrong about what his own words mean” (1974: 13).
Suppose I utter ‘Wagner died happy’ in response to the question What is
Davidson’s opinion about Wagner? As uttered by me on this occasion, that
sentence means that Richard Wagner (the composer) died happy; I gave that
answer because I believe that Davidson’s opinion about Richard Wagner is
that he died happy. However, I myself have no idea whether Davidson is right
about that. Further—because both Richard and Josef were salient—I do not
believe that if ‘Wagner died happy’ (as uttered on this occasion) is true then
Richard Wagner died happy.
So far, this is a situation in which these three claims are true:
2. What I meant by ‘Wagner died happy’ on this occasion was that (Richard)
Wagner died happy.

a. I neither believe nor disbelieve that Wagner died happy.
b. I do not believe that if ‘Wagner died happy’ is true then Wagner died happy.

12 If the same account applies, then my knowledge that I believe that s is true rests in part on my knowledge that I believe that ‘s is true’ is true, which in turn rests on my knowledge that ‘‘s is true’ is true’ is true, and so on. Apart from the difficulty that these metalinguistic beliefs are not verbally expressed, this sort of non-well-founded chain of evidence is suspect. (A similar point is made in Lepore and Ludwig 2005: 360; see also Wright 2001b: 349.)

Let us add some more details. First, I also believe that Davidson’s opinion
about Josef Wagner is that he died happy, and have no idea whether Davidson is
right about that either. Second, someone whom I trust purports to know what
I meant by uttering ‘Wagner died happy’ (perhaps based on what I was saying
earlier, facts that I have forgotten). She tells me that whatever I did mean, my
utterance was true, and I believe her. So this is also true:
1. I hold ‘Wagner died happy’ true on this occasion; that is, I believe that
‘Wagner died happy,’ as uttered on this occasion, is true.
The situation with the added details is apparently also possible. If I believed that if
‘Wagner died happy’ is true then Wagner died happy (contra (b)), then there
would be a problem, because then my beliefs would trivially entail a proposition
that according to (a) I do not believe, namely that Wagner died happy. And if
I believed that Josef Wagner didn’t die happy (contra the final details), then by a
process of elimination I could work out that Richard Wagner did, again running
into conflict with (a). So the following is possible: (1) and (2) are true, and it is
false that I believe that (Richard) Wagner died happy. Hence (1) and (2) do not
entail that I believe that Wagner died happy.
The obvious repair is to restore the crucial assumption that I believe that if
‘Wagner died happy’ is true then Wagner died happy. And once that assumption
is added, (2) is redundant. That is, the two pieces of information, knowledge of
which would give “you or I or anyone” knowledge of what I believe, are:
1. I believe that ‘Wagner died happy’ as uttered on this occasion is true.
And:
2*. I believe that if ‘Wagner died happy’ as uttered on this occasion is true
then Wagner died happy.
Or, alternatively:
2**. I believe that ‘Wagner died happy’ as uttered on this occasion means
that Wagner died happy.
With the additional assumption that I will perform elementary inferences, (1)
and (2*), or (1) and (2**), entail that I believe that Wagner died happy.
Now we can see that the second step of Davidson’s argument is misdirected.
He should be arguing that I have privileged access to (2**) (or 2*), and so to the
conclusion that it (in conjunction with (1)) entails, namely that I believe that
Wagner died happy. He actually argues that I have privileged access to (2), which
amounts to arguing that (2**) (with ‘believe’ replaced by ‘know’) is significantly
more likely to be true than:
3. My audience knows that s as uttered on this occasion means that p.
Plausible, no doubt, but what Davidson needs to establish is something else.
Namely, speakers are in a better position than their audience to know that they
have beliefs about what their sentences mean (or the conditions under which their
sentences are true).
Davidson’s strategy for explaining privileged access to my belief that p is to
argue that I have privileged access to evidence that entails that I believe that p.
When properly spelled out, that evidence has two components: that I believe that
a sentence s I utter is true, and that I know (hence believe) that s means that p.
Not only does Davidson fail to show that I have privileged access to that evidence,
but both components of evidence concern what I believe. The right account of
privileged access must be elsewhere.

3.3 Moran on self-constitution and rational agency


In “Three Varieties of Knowledge” (Davidson 1991b), Davidson argues that
self-knowledge, knowledge of other minds, and knowledge of the external
world are each “irreducible” to, and necessary conditions of, the other two. So
although Davidson thinks that self-knowledge is a special kind of knowledge, the
same goes for the other two varieties, the three forming a pleasing symmetry. One
can think of Moran’s Authority and Estrangement as breaking the Davidsonian
symmetry. A main theme of that book is that self-knowledge is misleadingly
conceived as one of “epistemic access (whether quasi-perceptual or not) to
a special realm” (Moran 2001: 32). In this respect, self-knowledge is unlike
Davidson’s other two varieties. The problem of self-knowledge is as much one
of moral psychology as it is of epistemology: we must think of “[t]he special
features of first-person access . . . in terms of the special responsibilities the person
has in virtue of the mental life in question being his own” (2001: 32).
Moran’s account gives a central role to the “transparent” nature of belief, as
expressed in the following passage from Evans, quoted earlier in section 1.2:

[I]n making a self-ascription of belief, one’s eyes are, so to speak, or occasionally literally,
directed outward—upon the world. If someone asks me “Do you think there is going to be
a third world war?,” I must attend, in answering him, to precisely the same outward
phenomena as I would attend to if I were answering the question “Will there be a third
world war?”. (Evans 1982: 225)

Moran notes that sometimes my beliefs are only accessible to me through third-
person means, for instance in “various familiar therapeutic contexts” (2001: 85).
But in humdrum cases the transparency procedure applies, and “I can report on
my belief about X by considering (nothing but) X itself” (84). That is, my verbal
answer to the question Do I believe P? typically obeys the “Transparency
Condition”:
A statement of one’s belief about X is said to obey the Transparency Condition when the
statement is made by consideration of the facts about X itself, and not by either an
“inward glance” or by observation of one’s own behavior. (2001: 101)13

According to Moran, transparency shows that arriving at self-knowledge (specifically,
knowledge of one’s beliefs) is not accurately viewed as a process of self-
discovery, but rather as a process of self-constitution. Coming to know whether
one believes P is not a matter of taking a “theoretical” or disinterested stance
toward oneself, of the sort one adopts toward another person when his beliefs are
the subject matter of inquiry. Rather, it is a matter of “making up one’s mind” as
to the truth of P.
Moran’s argument from transparency to the self-constitution thesis makes use
of a distinction between “theoretical” and “practical or deliberative” questions:

[A] theoretical question about oneself . . . is one that is answered by a discovery of the fact
of which one was ignorant, whereas a practical or deliberative question is answered by a
decision or commitment of some sort and it is not a response to ignorance of some
antecedent fact about oneself. (Moran 2001: 58, emphasis added)

And:
[A] ‘deliberative’ question about one’s state of mind . . . [is] a question that is answered by
making up one’s mind, one way or the other, coming to some resolution. (Moran 2003:
404, emphasis added)

For example, distinguish two sorts of situations in which one might ask the
question What will I wear? (see Moran 2001: 56; cf. Anscombe 1957: §2). First,
one is preparing to get dressed for the annual philosophy department party.
Second, one has just been sentenced to five years for embezzling the philosophy
department funds, and has yet to be issued with standard prison clothing. In the
first case, the question calls for a decision: one considers the sartorial pros and
cons, and plumps for the purple tie. In the second case, the question is answered
by a discovery: the judge announces that prisoners in Massachusetts wear orange
jumpsuits.

13. Note the “claim of transparency” is supposed to cover negative answers to the question Do
I believe P?. This raises some complications that are discussed later in section 5.2.5.

As this example shows, the distinction is not strictly speaking one between
questions—ignoring temporal complications, it is the same question both times—
but rather between ways of answering questions. And, indeed, Moran later writes
of answering a question in “deliberative or theoretical spirit,” taking a “delibera-
tive or theoretical stance” to a question, and so forth.14
The distinction applies to questions like Do I believe P?. One might address this
question in a theoretical spirit, treating it “as a more or less purely psychological
question about a certain person, as one may enquire into the beliefs of someone
else” (Moran 2001: 67). Alternatively, one might address this question in a
deliberative spirit, as a matter of making up one’s mind about P. Take, for
example, the question Do I believe that Alice is a threat to my career?, as asked
by her colleague Bert. After looking back over his behavior toward Alice—
anonymously rejecting one of Alice’s papers that criticizes Bert’s pet theory,
etc.—Bert might conclude that he has this belief. Alternatively, Bert might
address the question in a deliberative spirit, and investigate whether Alice really
is a threat to Bert’s career. Perhaps the result of the investigation is that Alice is
harmless, and Bert thereby concludes that he believes that Alice is not a threat.
We can imagine Bert addressing the question in both a deliberative and theor-
etical spirit, raising the uncomfortable possibility of discovering that he has
inconsistent beliefs.15
Here is how Moran links the “deliberative/theoretical” distinction with
transparency:
With respect to belief, the claim of transparency is that from within the first-person
perspective, I treat the question of my belief about P as equivalent to the question of the
truth of P. What I think we can see now is that the basis for this equivalence hinges on the
role of deliberative considerations about one’s attitudes. For what the “logical” claim of
transparency requires is the deferral of the theoretical question “What do I believe?” to the
deliberative question “What am I to believe?”. And in the case of the attitude of belief,
answering a deliberative question is a matter of determining what is true.
When we unpack the idea in this way, we see that the vehicle of transparency in each
case lies in the requirement that I address myself to the question of my state of mind in a
deliberative spirit, deciding and declaring myself on the matter, and not confront the
question as a purely psychological one about the beliefs of someone who happens also to
be me. (2001: 63)

14. See Moran 2001: 63, 4, 5, 7.
15. At one point Moran contrasts the two ways of answering the question Do I believe P? as
follows:
In characterizing the two sorts of questions one may direct toward one’s state of mind,
the term ‘deliberative’ is best seen at this point in contrast to ‘theoretical’, the primary
point being to mark the difference between that enquiry which terminates in a true
description of my state, and one which terminates in the formulation or endorsement
of an attitude. (2001: 63)
However, this is misleading (and is not Moran’s considered view). In successfully answering the
question Do I believe P?, whether in a deliberative or theoretical spirit, one comes to have a true belief
about one’s beliefs, and so in both cases the inquiry “terminates in a true description of [one’s] state.”

Suppose I ask ‘Do I believe P?’ and that I answer ‘I believe P’ by determining that
P is true. Then, according to Moran, I have answered this question by “a decision
or commitment of some sort,” and not “by a discovery of the fact of which I was
ignorant.” Transparency shows, in other words, that knowledge that one believes
P, when arrived at by considering whether P is true, is a matter of “making up
one’s mind” that P is true.
This is too quick. Consider the question Do I believe that I live in Cambridge,
MA?, or Do I believe that Moran is the author of Authority and Estrangement?.
If any questions about what I believe can be answered transparently, surely these
can. And in considering the relevant facts of location and authorship, I do not
need to make up my mind. On the contrary, it is already made up. I have believed
for some time that I live in Cambridge, and that Moran is the author of Authority
and Estrangement. I can know that I believe I live in Cambridge, for example, by
remembering the non-psychological fact that I live in Cambridge.
This illustrates one respect in which Evans’ “third world war” example is
misleading. It is natural to imagine the question Do you think there is going to
be a third world war? asked in a context in which I have not devoted much
thought to the topic. If I reply ‘Yes,’ this will be because I have taken a moment to
reflect on the relevant geopolitical facts and have “come to some resolution.” But
in cases like my belief that I live in Cambridge, no resolution is required.16
In fact, Evans’ example is misleading in another respect. In a typical context,
someone who says to me ‘Do you think there is going to be a third world war?’ is
interested in my considered opinion about the matter. Even if I think, at the
time the question is asked, that a third world war is imminent, it would be helpful
to the questioner to give the matter more thought before replying. And we may
suppose that, after a few minutes of carefully weighing some recently acquired
evidence, I reverse my position completely, and think that the second world war
will have no successor. I then reply, correctly and cooperatively, ‘No, I don’t think
there is going to be a third world war.’ But this sort of conversational exchange is
a poor model for the transparency procedure. If I am wondering whether I now
believe that p, it would be ill-advised to reconsider the issue of whether p, because
I run some risk of changing my mind, and so not reporting the belief that I
now have.

16. For the observation that transparency applies in cases where one already believes P, see
Peacocke 1998: 72–3, Falvey 2000: 81–2.
In later work, Moran clearly acknowledges the point that transparency applies
to cases where my mind is made up. In answering the question Do you believe
that Jefferson Davis was the President of the Confederacy during the American
Civil War? (an example from Shoemaker 2003) by recalling the historical
facts, my answer will be “faithful to what I already believed” (Moran 2011: 221;
see also Moran 2003: 402–3). This suggests that Moran’s gloss on the transpar-
ency procedure as involving “a decision or commitment of some sort” is not the
best expression of his view. Indeed, he sometimes formulates his main idea in
other ways:

[O]nly if I can see my own belief as somehow “up to me” will it make sense for me to
answer a question as to what I believe about something by reflecting exclusively on that
very thing, the object of my belief. (2001: 66–7)

And:

When I say that what I believe is up to me, I mean that, unlike the case of sensations or
other non-intentional states, I take what I believe to be answerable to my sense of reasons
and justification, and I take myself to be responsible for making my belief conform to my
sense of the reasons in favor or against. (2003: 405–6)

Here Moran is drawing a link between transparency and one’s “rational agency
. . . the ordinary ability to respond to reasons in one’s thinking, to consider
reasons for and against some belief and respond accordingly” (2011: 212).
Specifically, the suggestion is that the transparency procedure can only deliver
knowledge of what I believe if it is “legitimate for me to see myself as playing a
role in the determination of what I believe generally, not in the sense that beliefs
typically owe their existence to acts of deliberation but that the responsiveness to
reasons that belongs to beliefs is an expression of the person’s rational agency”
(213). And whatever this comes to, exactly, it does not seem in conflict with the
observation that transparency applies in cases where my mind is already made up.
Moran’s claim that transparency and rational agency are closely connected will
be examined in detail in Chapter 4 (section 4.3). But even in advance of the
details, there are reasons to be suspicious. Moran concentrates almost exclusively
on the transparency of belief, but (as noted in section 1.2) perception seems to
provide other examples. One can know that one sees the cat by a (literal)
“outward look” at the cat. However, since one can see what is in fact the cat
without believing that it is a cat, seeing the cat does not involve “the ordinary
ability to respond to reasons in one’s thinking,” at least in any straightforward
way. If transparency applies to perception as well as belief, then the connection
with rational agency cannot be as tight as Moran supposes.17

3.4 Bar-On’s neo-expressivism


The project of Bar-On’s Speaking My Mind is in one way much more ambitious than
the two accounts just discussed, purporting to cover the entire spectrum of mental
states. (However, object-entailing states like seeing a red cardinal are assumed not to
be mental.18) But in another way it is considerably less ambitious—as will become
clear, self-knowledge is placed firmly on the back burner.
“Avowals” are utterances that “[self-]ascribe [current] states of mind”; for
instance, utterances of ‘I have a terrible headache’ and ‘I’m finding this painting
utterly puzzling’ (Bar-On 2004: 1). And avowals, “when compared to ordinary
empirical reports . . . appear to enjoy distinctive security” (1), which Bar-On
elaborates as follows:
A subject who avows being tired, or scared of something, or thinking that p, is normally
presumed to have the last word on the relevant matters; we would not presume to criticize
her self-ascription or to reject it on the basis of our contrary judgement. Furthermore,
unlike ordinary empirical reports, and somewhat like apriori statements, avowals
are issued with a very high degree of confidence and are not easily subjected to doubt.
(2004: 3)

The project of Speaking My Mind is to explain why avowals have this distinctive
security.
Bar-On’s guiding idea is that avowals “can be seen as pieces of expressive
behavior, similar in certain ways to bits of behavior that naturally express
subjects’ states” (227). Crying and moaning are natural expressions of pain,
yawning is a natural expression of tiredness, reaching for beer is a natural
expression of the desire for beer, and so on. In some important sense, avowals
are supposed to be like that. In what sense, though? It will be useful to begin with
the simplest answer.

17. Looking ahead, the account of belief in Chapter 5 classifies Moran’s special cases together with
examples where one’s mind is already made up, as both involving (in Moran’s phrase) “epistemic
access . . . to a special realm.”
18. See Bar-On 2004: 16; Bar-On would presumably also exclude factive states like knowing.

3.4.1 Simple expressivism


According to “Simple Expressivism” (2004: 228), a position often associated with
Wittgenstein, the comparison between avowals and natural expressions is very
close—too close, in fact, for Bar-On to endorse it. Crying is a natural expression
of pain, but someone who cries because he is in pain is not asserting that he is in
pain. And someone who cries while slicing shallots cannot thereby be convicted
of making a false claim about his sensations. The simple expressivist takes a
similar view of avowals: an utterance of ‘I am in pain’ is not true or false, any
more than an act of crying is. The utterance is a verbal replacement for crying
and—contrary to appearances—performs much the same function. Avowals are
never reports or descriptions of one’s mental states. The simple expressivist
admits that in some cases to make an avowal will involve an assertion. For
instance, on the simple expressivist account of belief, to (assertively) utter
‘I believe that p’ is to assert (perhaps tentatively) that p. But such assertions are
not about one’s state of mind.
If one thought that the chief problem was to explain why self-ascriptions of
sensations (“phenomenal avowals,” in Bar-On’s terminology), like ‘I am in pain,’
are never false, simple expressivism does that very nicely. Unfortunately, we also
think, contrary to simple expressivism, that such avowals are often true. And the
idea that uttering ‘Doctor, I have a sharp pain in my knee’ amounts to a prolix
way of involuntarily clutching one’s knee hasn’t much initial plausibility.
On the other hand, the simple expressivist account of belief can seem appealing
at first glance. Typically, if someone says ‘I believe Smith is in the pub,’ she is
primarily concerned to make a cautious claim about Smith’s whereabouts, rather
than her own state of mind.19 (That sentence might well be used to answer the
question, ‘Where is Smith?’.) But simple expressivism could not be more hopeless
at explaining why self-ascriptions of belief are never, or rarely, false. According to
simple expressivism, assertive utterances of ‘I believe that p’ amount to assertions
of the proposition that p, and it is not at all unusual for such assertions to be
false. If simple expressivism is correct, someone who claims that she believes that
Iraq had WMD is bound to be mistaken.
Simple expressivism, as Bar-On points out (2004: 233), is akin to emotivism in
ethics, on which ‘Stealing is wrong’ is not used to state a fact, but rather to express
the speaker’s disapproval of stealing. Simple expressivism thus inherits the
Frege-Geach-Ross problem of explaining embedded occurrences of present-
tense self-ascriptions, such as ‘If I’m feeling sick then I’m trying to disguise it’
and ‘Scooter knows that I believe that Iraq had WMD.’20 According to the simple
expressivist, ‘I’m feeling seasick,’ when it occurs unembedded, can be (roughly)
paraphrased as ‘Yuk!’, but ‘If yuk! . . . ’ makes no sense at all. And if the embedded
occurrence of ‘I believe that Iraq had WMD’ has its unembedded paraphrase of
‘Iraq had WMD,’ then ‘Scooter knows that I believe that Iraq had WMD’ is
incorrectly analyzed as ‘Scooter knows that Iraq had WMD.’

19. Cf. Moran on the “presentational view” (2001: 70–2). For a general examination of these
“parenthetical” uses of verbs like ‘believe’ and ‘think,’ see Simons 2007. The pertinent use of such
verbs in this book is of course non-parenthetical, in which they describe states of mind.

If all this isn’t bad enough, there is an obvious difference between simple
expressivism and emotivism in ethics that reflects badly on the former. Accord-
ing to the emotivist, there are no moral facts to be stated in the first place; that is
why utterances of sentences like ‘Stealing is wrong’ have no truth values.
However, the simple expressivist recognizes the full range of psychological
facts like everyone else: you are in pain, Smith wants a beer, Jones believes it’s
raining, and so forth. The only difference is semantic: for some inexplicable
reason, you cannot state the (important) fact that you are in pain by uttering ‘I
am in pain.’ Instead, ascribing current mental states to oneself requires
speaking in the third person, like the former US presidential candidate Bob
Dole: ‘Bob Dole is in pain,’ ‘Bob Dole wants a beer,’ and the like. Philosophy has
no shortage of bizarre and ill-motivated claims, but simple expressivism takes
the biscuit.
Bar-On’s neo-expressivism stands to simple expressivism roughly as Black-
burn’s quasi-realism (in its formulation in Blackburn 1998) stands to emotivism.
Gone is the distinctive claim that ‘I am in pain’ has no truth value. Semantically,
it now comes in for an orthodox treatment (in Bar-On’s terminology, “Semantic
Continuity” is preserved): ‘I am in pain,’ as uttered by S, is true iff S is in pain.
As with quasi-realism, the problem is to explain the residual insight left, once the
undrinkable truth-valueless brew has been thrown out of the window.

3.4.2 Two questions, one answer


Speaking My Mind is structured around two main questions. First:
(i) What accounts for the unparalleled security of avowals? Why is it that avowals,
understood as true or false ascriptions of contingent states to an individual, are so rarely
questioned or corrected, are generally so resistant to ordinary epistemic assessments, and
are so strongly presumed to be true? (Bar-On 2004: 11)

Second:
(ii) Do avowals serve to articulate privileged self-knowledge? If so, what qualifies avowals as
articles of knowledge at all, and what is the source of the privileged status of this
knowledge? (2004: 11)

20. For a particularly clear discussion, see Soames 2003: 309–14.

The interpretation of the first question is not entirely straightforward. What is the
“unparalleled security of avowals” that it presupposes? The terminology naturally
suggests “privileged self-knowledge,” or something of the sort. However, the
second question indicates that this is incorrect, because the second question is
precisely whether we have such privileged knowledge at all.
That “unparalleled security” is not what it sounds like is also indicated by the
second sentence of (i), at least if it is taken as a gloss on the first. That avowals
enjoy security, on this interpretation, is really the sociological claim that we
assume that avowals express beliefs that are more likely to amount to knowledge,
or at least more likely to be true, than the corresponding third-person attribu-
tions of mental states. (See also the first quoted passage above.) And this
sociological understanding of security fits with the discussion a few pages later,
where the simple expressivist is said to have an answer to (i) (see 14 and also 344),
and so must agree with (i)’s presupposition that avowals enjoy “security.” Since
the simple expressivist denies that avowals are true (or false), security can hardly
be characterized in a way that entails that they have truth values. What’s more,
Bar-On’s own answer to (i) is advertised as “non-epistemic, in that it will not
derive avowals’ special security from the security of a special epistemic method,
or privileged epistemic access” (11). If “special security” involves a greater
likelihood of knowledge (or truth), rather than our assumption of it, presumably
no defensible answer to (i) could be “non-epistemic.”21
As Bar-On explains in the first chapter, only the first of her two questions will
receive a plain answer:
My first goal will be to motivate and develop a non-epistemic answer to question
(i) . . . [The answer will not] resort to the Cartesian idea that avowals concern a special
subject matter: viz., states of immaterial minds. Having offered a non-epistemic, non-
Cartesian answer to (i), I will then try to show that this answer is consistent with a range of
non-deflationary answers to question (ii). Even if one does not regard avowals’ distinctive
security to be a matter of their epistemic pedigree, one can still maintain that we do have
privileged self-knowledge that is articulated by avowals. Furthermore, one can attempt to
explain the privileged status of self-knowledge partly in terms of the special security of
avowals understood non-epistemically. (2004: 15; first emphasis added)

21. However, other passages suggest the opposite interpretation, on which the security of avowals
is an epistemic matter—avowals express privileged self-knowledge, or something along these lines.
“Another way of putting this question is: How can avowals be understood in a way that preserves
Semantic Continuity while fully respecting Epistemic Asymmetry?” (11). Semantic Continuity was
mentioned at the end of section 3.4.1; Epistemic Asymmetry is explained as follows: “When
compared to other non-apriori ascriptions, even non-self-verifying avowals are much more certain,
much less subject to ordinary mistakes, significantly less open to a range of common doubts, and
highly resistant to straightforward correction” (10, emphasis added). And Bar-On emphasizes that
her answer to (i) involves explaining Epistemic Asymmetry, and so explaining why avowals are
“much more certain, much less subject to ordinary mistakes.” But the non-epistemic interpretation
of security makes the best overall sense of the book.

All of the “non-deflationary answers to question (ii)” that Bar-On considers entail
that “avowals serve to articulate privileged self-knowledge” (see ch. 9 and 405).
So, after giving her answer to (i), Bar-On argues that the answer is consistent with
various views on which avowals articulate privileged self-knowledge.
Now all this might seem a little disappointing. The subtitle of Speaking My
Mind is ‘Expression and Self-Knowledge,’ but self-knowledge is not the chief
topic. We aren’t getting an explanation of why self-knowledge is “privileged,” but
at best an explanation of why we presume this. Admittedly, the explanation of the
presumption is argued to be consistent with the presumption’s truth, but that
falls conspicuously short of a theory of self-knowledge.
Thus the ambition of Speaking My Mind is less sweeping than one might have
expected. On the other hand, the attempt to answer (i) is ambitious enough. The
rest of this chapter examines Bar-On’s answer to (i), which occupies a substantial
portion of the book.
Question (i) can be divided into two parts. First: why are avowals “so rarely
questioned or corrected, are generally so resistant to ordinary epistemic assess-
ments?” (What this resistance amounts to, exactly, will be elaborated in section
3.4.3.) Second: why are avowals “so strongly presumed to be true?” Bar-On
answers the two parts separately. Her explanation of why avowals are resistant
to ordinary epistemic assessments appeals to two types of “immunity to error”
that avowals are said to enjoy: immunity to error through misidentification and,
in Bar-On’s terminology, “immunity to error through misascription” (194). Neo-
Expressivism is used to answer the second part, to explain the strong presump-
tion of truth. Let us take these two parts of question (i) in turn.

3.4.3 Immunity to error through misidentification and misascription


The phenomenon of “immunity to error through misidentification” is familiar.
It was first noted by Wittgenstein in The Blue Book (1969: 66–7), and explored
further by (in particular) Shoemaker (1968) and Evans in section 6.6 of The
Varieties of Reference. (It briefly made an appearance earlier in section 2.2.2.) As
Bar-On explains:
[I]n the case of ascriptions that are immune to error of misidentification (IETM, for short),
reasons for retracting the ascription a is F (e.g., that I am sitting on a chair, or that I have a
toothache) are grounds for abandoning the existential judgment, that someone is F. For, in
such cases, one has no other reason for thinking that someone is F over and above the
thought that she herself is F. Not so in case of ascriptions that are not IETM. If I discover
that it is not true that Sheila is sitting on a chair, my belief that someone is sitting on a chair
may still survive. (2004: 58)

Suppose, to simplify Bar-On’s example slightly, that I believe that I am sitting.
If this belief is arrived at in the usual way (that is, by proprioception and the sense
of touch), then it does not rest on two independent pieces of evidence of the
following sort: that the so-and-so is sitting, and that I am the so-and-so. For if it
did, the second piece of evidence could be undermined, leaving the first intact,
allowing me reasonably to ask: someone (the so-and-so) is sitting, but is it me? In
these circumstances, however, that question has no purchase: absent any other
evidence for the hypothesis that someone is sitting, reasons for retracting the
claim that I am sitting are also reasons for retracting the existential claim.
This example brings out two important points. First, the distinction is not
between beliefs (or propositions) simpliciter, but rather between beliefs relative to
evidence. As Bar-On says, “whether an ascription is IETM or not . . . is a matter of
the basis on which the ascription is made” (2004: 88; see also Shoemaker 1968
and Evans 1982: 219). I might believe that I am sitting because, looking at a
mirror, I see a seated man who is my spitting image. Formed on the basis of this
evidence, my belief is not IETM: if I discover that I am not that man, that does
not impugn my evidence for the proposition that he is sitting. Second, the
phenomenon of IETM does not exclusively concern psychological beliefs, like
the belief that I have a toothache: sitting is not a state of mind.
A third point is also important. IETM has no special connection with the first-
person pronoun: beliefs expressed using demonstratives and proper names
provide other examples. Suppose I see a single spot against a plain background,
and judge that it is moving—it looks that way. If I discover that that spot is not
moving, then this undermines my perceptual evidence for the proposition that
something is moving. Similarly, I might believe that Kripke lectured on identity
at Harvard last year while having completely forgotten the belief ’s origins, and
without having any relevant identifying knowledge to the effect that Kripke is the
so-and-so. If some apparent authority tells me that Kripke hasn’t been to
Cambridge for ages, then—absent any other evidence—this undercuts my belief
that someone lectured on identity at Harvard last year.22

22. In fact, Bar-On claims that “ascriptions involving proper names are not in general candidates
for being IETM” (2004: 69).

Although avowals are typically IETM, Bar-On observes that this fact alone
cannot explain why they differ from non-mental self-ascriptions, because many
of the latter are also IETM. Still, she thinks that IETM “goes some way toward
explaining the security of avowals” (190). However, it is hard to see how IETM
helps at all.
A person’s belief that a is F is subject to error through misidentification
(SETM) iff it is based on certain evidence—that the so-and-so is F, and that a
is the so-and-so, or something similar. Since a belief is IETM iff it is not SETM,
a person’s belief that a is F is IETM iff it is not based on certain evidence; unlike
SETM, IETM is therefore not a feature of a belief that bestows any epistemo-
logical value on it. Consider my belief that Kripke lectured on identity at Harvard,
and suppose that I never had any reason to hold it; epistemologically, my belief is
no better than a guess. My belief is IETM, yet it is quite worthless, and is very
likely false. How could the fact that my belief is IETM (or, alternatively, known or
believed to be IETM) make it “resistant to ordinary epistemic assessments”?
The terminology of ‘immunity to error’ can obscure this point. If my belief
is “not open to a certain kind of error” (195–6), isn’t that something to be said
for it? In places, Bar-On can be read as agreeing. For instance, she claims that
“a person who is immune to error in some domain does not go astray in her
pronouncements” (200). But, as she immediately goes on to point out, a person
whose belief is IETM “simply does not go down certain paths.” Say a person’s belief
about someone’s phone number is subject to error through misreading the phone
book iff it is based on the evidence of phone book listings. If my belief about Bar-
On’s number is immune to error through misreading the phone book, I have not
“gone down a certain path” (namely, looked at the phone book), and have
thereby insulated myself from getting Bar-On’s number wrong through misread-
ing the phone book. Yet, of course, immunity to this kind of error hardly offers
me protection from epistemic criticism—quite the contrary (cf. 200).
This raises the suspicion that Bar-On’s “ascriptive immunity to error” is not
going to fare any better at explaining the distinctive security of avowals. That
notion is introduced as follows:

Now consider the ascriptive part of avowals. In the normal case, as I say or think, “I am
feeling terribly thirsty”, it would seem as out of place to suggest, “I am feeling something,
but is it thirst?” as it would to question whether it is I who am feeling the thirst. Or take an
avowal with intentional content, such as “I’m really mad at you”. “I am mad at someone,
but is it you?” and “I’m in some state, but is it being mad?” would both be as odd as
“Someone is mad at you, but is it I?” when I simply avow being mad at you (as opposed to
making a conjecture about my own state of mind, for example) . . . By contrast, both “I am
doing something with my arm, but am I resting it on the chair?” and “I am resting my arm
on something, but is it a chair?” could make perfect sense even as I think, “I’m resting my
arm on the chair” in the normal way.
. . . . I dub this additional immunity “immunity to error through misascription”.
(2004: 193–4)

One might question whether ‘I am mad at someone, but is it you?’ is as out of
place as Bar-On claims. (Is it so odd to wonder about the object of one’s anger?23)
In any event, since immunity to error through misascription is the predicational
analog of immunity to error through misidentification, the former notion is just
as unexplanatory as the latter.
A person’s belief that a is F is subject to error through misascription (SETMa)
iff it is based on certain evidence. Although Bar-On does not give an explicit
characterization of the relevant sort of evidence, for present purposes we can take
it to be of the following sort: that a is G, and that if something is G, it is F. Since a
person’s belief is immune to error through misascription (IETMa) iff it is not
SETMa, a person’s belief that a is F is IETMa iff it is not based on certain
evidence; unlike SETMa, IETMa is therefore not a feature of a belief that bestows
any epistemological value on it. Consider again my belief that Kripke lectured on
identity at Harvard, and suppose that I never had any reason to hold it; epis-
temologically, my belief is no better than a guess. My belief is IETMa, yet it is
quite worthless, and is very likely false.
The combined immunity to error of avowals is supposed to explain why they
are “generally so resistant to ordinary epistemic assessments.” This explanandum
is made more precise later, when Bar-On identifies three features of avowals that
are supposedly explained, the last one having two components:

[a] [T]he fact that, when avowing, doubt as to whether one is indeed in the self-ascribed
state, or whether the state has the intentional content one assigns to it, seems entirely out
of place . . . [b] [the fact that] avowals seem [from the subject’s point of view] ‘groundless’
and to be issued with a distinctive effortlessness . . . [c1] the fact that we do not expect an
avower to have reasons or grounds for her avowal. [c2] We also do not stand ready to
correct or challenge an avowal. (2004: 310)

The example of my completely unjustified belief that Kripke lectured on identity
at Harvard shows that IETM and IETMa cannot jointly explain [a] or [c2].
My belief enjoys both kinds of security, but it would be in place for me to wonder
whether I’m right—especially so, since I cannot recall how my belief originated.
And of course my assertion about Kripke can easily be challenged.

23. Bar-On does not deny that I can assert that I am angry at someone, but I don’t know who
(2004: 117, 95). Her claim, rather, is that I cannot intelligibly assert this on the heels of asserting that
I’m angry at you.

Matters might seem more promising with [b] and [c1]. If someone’s belief that a
is F is IETM, it is not based on certain evidence concerning the subject part;
if someone’s belief that a is F is IETMa, it is not based on certain evidence
concerning the ascriptive part. So, one might think, if someone’s belief is both
IETM and IETMa, it is not based on any evidence; in Bar-On’s terminology, it is
groundless. (Put in the terminology of section 3.2: knowledge that is IETM and
IETMa is unsupported.) And that would presumably explain why the avowal that a
is F seems to the avower not to be based on (her) evidence—after all, it isn’t based on
her evidence, and we often know whether a belief of ours is based on our evidence.
Likewise, provided that we have some inkling that the avower’s belief enjoys both
kinds of security, we would not expect her to have any reasons or grounds.
However, if someone’s belief is both IETM and IETMa, this does not imply
that it is groundless. The cat runs into the house; on this sole basis I rashly jump
to the conclusion that a certain dog, Fido, chased the cat. My conclusion is not
supported by other evidence: that Fido is in the neighborhood, that dogs tend to
chase cats, and so on. Still, my belief is based on some evidence, and so is not
groundless; yet it is both IETM and IETMa.

3.4.4 Neo-expressivism and the asymmetric presumption of truth


The main role of neo-expressivism is to answer the second part of question (i): to
explain why “we strongly presume” (2004: 311) that avowals are true. So: what is
neo-expressivism, exactly, and how does this explanation work?
Following Sellars 1969, Bar-On distinguishes three senses of ‘expression’:

EXPa the action sense: a person expresses a state of hers by intentionally doing something.
For example, . . . I intentionally give you a hug, or say “It’s so great to see you” . . .
EXPc the causal sense: an utterance or piece of behavior expresses an underlying state by
being the culmination of a causal process beginning with that state. For example, one’s
unintentional grimace . . . may express in the causal sense one’s pain . . .
EXPs the semantic sense: e.g., a sentence expresses an abstract proposition, thought, or
judgment by being a (conventional) representation of it.
(2004: 248; see also 216)24

24. Notation changed slightly to conform to Bar-On’s later usage (in, e.g., Bar-On 2011).

The fact that the word ‘express’ is used freely throughout might raise a worry
about circularity, but it is clear that c-expression and s-expression, at least, are
supposed to be characterized reductively. A subject’s utterance or piece of
behavior c-expresses his state M iff the utterance or behavior is the culmination
of a causal process beginning with M. And (ignoring context-dependence for
simplicity): a sentence s-expresses the proposition that p iff it (conventionally)
means that p. The situation is less clear with a-expression. Initially, Bar-On rests
with some examples and a necessary condition for a-expressing—it “requires the
performance of an intentional action” (216). Later, however, she tentatively
suggests necessary and sufficient conditions: “A person can be said to a-express
a mental state M through a bit of behavior, provided that the behavior is
an intentional act on the person’s part, and M is the reason (or ‘rational cause’)
for the act” (249). Notice that c-expression is factive, in the sense that if one
c-expresses a state M then it follows that one is in M. A-expression, it turns out
later, is not factive; I may a-express pain, for instance by saying ‘Ow!’ when the
dentist puts a fearsome-looking but in fact innocuous instrument in my mouth,
even though I am not in pain (322). Still, the dominant locution, ‘a-expressing
one’s mental state M,’ is factive: if I express my pain, as opposed to merely
expressing pain, then I am in pain (323).
One might a-express one’s state of excitement in a variety of ways: clapping, or
uttering ‘Yea!’/‘This is so great!’/‘I’m so excited’ (253). As Bar-On emphasizes,
although the behavior is quite different in each case—only the last is an assertion
that one is in the state—the processes that issue in the behavior are importantly
alike. An obvious point of comparison is that one might choose any of these
four ways to communicate the proposition that one is excited.
Clapping is not a “natural expression” of excitement—at least not in the way
that blushing is a natural expression of embarrassment. Clapping a-expresses
(and presumably also c-expresses) one’s excitement; blushing just c-expresses
one’s embarrassment. Although clapping may be fruitfully classified with the
avowal that one is excited, the differences between blushing and avowing that one
is embarrassed are perhaps more important than the similarities. In any event,
neo-expressivism is not committed to any strong thesis about the kinship
between avowals and involuntary natural expressions like blushing and wincing.
As Bar-On says, “for purposes of my Neo-Expressivist account of avowals’ secur-
ity, the reader need only allow that there is a legitimate sense in which subjects
can express their present mental states using a variety of acquired, convention-
governed expressive vehicles or means. The core claim is that in that sense an
avowal, too, can be said to be expressive of one’s present mental state” (265, first
emphasis added). The “legitimate sense” is evidently a-expression, so the core
claim of neo-expressivism is simply that avowals (and, less importantly, other
sorts of utterances) can be used to a-express the utterer’s present mental states: in
particular, the avowal ‘I am in M’ can be used to a-express one’s mental state M.
Consider, to take Bar-On’s example, “Jenny’s avowal, ‘I really want the teddy!’”
(315), which s-expresses that she really wants the teddy. We “strongly presume,”
let us grant, that Jenny’s avowal is true. (Assume the fact that Jenny is in this state
does not entail the existence of the teddy—she merely wants relief from teddy-
lessness.) What is the explanation?
We are now in a position to offer an expressivist rendering of the presumption of truth
governing avowals. To regard a linguistic act as an avowal is to take it as an expression
rather than a mere report of the ascribed condition. It is to take the avowing subject to be
speaking directly from her condition, where the self-ascription tells us what condition is
to be ascribed to her. All that we as audience need to know to identify the condition being
expressed is linguistic uptake. Note, however, that insofar as we take the subject to be
expressing her condition (in the causal [c-expression] and action [a-expression] senses),
we take it that she is in the relevant condition—the condition that is semantically referred
to by the self-ascription, which is the very condition that would render the self-ascription
true. Thus, the judgment that is semantically expressed by her avowal is what we take to
be true, as long as we take Jenny to be expressing her condition . . .
An avowal is asymmetrically presumed to tell us the truth about the subject’s condition
insofar as it is taken to be the product of an expressive act of avowing—an act whose point
is to give vent to the subject’s present condition—and thus is seen as taking us directly to
the state it ascribes. (2004: 316–18)

In a nutshell, the explanation of why we strongly presume that Jenny’s assertion
is true is that we strongly presume that by uttering ‘I really want the teddy!’ she is
a-expressing (and/or c-expressing) her desire for the teddy, from which it trivially
follows that she really does want the teddy. Hence we strongly presume that she
really does want the teddy.
Conceding for the sake of the argument that this explanation is correct,
the bulge under the carpet has only been moved elsewhere. Suppose P trivially
entails Q, and that we presume both P and Q to be true. If our presuming Q to
be true is puzzling and needs explaining, then our presuming P to be true must be
puzzling and in need of explanation too. For it is not puzzling why we make
elementary inferences, and so if it is not puzzling why we presume P to be true, it
cannot be puzzling why we presume one of its trivial consequences to be true.
Applying this to the case at hand, our presuming that Jenny’s avowal is true may
have been explained, but the explanandum—our strongly presuming that by
uttering ‘I really want the teddy!’ Jenny is a-expressing (or c-expressing) her
desire for the teddy—is equally in need of explanation. And Bar-On does not
attempt to explain it.
Indeed, it is clear that the “core claim” of neo-expressivism (mentioned four
paragraphs back) is far too weak for any such explanation to be forthcoming. The
easiest way to see this is to note that only a terminological stipulation could stand
in the way of saying that one can a-express one’s state of seeing a red cardinal by
assertively uttering ‘I see a red cardinal.’ This might not count as an a-expression
for Bar-On because she denies that seeing a red cardinal is a mental state (16). But
even if it isn’t, such a terminological restriction prevents us from classifying
genuinely similar expressive acts together. With the restriction lifted, we can truly
say that subjects can a-express their present perceptual states (e.g., seeing a red
cardinal) “using a variety of acquired, convention-governed expressive vehicles
or means.” However—as Bar-On in effect points out—we do not strongly
presume that such self-ascriptions are a-expressions of one’s present perceptual
states: an overconfident birdwatcher might easily falsely assert that she sees a red
cardinal.
The three accounts just examined are each rich and ingenious, but they suffer
from a variety of defects. Next, Chapter 4 sets the stage for the alternative theory
to be defended in the rest of the book.

4 The Puzzle of Transparency

We could adopt the following way of representing matters: If I, L.W., have
toothache, then that is expressed by means of the proposition ‘There is
toothache’.
Wittgenstein, Philosophical Remarks

[N]othing in the visual field allows you to infer that it is seen by an eye.
Wittgenstein, Tractatus

4.1 Introduction
This book began with the observation that beliefs and perceptions are—at least
on the face of it—transparent, in the following sense: one can discover that one
believes that it’s raining, or sees a blue mug, simply by considering, respectively,
the weather and the mug. And, as Chapter 3 discussed, the transparency of belief
is central to Moran’s account of self-knowledge as “self-constitution.”
But what is it, exactly, to discover that one believes that it’s raining “by
considering” the weather? If one discovers that there are mice in the kitchen by
considering the nibbled cheese, one has inferred that there are mice in the kitchen
from premises about the nibbled cheese. So an obvious and natural way of
cashing out the transparency of belief is that one’s knowledge that one believes
that it’s raining is the result of an inference from premises about the weather.
What are these premises? First, they have to include the premise that it’s raining.
If the result of my consideration of the weather is merely that it’s cloudy, or that
it’s probably raining, I am not going to take myself to believe that it’s raining. And
if the premises jointly entail that it is raining, I presumably will need to draw the
entailment if I am to take myself to believe that it’s raining, in which case I will
have reached this conclusion from the premise that it’s raining. Once I have the
premise that it’s raining in hand, are additional premises needed? Certainly none
about the weather, and any premises about another subject matter (for instance,
about myself) would at least to some extent involve rejecting the transparency of
belief. The inference, then, can be set out as a simple argument:
It’s raining.
I believe that it’s raining.

This is an instance of what Gallois calls the “doxastic schema” (1996: 47):

p
I believe that p

Since one can know that one has beliefs about oneself, the premise that p can be a
premise about oneself: for instance, that one was born in Vienna. More specif-
ically, it can be a premise about one’s current psychology: that one wants a beer,
or that one believes that Evans was a brilliant philosopher. But many of one’s
beliefs are about other matters, like the weather or the price of eggs. So in slogan
form we can say that this transparent account of the epistemology of belief
involves an inference from world to mind.
What about perception? If one discovers that one sees a blue mug by an
inference from world to mind, presumably the relevant worldly premises concern
the scene before the eyes. Could the inference be from the single premise that a
blue mug is present, with no particular assumptions about background know-
ledge needed? Clearly the account can’t be this simple, because often one knows
that a blue mug is present without having any tendency to conclude that one sees
a blue mug. For instance, if I look at a blue mug and then close my eyes, I know
that the mug is still there, but it’s obvious to me that I don’t see it. (This problem
will not be examined at length here; it is taken up later in Chapter 6.)
Are any other mental states transparent? As noted in section 1.2, knowledge
may be accepted as an addition, but that might seem to exhaust the field. For
instance, how could desires and intentions be transparent? One does not discover
that one wants to be unemployed by learning that one has just been fired, or that
one intends to go to prison by learning that one has just been sentenced to six
months. Bodily sensations, though, are much better candidates. Reasons for this
will be given in section 4.5.
Once the “transparency procedure” is clarified as an inference from world to
mind, a serious problem is evident: how can such an inference yield knowledge?
Since the relevant inference for belief has been stated precisely, the problem arises
with particular clarity here. Gallois puts it as follows:

Consider the following instance of the doxastic schema:


P1. There were dinosaurs in America.
C. I believe that there were dinosaurs in America.
The argument from (P1) to (C) is not deductively valid. Nor is it inductively valid. I need
not be aware of any inductive correlation between there being dinosaurs in America and
my believing there are in order for me to legitimately infer (C) from (P1). Finally, the
inference of (C) from (P1) is not validated as an example of inference to the best
explanation. The best explanation for their having been dinosaurs in America is not
that I believe that there were. (1996: 47; premises re-lettered)

A distracting feature of this example is that Gallois’s actual inference from (P1) to
(C) is quite commendable. Gallois is a professor of philosophy and (unsurpris-
ingly) he knows this. He also knows that college teachers, even professors of
philosophy, know—and hence believe—basic facts about the geographic distribu-
tion of dinosaurs.1 Further, he knows that there were dinosaurs in America.
Evidence of this sort in Gallois’s possession supports the claim that he believes
that there were dinosaurs in America, just as it supports the claim that his colleague
Professor Van Gulick believes that there were dinosaurs in America. Still, as Gallois
in effect says, this does not support the doxastic schema in general. The defense just
given of Gallois’s actual inference from (P1) to (C) plainly does not work if we
replace (P1) with a premise not commonly known by philosophy professors:
P1*. Saurophaganax is the state fossil of Oklahoma.
C*. I believe that Saurophaganax is the state fossil of Oklahoma.
The apparent worthlessness of the doxastic schema can be made more vivid by
noting that the intuitive case for the transparency of belief is indifferent as to
whether the belief in question is true or not. Suppose I consider whether I believe
that the population of Cambridge, MA, is less than 100,000. I turn my attention
outward, to the fair city of Cambridge and its diverse populace. Since (as I would
put it) the population of Cambridge is less than 100,000, I arrive at the conclusion
that I do believe that the population of Cambridge is less than 100,000. If this
process involves reasoning in accord with the doxastic schema, then I have
inferred that I believe that the population of Cambridge is less than 100,000
from the premise that it is less than 100,000. As it happens, the population of
Cambridge is over 100,000. Demographic facts are poor evidence for my psych-
ology in any case, but demographic non-facts are no evidence at all. One apparent
lesson from post-Gettier epistemology is that knowledge cannot be based on a
“false step,” or rely on a “false lemma.” Suppose I conclude that either Jones owns
a Ford or Brown is in Barcelona from the ostensible fact that Jones owns a Ford.
If my conclusion is true because Brown is in Barcelona, but Jones does not own a
Ford, then my conclusion does not amount to knowledge.2 So how can reasoning
in accord with the doxastic schema allow me to know that I believe that the
population of Cambridge is less than 100,000?3

1. We will assume throughout that knowledge entails belief.

It seems initially plausible that beliefs and perceptions are transparent. How-
ever, it is hard to see how one can sometimes come to know that one is in such
states by considering (typically non-psychological) topics that have little or no
bearing on whether one is in the state. The puzzle of transparency, mentioned
briefly in Chapter 1, is the problem of resolving this tension. This chapter
examines the puzzle more thoroughly.
There are three broad ways of responding. First, one might attempt to under-
mine the argument that the transparency procedure is not knowledge-conducive—
in the terminology of Kripke 1982: 66, that would be to give a “straight solution” to
the puzzle. Second, one might admit that although we do follow the transparency
procedure, it does not yield knowledge. That would be to give a “skeptical
solution”: a substantial amount of our so-called self-knowledge turns out to be
nothing of the kind. Third, one might deny that we follow the transparency
procedure (except, perhaps, infrequently and in special situations). That would
be to reject the puzzle as based on a false presupposition.
Gallois and Moran both respond to the puzzle of transparency, and although
they have no sympathy with either of the last two responses, they don’t straight-
forwardly claim to give a straight solution. Still, in ways that will become clear,
they do try to say something in favor of the transparency procedure: their
arguments are examined in the following two sections (4.2 and 4.3). Section 4.4
turns to Dretske’s important work on the puzzle as it arises for perception, and
his tentative endorsement of a skeptical solution. Section 4.5 discusses the puzzle
of transparency in the special case of sensations. Sections 4.6 and 4.7 compare the
puzzle of transparency with, respectively, the problem of other minds, as discussed
by Kripke in Wittgenstein on Rules and Private Language, and Hume’s remarks
on the self in the Treatise. Section 4.8 notes Russell’s anticipation of the puzzle.

4.2 Gallois on the puzzle


Gallois’s own response to the puzzle of transparency is somewhat indirect. He
does not try to rebut arguments that reasoning in accord with the doxastic

2
That the examples in Gettier 1963 involved reasoning from a false premise was pointed out in
Clark 1963.
3
This problem was also noted by Hellie 2007a.

schema cannot lead to knowledge. Rather, he argues that unless I reason in
accord with the doxastic schema, “I will form a deeply irrational view of my
non-doxastic world. Applying to myself the concept of belief, where that
application is not warranted by evidence, allows me to form a more rational picture of
the world” (1996: 76).
Since this conclusion is consistent with the claim that reasoning in accord with
the doxastic schema is not knowledge-conducive, Gallois’s argument leaves
open the apocalyptic possibility that belief without knowledge is the price of
rationality.4 Despite its limitations, the argument is certainly of interest; does it
succeed?
Gallois’s argument begins with the following case. Suppose that I am “self-
blind” with respect to my beliefs (see section 2.2.9): I am perfectly rational, have
the concept of belief, and my opinions about what I believe are formed solely
using the third-person methodology I employ to discover what someone else
believes. I do not, then, reason in accord with the doxastic schema.
Now suppose “I start out as a creationist, and end up being converted to the
theory of evolution” (Gallois 1996: 76). I start out believing that p (say, that “life
was created [in] 4,004 BC”) and end up believing that not-p. Suppose further that
I do not have enough evidence for the third-person methodology to reveal either
belief. So, “I cannot think of myself as changing my beliefs” (76). The crucial step
in the argument is the next, where Gallois argues that this commits me to a
“deeply irrational view of my non-doxastic world”:
Yesterday I could not tell that I held the creationist belief about life on Earth. How, then,
will I recollect yesterday’s belief about the age of life? Not like this. Yesterday I believed
that life was six thousand years old. After all, I have never attributed such a belief to
myself. Instead, I will recollect my believing yesterday that life was six thousand years old
like this. Yesterday life was six thousand years old. (1996: 76)

According to Gallois, I believe that life was not created in 4,004 BC, and I also
believe that yesterday life was created in 4,004 BC—more concisely, since ‘yesterday’
is vacuous, that life was created in 4,004 BC. This is not merely to have a
metaphysically strange world view, but to have contradictory beliefs. One way—
and, we may grant, the only way—of avoiding this result is to reason in accord with
the doxastic schema, thus enabling me to recollect that yesterday I believed that life
was created in 4,004 BC.
Gallois then gives other examples (75–6), again arguing that if I do not employ
the transparent inference in these special cases, then my view is “deeply

4 Nonetheless, Gallois thinks that reasoning in this way is knowledge-conducive (1996: 3).

irrational.” He then gives a complicated argument for generalizing this result
across the board (ch. 5).
The steps in Gallois’s overall argument each deserve discussion in their own
right, but for present purposes we can just focus on his treatment of the first
example.5 If a rational animal yesterday believes that p, and today acquires
compelling evidence that not-p, it changes its mind. The belief that p is lost,
and the belief that not-p takes its place. Suppose for the moment that the animal
in question has no concept of belief. How will it “recollect yesterday’s belief?”
Ex hypothesi, it will not remember that yesterday it believed that p. But neither
will it “recollect” that yesterday, p. The animal will simply believe that not-p.
Adding the assumption that the animal possesses the concept of belief, and is
able to attribute beliefs to itself and others via third-person means, makes no
apparent difference. If the animal has no evidence that yesterday it believed that
p, then it will change its mind without realizing that it does so. It will not
“recollect” that yesterday, p, but this does not mean that there is anything
wrong with its memory. On the contrary, memory would be useless if storing
the fact that not-p in memory did not prevent one from continuing to “remem-
ber” that p. Gallois’s argument thus has a false premise: I will not “recollect
yesterday’s belief about the age of life” at all.
Gallois’s modest defense of the doxastic schema is unsuccessful. Does Moran
do any better?

4.3 Moran on the puzzle


As Chapter 3 discussed, Moran’s account of knowledge of one’s beliefs appeals to
transparency. He endorses the “Transparency Condition, which tells us that I can
answer a question concerning my belief about, e.g. what happened last week,
not by considering the facts about myself as a believer but by considering what
happened last week” (2004: 457). Moran is less explicit than Gallois about what
this amounts to, and in fact denies at various points that it involves an inference
from—to take Moran’s example—a premise concerning what happened last
week.6 But his denial may at least partly be a matter of terminology. If the
“inference” corresponding to the doxastic schema generates knowledge, it can’t
be because the premise is good evidence for the conclusion. In that respect it is
quite unlike paradigmatic cases of good inference; accordingly, one might think it

5 For more on Gallois’s argument, see Brueckner 1999, Gertler 2010: ch. 6.
6 For example, Moran endorses the claim that one knows what one believes “immediately” (2001: 10–12), and glosses this in part as “involv[ing] no inference from anything else” (90).

unhelpful to use the term. Moran does not agree with Shoemaker (e.g., 1988) and
Boyle 2011 that a rational agent who believes that p thereby knows that she
believes it: the transparency procedure, on Moran’s conception, produces
knowledge of what one believes.7 One might well believe that p and not realize that one
does, because the question has never arisen. So Moran thinks that the
transparency procedure involves some sort of “transition” from—as I, the subject, would
put it—the fact that p to the conclusion that I believe that p. The doxastic schema
captures this transition, so Moran should not think it entirely misleading.
The puzzle of transparency is not prominent in Authority and Estrangement
but is emphasized later, when Moran replies to a variety of commentators:
[H]ow could it be even permissible, let alone some kind of normative requirement, for
someone to answer a question concerning what some person’s belief or other attitude is
not by consideration of the fact about that person but by consideration of the facts about
last week? Those are two quite distinct matters we might be asking about, the events of last
week, and some person’s current state of mind. (2004: 457)

More generally:
[H]ow can a question referring to a matter of empirical psychological fact about a
particular person be legitimately answered without appeal to the evidence about that
person, but rather by appeal to a quite independent body of evidence? (Moran 2003: 413)

Moran’s answer appeals to a crucial assumption that he thinks we all make:


[M]y thought is that it is only because I assume that what I actually believe about X can be
determined, made true by, my reflection on X itself, that I have the right to answer a
question about my belief in a way that respects the Transparency Condition.
(2003: 405–6; emphasis added)8

Absent this assumption,


[the person] would have to think that, even though he is considering the reasons in favor
of P and coming to some conclusion about it, something other than that consideration is
determining what his actual belief about P is. That can happen, of course. It can happen
that processes having nothing to do with deliberation determine the beliefs I have that
never become objects of assessment or critical reflection. And in certain situations, with
respect to certain particularly fraught subject matters, it can happen that when I do
deliberate about the question I either fail to arrive at a stable conclusion, or the conclusion
I arrive at explicitly is one that, for some reason, I suspect may not be my genuine or
abiding belief about the matter. In that sort of case, all the deliberating or critical reflection

7 See Moran 2004: 467–8.
8 This idea is present in Authority and Estrangement (2001: 66–7), although stated more compactly.

may be so much rationalization, a well-meaning story I tell myself that has little or
nothing to do with what my actual belief is. This is a familiar enough situation of
compromised rationality. But as familiar as it is, I don’t think it should distract us from
the pervasive extent to which we take it for granted that the conclusions we arrive at in the
course of deliberation really do represent the beliefs we have about the matters in question,
and I don’t see how anything like ordinary argument or deliberation could have anything
like the role in our lives that they do if this were not the case. The aim of deliberation is to
fix one’s belief or intention, and it could not do so if in general the conclusion of one’s
deliberation . . . left it as an open question, one needing to be answered in some other way,
what one’s actual belief about the matter is. And if there is no additional step to be made
here, if I am entitled to assume that the conclusion of my reasoning tells me what my
belief . . . is, then I think we have the form of the only kind of vindication that the
Transparency Condition could have. (2004: 466; emphasis added)9

As formulated at the end of this passage, the crucial assumption is this: “the
conclusion of my reasoning tells me what my belief is.” According to Moran, it is
because we make this assumption that the puzzle of transparency is solved, or at
least defanged.
What does Moran mean by ‘conclusion’? Suppose I do some reasoning and
thereby acquire the belief that my colleague Bob is in his office. One could put
this by saying that the conclusion of my reasoning is that Bob is in his office. That
is, the conclusion of my reasoning is precisely the proposition I end up believing
as a result of the reasoning. But this is not what Moran means, because then it
would be trivial that conclusions are believed, and plainly Moran does not take it
to be trivial. Indeed, he suggests that in a situation of “compromised rationality”
my conclusion “may not be my genuine or abiding belief about the matter.”10 For
example, as a result of thinking about my tiny group of followers on Twitter,
the striking absence of comments on my blog posts, and the like, I assert dolefully
to a friend that I am a boring person. That is what the evidence points to, but
was my assertion really sincere? Perhaps this is just false modesty, and I was
expecting my friend to vigorously disagree. After all, I continue to tweet and blog
interminably about the issues of the day. If we take the “conclusion” of my
reasoning to be that I am a boring person, then we can say that this may be a
case where the conclusion of my reasoning is not something I believe. So, to a first
approximation, the “conclusion” of a piece of reasoning, in Moran’s intended
sense, is a proposition I would be prepared to assert (perhaps insincerely), as the
result of that reasoning.

9 The unelided passage concerns intention as well as belief.
10 Another sort of case in which one’s “conclusion” may not be believed is when one reasons
under a supposition (at least on the orthodox view of supposition: see fn. 21 of Chapter 1). However,
this is not what Moran has in mind.

What does Moran mean by saying that the conclusion of my reasoning “tells
me what my belief is”? Earlier in the above passage he puts the crucial assumption
this way: “the conclusions we arrive at in the course of deliberation really do
represent the beliefs we have about the matters in question.” Evidently a
conclusion “represents” a belief I have just in case I believe the conclusion. The
assumption, then, is this: I believe the conclusions I arrive at in the course of
deliberation. We can accordingly dispense with ‘tells me what my belief is,’ and
rewrite the formulation at the end of the passage thus: I assume that I believe the
conclusion of my reasoning.
Now suppose I do some reasoning about whether the pub is open, and
reach the happy conclusion (in Moran’s sense) that it is open. The crucial
assumption—that I believe the conclusion of my reasoning—is supposed to
underwrite my affirmative answer to the question Do I believe that the pub is
open?. We can distinguish two relevant readings of the assumption, depending on
whether the definite description takes wide scope with respect to ‘assumes.’ On
the wide-scope reading, the conclusion of my reasoning is such that I assume:
I believe it.
That is, in this case, I assume:
(AW) I believe that the pub is open.

And on the narrow-scope reading, I assume:


(AN) I believe the conclusion of my reasoning about options for beer,
whatever that is.
If I am entitled to (AW), then this does indeed vindicate the Transparency
Condition, and so solves the puzzle of transparency. But of course this is no
advance at all, since the question of whether I am entitled to (AW) is precisely
what is at issue.
What about the second reading of the assumption? This will do me no good
unless I know the conclusion of my reasoning about whether the pub is open.
And if I do know what the conclusion is, I can argue straightforwardly as follows:
P1. The conclusion of my reasoning about whether the pub is open is that it
is open.
P2. I believe the conclusion of my reasoning about whether the pub is open.
C. I believe that the pub is open.
However, this does not seem to be a particularly attractive account of how I know
that I believe that the pub is open. For one thing, it assumes that I know what my

conclusion (in Moran’s sense) is. How do I know that? This seems no less
puzzling than the question of how I know what I believe. More importantly,
the epistemology on offer is not at all transparent, and so is clearly not Moran’s.
(Recall this quote from Moran in section 3.3: “I can report on my belief about
X by considering (nothing but) X itself” (emphasis changed).) (Cf. Shoemaker
2003: 401.)
Neither Gallois nor Moran has ameliorated, let alone solved, the puzzle of
transparency for belief; let us now turn to Dretske’s classic discussion of the
puzzle as it arises for perception.

4.4 Dretske on the puzzle


The title of Dretske’s paper “How do you know you are not a zombie?” serves to
dramatize the puzzle of transparency for perception.11 But the mention of
“zombies” might mislead. In more-or-less standard usage, zombies are creatures
who are physically exactly like awake and alert human beings, but who are not
“phenomenally conscious”—there is “nothing it is like” to be a zombie. Zombies
are frequently presumed to have a typical package of intentional mental states.
So zombies believe that it’s raining, and see ducks, although of course their
perceptual states are devoid of any “qualia.” For those who think that this
conventional sort of zombie could have existed, the question How do you know
you are not a zombie? can seem pressing. After all, zombies are (arguably) firmly
convinced that they aren’t zombies—just like us.12
Importantly, Dretskean zombies are not the standard sort, and epistemological
issues about qualia are only of peripheral relevance to Dretske’s concerns. In
Dretske’s usage, “zombies [are] human-like creatures who are not conscious and,
therefore, not conscious of anything” (2003b: 9, n. 1). A Dretskean zombie is
simply a superficial human look-alike who behaves in humanlike ways, and who
lacks intentional states; in particular, a Dretskean zombie sees nothing and
believes nothing. One day the Sony Corporation will produce mindless robots
to help around the house, so sophisticated that the casual observer will take them
to be normal humans—Dretskean zombies are rather like that. The possibility of
standard zombies is controversial; in contrast, only a hard-line behaviorist would
deny that Dretskean zombies could have existed.13

11 See also Shoemaker 1963, especially 83–4.
12 For extensive discussion, see Chalmers 1996: ch. 5.
13 Dretske briefly alludes to “some readers who doubt that [Dretskean zombies] are possible”
(2003b: 10, n. 1), so to be on the safe side the explanation of Dretskean zombies in the text should be
viewed as a friendly elaboration or amendment.

Dretske states the problem as follows:


In normal (i.e. veridical) perception, then, the objects you are aware of are objective,
mind-independent objects. They exist whether or not you experience them . . . Everything
you are aware of would be the same if you were a zombie. In having perceptual experience,
then, nothing distinguishes your world, the world you experience, from a zombie’s. This
being so, what is it about this world that tells you that, unlike a zombie, you experience it?
What is it that you are aware of that indicates that you are aware of it?
(2003b: 1, note omitted)

One of Dretske’s examples will be useful. Suppose I see a bottle of beer in the
fridge. The light is good, the bottle is in plain view, and my ability to recognize
bottles of beer (and fridge interiors) is exemplary. I thereby come to know that
there is beer in the fridge. Now the beer has no special connection to visual
experience—provided I can’t see myself reflected in the bottle, the beer does not
“indicate that I am aware of it.” The properties of the beer that I can detect by
sight are properties that the beer has when I am not seeing it; put more generally,
the world as revealed by vision does not have vision in it. Thus the presence of the
beer does not favor the hypothesis that I see it over the “skeptical hypothesis” that
I am a (Dretskean) zombie, and hence do not see it. The evidence provided by
vision would be exactly the same even if I were a zombie. That is, I can’t come to
know that I see beer solely on the basis of facts about the scene before my eyes.
And yet: that’s how I do it! I turn my attention to the bottle and correctly
conclude that I see beer. This is an instance of the puzzle of transparency for
perception.
Dretske also briefly mentions the puzzle of transparency for belief:

Our question . . . is a question about how one gets from what one thinks—that there is beer
in the fridge—to a fact about oneself—that one thinks there is beer in the fridge. What
you see—beer in the fridge—doesn’t tell you that you see it, and what you think—that
there is beer in the fridge—doesn’t tell you that you think it either. (2003b: 2)

However, Dretske does not canvass any answers to the question of “how one gets
from what one thinks” to the fact that one thinks it, concentrating instead on the
puzzle of transparency for perception.
If perception is transparent, I know that I see beer merely by attending to the
bottle in the fridge. The problem is that the facts about the beer gathered by using
my eyes are not good evidence that I see it. Notice, though, that vision also gives
me information concerning the spatial relation between the beer and myself,
namely that I am facing the beer. Now this doesn’t help much on its own—I can
easily face the beer and not see it for the simple reason that I might have my eyes
closed. But what if we add in evidence (provided by proprioception or

kinesthesia) about the disposition of my eyelids, and other relevant bodily parts?
Can’t I then know that I see beer? Admittedly, vision is not then the only source
of my evidence, but this revision preserves the basic idea that one knows what
one sees by attending to one’s environment, broadly construed.
Dretske in effect considers the revision and dismisses it in a few sentences:
“Zombies, after all, have bodies too . . . A zombie’s arms and legs, just like
ours, occupy positions. Their muscles get fatigued” (2003b: 2). In the skeptical
scenario, the zombie’s body also faces the beer, the zombie’s eyes are open, etc.
This additional evidence does not discriminate, then, between the scenario in
which I see beer and the scenario in which I am a (Dretskean) zombie.
But even by Dretske’s lights this is too quick. His question is: “what is it about
this world [of ‘objective, mind-independent objects’] that tells you that, unlike a
zombie, you experience it?” And his dismissal of the present proposal gives the
impression that an answer needs to be absolutely skeptic-proof, displaying a body
of evidence gained through perception that entails that I see beer. In fact, Dretske
is not setting the bar so high: the challenge he poses is to explain how I know that
I see beer by observing the environment (including, perhaps, my body). And the
suggestion about proprioception and my relation to the beer is, in effect, the idea
that I find out that I see beer on the basis of the sort of evidence that would
support the claim that someone else sees beer. (Some of this evidence comes from
different sources: I know that my eyes are open by proprioception, but I know
that another’s eyes are open by vision.) I can come to know that someone else
sees beer by noting that there is a suitably placed and salient bottle of beer, that
the person’s eyes are open and converge on the bottle, and the like.14 Or so we
may assume—skepticism about other minds is not the issue. Hence the problem
with the present suggestion is not that it fails to supply a way of knowing that
I see beer.
The problem, rather, is that I plainly do not need to rely on supplementary
proprioceptive evidence to know that I see beer. Suppose I am recovering from
surgery and have lost my proprioceptive sense. The room is completely dark, and
the surgeon turns on a small distant light, asking if I see a bright spot. Since I have
no independent means of knowing that my eyes are open, or that they are
converging or diverging, on the present suggestion I do not know that I see a
bright spot because I lack adequate evidence. But it seems most implausible that
the surgeon’s question is a difficult one to answer! I know that I see a bright spot,
just as I do in the normal case.

14 Uncontroversially, I might sometimes have good evidence from vision that I see beer (perhaps
I see myself in a mirror, staring longingly at a six-pack). But this is not a typical case.

The point can be reinforced by considering other modalities, which one would
expect to have basically the same first-personal epistemology as vision. Suppose
I hear the distinctive sound of a glass bottle breaking, and cannot identify its
direction. I can know that I hear the sound without checking that my ears aren’t
blocked, or gathering further evidence about the location of the bottle and the
orientation of my ears.
And in any case, the proposal is a nonstarter for the puzzle of transparency
in the case of belief (or knowledge). For example, I can (apparently) know
that I believe that Dretske wrote Naturalizing the Mind merely by recalling the
non-psychological fact that Dretske wrote Naturalizing the Mind. The kind of
evidence that I need in the case of another’s belief is not necessary in the slightest.
I do not need to appeal to supplementary evidence about myself—for instance,
that I have on occasion uttered ‘Dretske wrote Naturalizing the Mind,’ that I have
studied Dretske’s works, that philosophy book catalogs are stacked by my toilet,
and so forth.15
Dretske considers and rejects a variety of other attempts to solve the puzzle.
We are left, he writes, “with our original question: how do you know you are not a
zombie?”, and the paper ends on a rather ominous note:
To insist that we know [we are not zombies] despite there being no identifiable way we
know it is not very helpful. We can’t do epistemology by stamping our feet. Skeptical
suspicions are, I think, rightly aroused by this result. Maybe our conviction that we know,
in a direct and authoritative way, that we are conscious is simply a confusion of what we
are aware of with our awareness of it (see [Dretske 2003a]). (9; note omitted)

At the end of the paper Dretske refers to, these “skeptical suspicions” are
elaborated:
[H]ow do I discover that I think and feel, that I am not a zombie? I am tempted to reply,
I learned this the same way I found out a lot of other things—from my mother. She told
me. I told her what I thought and experienced, but she told me that I thought and
experienced those things. Children make judgments about a variety of things before they
understand the difference between how they judge and see the world to be (e.g., that there
is candy in box A) and their judging and seeing it to be that way. Three-year-olds know,
and they are able to tell you, authoritatively, what they think and see (e.g. that there are
cookies in the jar, that Daddy is home, etc.), before they know, before they even
understand, that this is something they think and see. Somehow they learn they can

15 This failure to generalize to the belief case afflicts some of the other proposals Dretske
discusses, including the second one discussed below, and the proposal that one can tell that one is
seeing by noting one has a “point of view,” which zombies do not have (Dretske 2003b: 2–3). See also
fn. 21 below.

preface expressions of what they think (Daddy is home) with the words “I think,” words
that (somewhat magically) shelter them from certain forms of correction or criticism.
Parents may not actually tell their children that they think—for the children wouldn’t
understand them if they did—but they do teach them things (language must be one of
them) that, in the end, tell them they think. Children are, at the age of two or three,
experts on what they think and feel. They have to learn—if not from their mothers, then
from somebody else—that they think and feel these things. Nonhuman animals never
learn these things. (Dretske 2003a: 140–1, note omitted)16

This passage might seem to give a solution to the puzzle of transparency because
Dretske is at least suggesting one way in which one could learn that one is not a
zombie. If my mother tells me that I saw a Saurophaganax fossil at the museum
then what she tells me obviously entails that I am not a zombie. Since there’s no
special barrier to knowing what she tells me, this is one way I can learn I am not a
zombie.
Given Dretske’s general enthusiasm for transparency, though, this amounts to
a skeptical solution. I may learn from my mother that I am not a zombie, but one
learns little about one’s current mental life from others. Typical self-ascriptions
of beliefs and perceptions are not made on the basis of testimony. Dretske seems
at least tempted by the idea that they are the result of a transparent inference
from world to mind, in which case, by Dretske’s lights, they do not amount to
knowledge.
But denying that my transparency-based judgment that I believe that Fred was
educated in Minnesota amounts to knowledge is not very plausible. After all, I do
believe that Fred was educated in Minnesota—you can check this for yourself by
sending me an email. And even if one is prepared to admit that these judgments
are in fact ill-founded, Dretske does not offer any explanation of why we are
prone to such serious errors.
Neither the half-hearted straight solutions of Gallois and Moran, nor Dretske’s
skeptical solution, are very appealing. Let us now make matters worse by extend-
ing the puzzle to sensations.

4.5 The puzzle of transparency for sensations


According to Bar-On, “Phenomenal avowals do not seem good candidates for
the application of the transparency method” (2004: 116). An example of a

16 The omitted note characterizes Evans’ and Shoemaker’s accounts of self-knowledge as “attractive,” but Dretske can’t have it both ways.

“phenomenal avowal” is an assertive utterance of ‘I have a toothache,’ about
which Bar-On writes:
Not only do I not normally tell that I have a terrible toothache by attending to the
condition of my tooth, but if I were to consider the condition of my tooth directly, it
hardly seems that the security of my findings would suffice to ground the security that my
pronouncement “I have a terrible toothache” seems to have. (2004: 118)

This passage makes two points. First, there is no parallel to Evans’ “eye directed
outward” in the case of a toothache: I do not find out that I have a toothache “by
attending to the condition of my tooth,” as I find out that I see a duck by
attending to the duck.
The second point can be usefully recast as an argument for the first. I am not
particularly good at identifying ducks—I will defer to ornithologists on this
matter. Hence, if I assert that I see a duck on the ostensible basis that there is a
duck in plain view, I may well be wrong, and should allow the experts to correct
my pronouncement. No problem here, because such pronouncements plainly
enjoy no impressive kind of security. I am also not particularly good at finding
out the condition of my teeth—I will defer to my dentist on this matter.
However—and here there is a contrast with the state of seeing a duck—I am
much better at finding out that I have a toothache. Provided we understand a
“toothache” to be a distinctive kind of sensation, and as not entailing the
existence of teeth, then I am the authority, not my dentist. Hence my way of
finding out that I have a toothache can hardly be to concentrate on the condition
of my teeth, else my dentist would be able to correct my pronouncements about
my own sensations.
To assess these points, some distinctions and clarifications are needed. Imagine
you have a toothache. An unpleasant and distinctive disturbance is occurring,
and moreover one that has a felt location: the disturbance is felt as occurring in a
region of your jaw. That disturbance is the toothache. Guided by its felt location,
you may indicate the offending tooth to your dentist. The toothache and its
features can be the object of attention: you may attend to its throbbing quality, for
instance.
Your awareness or experience of the toothache should not be confused with
the toothache itself, any more than one’s awareness of a duck should be confused
with the duck. Although the latter distinction is easily made, a long and
distinguished tradition in philosophy, of which Reid is the chief representative, refuses
to extend it to sensations:

Sensation is a name given by philosophers to an act of mind which may be distinguished
from all others by this, that it hath no object distinct from the act itself. Pain of every kind

is an uneasy sensation. When I am pained, I cannot say that the pain I feel is one thing,
and that my feeling it is another thing. They are one and the same thing, and cannot
be disjoined, even in imagination. Pain, when it is not felt, has no existence.
(Reid 1785/1941: 18–19)17

The felt location of pain highlights the defects of this view. “The pain I feel,” or
the toothache, is in my jaw—at least, that’s what I tell my dentist, and in a normal
case of toothache there is no need to claim that appearances are deceptive. On the
other hand, “my feeling it” is (presumably) not in my jaw, any more than my
awareness of pressure on my back is itself located on my back. Therefore the pain
I feel and my feeling it are not the same thing.
Although we say that toothaches, pains, sensations, and feelings are located in
teeth, many philosophers refuse to take this talk seriously. Toothaches, pains, and
so on, they say, if they are anywhere, are really “in the mind,” wherever that is. As
expressed by a modern materialist, this becomes the straightforward spatial claim
that toothaches are in the brain, not in the jaw; hence the title of Smart’s classic
paper, “Sensations and brain processes” (1959). For the moment, set this
revisionary view aside (it will be taken up again in section 6.3).
Return now to Bar-On’s first point, that “I do not normally tell that I have a
terrible toothache by attending to the condition of my tooth.” That is partly right:
the unpleasant disturbance that occupies my attention is not a “condition of my
tooth,” but rather an event that seems to be occurring in the vicinity. However, if
Bar-On’s first point is formulated with this correction made, then it loses all
plausibility. If it is conceded that the transparency procedure applies to seeing
ducks, there is no reason to treat feeling toothaches differently. One knows one
sees a duck by attending to the duck; likewise, one knows one feels a toothache
(the unpleasant disturbance in one’s jaw) by attending to the toothache.18
What about Bar-On’s second point, that the transparency procedure would
leave one vulnerable to correction by an examination of one’s teeth? If to feel a
toothache is to be aware of a certain unpleasant disturbance then—let us grant—
one’s claim that one feels a toothache is in principle hostage to dental
investigation. But since the nature of toothaches is quite debatable, it is unclear what sort
of discovery could show that the ostensible toothache did not exist. That
defensive reply is enough for the present chapter; this sort of worry will be treated more
extensively in Chapter 6.

17 This view is not confined to philosophers: “physical pain is exceptional in the whole fabric of
psychic, somatic, and perceptual states for being one that has no object . . . desire is desire of x, fear is
fear of y, hunger is hunger for z; but pain is not ‘of ’ or ‘for’ anything—it is itself alone” (Scarry 1985:
161–2).
18 Cf. Gordon 1996: 16–17.

In any event, it might be thought that however things stand with the puzzle of
transparency in general, sensations do not present much difficulty. Toothaches
and ducks are very different kinds of animal. In particular, ducks exist
unperceived, but it is a familiar claim that toothaches are necessarily felt, and my
toothaches are necessarily felt by me. If that is right, the puzzle of transparency is
readily solved in the case of toothaches. That a certain toothache exists entails
that I feel it. This is an example of the first kind of response—a “straight
solution”—to the puzzle of transparency.
In “How do you know you are not a zombie?”, Dretske discusses this response.
He first makes the (major) concession “that we are necessarily conscious of our
own pains and tickles” (2003b: 4); even with that granted, he argues, the objection
just invites a restatement of the puzzle. Let protopain be “something . . . that has
all the properties you are aware of when you experience pain except for the
relational one of your being aware of it. Protopain is what you have left when you
subtract your awareness of pain” (5).19 Protopain is exactly like pain, except that
protopain can exist in the absence of awareness. Pain does not occur in a zombie’s
tooth, but protopain does. Now the puzzle is this: “how do you know it is pain
you feel and not merely protopain?” (5). “What is it that tells you that what you
feel in your tooth is something you feel in your tooth, something you are actually
aware of, and not the sort of thing that can occur, without being felt, in the tooth
of a zombie?” (5).
Dretske offers an analogy. A crock is a rock that “you (not just anyone, but you
in particular) see, [a rock] that you are therefore (visually) aware of . . . So when
you see a crock, there is something you are aware of—a crock—that depends for
its existence on your being aware of it” (5). The puzzle of transparency is not
solved by saying that you know you are not a zombie because you are aware of
crocks, and crocks would not exist if you were a zombie. The reason is that
“[c]rocks, after all, look much the same as—in fact, they are absolutely
indistinguishable from—rocks” (5). And since crocks are no help, for the same reason
neither are toothaches.
This might seem a compelling analogy, but on closer examination it undermines
Dretske’s argument. What is a crock, exactly? The quotation at the start of the
previous paragraph suggests two different answers. The first part of the quotation
(before the ellipsis) suggests this: x is a crock iff x is a rock and you see x. And the
second suggests this: x is a crock iff x is a rock and you see x and x is essentially seen

19 As Lycan points out (2003: 23), Dretske’s first characterization in this quotation is weaker than
his second, since the first is silent on whether properties of pain one is unaware of are shared by
protopain. For present purposes, this difference is unimportant.

by you. Let us distinguish between the two with subscripts: crock1s are rocks that
you see, crock2s are rocks that would not exist if you weren’t seeing them.
Since (we may suppose) you have seen rocks from time to time, the occasional
existence of crock1s is uncontroversial. And Dretske seems to have crock1s in
mind: he says that if you closed your eyes, crocks would vanish “in the same
sense that husbands and wives would vanish if marital relations were banished”
(n. 9, 10). After a divorce, husbands and wives do not vanish in the sense of
ceasing to exist—the former husband and wife are still around, it is simply that
they are no longer a husband or a wife. However, if we interpret ‘crock’ as crock1,
it is not true that “when you see a crock, there is something you are aware
of . . . that depends for its existence on your being aware of it.”
Dretske is quite correct in claiming that crock1s cannot solve the problem of
how you know that you are not a zombie. True, if a rock before you is a crock1,
then you are not a zombie. But finding out that a rock is a crock1 simply is finding
out that you see the rock. Hence, if toothaches are supposed to be like crock1s,
then the puzzle of transparency is no easier for sensations than it is for rocks.
But it would be much more natural for Dretske’s opponent to insist that
toothaches are like crock2s. When I feel a toothache, she may say, I am aware
of something that is essentially felt by me: the toothache would not exist were
I not feeling it. After all, the opponent’s point is that pains are very different from
rocks. If pains are like crock1s this is not so—a crock1 is nothing more than a
boring old rock, with the usual petrological essence.
The existence of crock2s is controversial; the existence of pains, conceived
along analogous lines, is much less so. And it is clearly open to someone who
maintains that toothaches are necessarily objects of awareness to deny the
existence of protopains—“what you have left when you subtract your awareness
of pain” (2003b: 5). (Here the controversial item is the one whose esse is not
percipi, unlike the example of crock2s.) Sometimes there is something left when
the relational essence of a thing is subtracted, and moreover something that could
easily be mistaken for it: there could be a person who looks exactly like Queen
Elizabeth II, but born of different parents. But sometimes not: there is nothing
that could be easily mistaken for the number 7, despite not being odd, less than 8,
and so on. And Dretske’s opponent may insist that the properties of a toothache
one is aware of are all relational—specifically, they are properties whose
instantiation requires awareness of them. To subtract those, the opponent may say, is to
leave nothing.20

20 Sometimes relational properties are distinguished from extrinsic properties (having the same
number of angles as sides is perhaps relational but intrinsic). Dretske’s opponent might concede that
pain has a (partly) intrinsic essence E, but could still deny that there is anything with E that exists in
the absence of awareness.

So, if my toothaches are necessarily objects of my awareness (and if
proto-toothaches do not exist) then there is a solution to the puzzle of transparency in
the case of sensations, and thus an answer to Dretske’s question, ‘How do you
know you are not a zombie?’ The toothache could not exist if I were not feeling it;
further, we may assume, I know this fact. Hence I may safely conclude that I am
feeling the toothache on the basis of the information supplied by my feeling it,
namely that the toothache exists.
However, this second proposal is problematic. According to Reid (quoted
earlier), “the pain I feel” and “my feeling it” are “one and the same thing.” We
may (anachronistically) suppose this identity is intended to be necessary. So on
Reid’s view the supposition that this pain exists without my feeling it is a simple
contradiction, which provides one easy route to the conclusion that my toothache
cannot exist without my feeling it. Since Reid’s view is incorrect, a proponent of
the second proposal must reach the conclusion in another way.
Probably no argument will be offered at this point. Instead, the proponent may
insist that her finely tuned metaphysical intuition is that this very disturbance
could not have existed in the absence of awareness. But even this is not enough:
the proponent must claim not just that this very disturbance is necessarily an
object of someone’s awareness, but of hers. Such a recherché claim about essence
might be the end of an argument; it is not promising as the beginning of one.
So although Dretske does not succeed in showing the second proposal to be
entirely wrongheaded, it is not especially compelling—an adequate rejoinder to it is
simply to refuse to concede that pains are necessarily objects of the subject’s
awareness. Further—and more importantly—the proposal may well be redundant.
The proposal only applies to the very special case of sensations; it has nothing
to say about the puzzle of transparency concerning belief, or in standard cases of
perception. How does one tell, from the information that there is a duck
before one, that one sees a duck, or from the information that some rocks are
sedimentary, that one believes that some rocks are sedimentary? If the puzzle of
transparency can be solved for these cases, where the relevant objects are
certainly mind-independent, one might reasonably expect that a bit of tweaking
would yield a similar solution for sensations.21

21 The case of sensations highlights another respect in which Dretske’s eponymous question can
mislead. Dretske is not simply wondering how one knows one is not a zombie. He plainly would not
be satisfied if the only answer was: one knows one is not a zombie by inferring this from the fact that
sensations occur in one’s body—leaving the epistemology of belief and perception entirely
unexplained.

So far, we have seen three explicit contemporary discussions of the puzzle
of transparency; one might predict that previous philosophers have noticed
it—albeit through a glass, darkly. The next three sections (4.6–4.8) offer some
examples.

4.6 Kripke’s Wittgenstein on other minds


The postscript to Kripke’s Wittgenstein on Rules and Private Language discusses
Wittgenstein on the problem of other minds. Familiarly, Wittgenstein sees a
difficulty in even understanding the hypothesis that other people have sensations,
given that I know what sensations are “only from my own case”:

If one has to imagine someone else’s pain on the model of one’s own, this is none too easy
a thing to do: for I have to imagine pain which I do not feel on the model of pain which
I do feel. (Wittgenstein 1958: §302)

What is Wittgenstein’s point? After mentioning Wittgenstein’s approval of
Lichtenberg’s remark that instead of ‘I think’ we should say ‘It thinks,’ Kripke
writes:

Now the basic problem in extending talk of sensations from ‘myself ’ to ‘others’ ought to
be manifest. Supposedly, if I concentrate on a particular toothache or tickle, note its
qualitative character, and abstract from particular features of time and place, I can form a
concept that will determine when a toothache or tickle comes again . . . How am
I supposed to extend this notion to the sensations of ‘others’? What is this supposed to
mean? If I see ducks in Central Park, I can imagine things which are ‘like these’—here, still
ducks—except that they are not in Central Park. I can similarly ‘abstract’ even from
essential properties of these particular ducks to entities like these but lacking the
properties in question—ducks of different parentage and biological origin, ducks born in a
different century, and so on . . . But what can be meant by something ‘just like this
toothache, only it is not I, but someone else, who has it’? In what ways is this supposed
to be similar to the paradigmatic toothache on which I concentrate my attention, and in
what ways dissimilar? We are supposed to imagine another entity, similar to ‘me’—
another ‘soul,’ ‘mind’ or ‘self ’—that ‘has’ a toothache just like this toothache, except
that (he? she?) ‘has’ it, just as ‘I have’ this one. All this makes little sense, given the
Humean critique of the notion of the self that Wittgenstein accepts. I have no idea of a
‘self ’ in my own case, let alone a generic concept of a ‘self ’ that in addition to ‘me’ includes
‘others’. Nor do I have any idea of ‘having’ as a relation between such a ‘self ’ and the
toothache. (Kripke 1982: 124)

Kripke presents Wittgenstein’s problem as one of acquiring the concept of other
minds—that is, having the capacity to think about other minds as such—given
that “I have no idea of a ‘self ’ in my own case.” It is therefore quite different from

the traditional epistemological problem of other minds, which presupposes that
I can entertain the hypothesis that there are other minds, and that others have
sensations just like my own.
Put that way, the Kripke/Wittgenstein problem is arguably rather unexciting.
True, if one supposes that concepts of oneself and of other selves must be
somehow “abstracted” from the contents of experience, then one is likely to be
very puzzled. However, abstractionism has had a bad press for a long time.22 And
the more general empiricist idea that all concepts are somehow “acquired from
experience” is not in much better shape. But unless some strong version of
concept empiricism is in the background, it is not at all clear how to get the
Kripke/Wittgenstein problem off the ground.
Nevertheless, there is an acute problem underlying Kripke’s postscript, and it is
the puzzle of transparency for sensations. Like the traditional “problem of other
minds,” it is epistemological. Again like the traditional problem, it takes concepts
of the self and of mental states for granted. But it turns the traditional problem on
its head. The traditional problem asks how one can know about the mental lives
of others from evidence about behavior. The problem suggested by Kripke’s
postscript concerns how one can know about one’s own mental life from
evidence about events occurring in one’s own body. Here is the sensation,
throbbing away nicely in my tooth—the toothache. The evidence is that—to
borrow a phrase from Wittgenstein—“there is toothache.” How do I get from that
to a conclusion about my psychology, namely that I am feeling a toothache?

4.7 Hume on the self


As just emphasized, the puzzle of transparency is not a puzzle about the self. It is
accordingly quite different from Lichtenberg’s complaint against the cogito,
namely that Descartes is not entitled to the judgment I think, but only to It thinks.
Rather, the puzzle of transparency is that Descartes is (apparently) not entitled to I
think I am sitting before the fire, but only to I am sitting before the fire.23

22 For a classic critique, see Geach 1957.
23 Evans does think that transparency leads to a puzzle, but it concerns the self, and so is not the
puzzle of transparency:
For what we are aware of, when we know that we see a tree, is nothing but a tree. In
fact, we only have to be aware of some state of the world in order to be in a position to
make an assertion about ourselves.
Now this might raise the following perplexity. How can it be that we can have
knowledge of a state of affairs which involves a substantial and persisting self, simply
by being aware of (still worse, by merely appearing to be aware of) a state of the world?
(Evans 1982: 231)
“The anxiety will be lessened,” he says, “only by showing where the accounting is done—where the idea of the persisting empirical self comes from” (231). And then he tries to show where the idea comes from, the details of which need not concern us here.

Still, there is a close connection between the puzzle and Hume’s remarks about
the self in the Treatise (I.iv.6) (briefly mentioned in section 2.2.1); in particular,
his claim that
when I enter most intimately into what I call myself, I always stumble on some particular
perception or other, of heat or cold, light or shade, love or hatred, pain or pleasure. I never
can catch myself at any time without a perception, and never can observe any thing but the
perception. When my perceptions are remov’d for any time, as by sound sleep; so long am
I insensible of myself, and may truly be said not to exist. And were all my perceptions
remov’d by death, and cou’d I neither think, nor feel, nor see, nor love, nor hate after the
dissolution of my body, I shou’d be entirely annihilated, nor do I conceive what is farther
requisite to make me a perfect non-entity. If any one, upon serious and unprejudic’d
reflection thinks he has a different notion of himself, I must confess I can reason no longer
with him. All I can allow him is, that he may be in the right as well as I, and that we are
essentially different in this particular. He may, perhaps, perceive something simple and
continu’d, which he calls himself; tho’ I am certain there is no such principle in me.
But setting aside some metaphysicians of this kind, I may venture to affirm of the rest
of mankind, that they are nothing but a bundle or collection of different perceptions.
(Hume 1740/1978: 252)

For Hume, the data from which all theorizing must start concern “perceptions,”
which he divides into “impressions” and “ideas,” the latter having less “vivacity”
than the former (2). Impressions (more exactly, “impressions of sensation”) bear a
strong resemblance to the sense data of the twentieth century; the less vivacious
ideas are mental images. Thus when Hume looks at a duck and tries without
prejudice to describe what he is aware of, he will not mention the duck, but
rather a certain perception “aris[ing] in the soul originally, from unknown
causes” (7), namely an impression of a duck. Although this impression “may
exist separately” (252), it is contingently mind-dependent, because it is supposed
to be “remov’d . . . by sound sleep” or when Hume shuts his eyes.
The puzzle of transparency is nearby, but Hume’s mentalized conception of
the items of awareness obscures it, for two reasons. The first is basically
terminological. In English (including Hume’s eighteenth-century version), to perceive a
duck is, inter alia, to be aware of a duck. So a Humean “perception” of a duck
sounds like a kind of awareness of a duck. And, according to Hume, one is aware
of a “perception” when one looks at a duck. And if one is aware of one’s
awareness of the duck, Dretske’s question “What is it that you are aware of that
indicates that you are aware of it?” (see section 4.4) has an easy answer.


However, this is a misinterpretation, and the blame can be laid with Hume’s
misleading terminology. A sense datum is an object of awareness, and is not itself
an awareness; likewise for Hume’s “perceptions.” As Hume says (2, fn. 1),
“perceptions” are Locke’s “ideas,” and for Locke ideas are the objects we
immediately perceive, and are not themselves perceivings.
The second reason is more substantive. Although Humean perceptions are not
themselves awarenesses, Hume is quite confident that they are dependent on
awareness for their existence. In particular, this impression of a duck would not
be around if I were not aware of it, which (as we saw) again suggests an easy
answer to Dretske’s question.
In any event, Hume certainly sees no difficulty in discovering that he is aware
of a perception, and so cannot quite be credited with discovering the puzzle of
transparency. But the puzzle immediately pops out once Hume’s mythical
impressions are jettisoned in favor of familiar material objects like ducks. Since
the duck would be around if I were not aware of it, there should be no temptation
to dismiss Dretske’s question as trivial. And once the “uninterrupted existence”
(253) of the duck is granted, there is little reason to find the uninterrupted
existence of oneself mysterious—let alone to suppose that one is “a bundle or
collection” of material objects like ducks! There (looking straight ahead) is the
duck, and here (sensing the position of my limbs, the temperature of my
extremities, seeing part of my nose and hands, and so forth) am I, a certain
human—albeit not Humean—animal. The existence of that human animal and
the duck is unproblematic—what is problematic is how I know that the former is
aware of the latter.

4.8 Introspective psychology


Experimental psychology began in the late nineteenth century, and introspection
was a major part of its methodology until behaviorism began its ascent early in
the next century. Introspection was thought of as inner sense; for instance,
E. B. Titchener, one of the leading figures in the introspectionist movement,
characterizes it as

[the] observation by each man of his own experience, of mental processes which lie open
to him but to no one else. (Titchener 1899: 27)

A properly scientific astronomy requires practiced observers of the heavens;
similarly, Titchener thought, a properly scientific psychology requires practiced
observers of the mind. Titchener accordingly gave his experimental subjects
extensive training in introspection, involving many thousands of trials.

Is this really introspective training? That should seem quite doubtful: recall
Moore’s observation (quoted in section 1.2) that “[w]hen we try to introspect the
sensation of blue, all we can see is the blue” (1903: 450). The psychologist Knight
Dunlap made essentially the same point a few years later, in a classic paper, “The
case against introspection,” criticizing introspective psychology:
I may observe, or be aware of, a color, an odor, or any other sensation (sense datum);
I may be aware of relations and feelings; I may be aware of any combination of these;
but . . . I am never aware of an awareness. (Dunlap 1912: 410)

(Note that Dunlap is using ‘sensation’ in a different way from Moore. In Dunlap’s
terminology, the color blue is an example of a sensation. A Moorean “sensation of
blue” is what Dunlap would call ‘an awareness of blue.’)
Indeed, Titchener’s methods illustrate the elusiveness of the mental. As in any
psychophysics lab, the “introspectors” were attending to (apparent) external stimuli
and their (apparent) properties—sounds, color boundaries, moving spots, and so
forth: they were extrospecting, not introspecting.24, 25
Fine, there’s no such thing as introspection, at least as Titchener conceived of
it. But, Dunlap realized, this might be thought problematic:
The possible objection to the statement just made, and probably . . . the logical foundation
of the ‘introspection’-hypothesis, is as follows: If one is not aware of awareness, he does
not know that it exists. If one denies that he is ever aware of a thing, and that any one else is
ever aware of it, he has no right to say that there is such a thing. (Dunlap 1912: 410–11)

Dunlap himself was not impressed with this “possible objection,” immediately
declaring that “[t]he force of this argument is purely imaginary.” But Russell was:

24
Dunlap, incidentally, had an interesting diagnosis of Titchener’s error:
When one observes some ‘external’ object, as for instance sound, there are simultan-
eously present a number of other objects which are intimately connected with the
observing of the sound . . . muscular sensations from the tympanum, neck, breast, and
other regions; the visual ‘images’; the feelings; the visceral sensations . . . the attention
may be turned to these accessory facts, and the importance of the auditory sensation
may be secondary. In this case, there seems to be a turning of the attention from the
‘outer’ fact (the sound) to the ‘inner’ facts. These facts are ‘inner’ in that they concern,
or are constituents, of the body, or objective self. By a rather natural step, accordingly,
these inner facts are taken to be the process of observing the sound. Observation of
them is therefore the process of observing the process of observing the sound—
introspection. (Dunlap 1912: 411)
25
See Schwitzgebel 2004: 64–5. After pointing out that the “introspectors” are, in one kind of
training procedure, attending to tones, Schwitzgebel then says that “Titchener’s procedure qualifies
as introspective training . . . not because reporting such tones is necessarily an introspective act but
because for the person antecedently interested in introspectively attending to her own auditory
experience, the training provides a way of identifying and labeling one aspect of it” (65). In effect,
Schwitzgebel is here assuming that the puzzle of transparency has a “straight solution.”
[T]he paradox cannot be so lightly disposed of. The statement “I am aware of a colour” is
assumed by Knight Dunlap to be known to be true, but he does not explain how it comes
to be known. The argument against him is not conclusive since he may be able to show
some valid way of inferring our awareness. But he does not suggest any such way. There is
nothing odd in the hypothesis of beings which are aware of objects, but not of their own
awareness; it is, indeed, highly probable that young children and the higher animals are
such beings. But such beings cannot make the statement “I am aware of a colour,” which
we can make. We have, therefore, some knowledge which they lack. It is necessary to
Knight Dunlap’s position to maintain that this additional knowledge is purely inferential,
but he makes no attempt to show how the inference is possible. It may, of course, be
possible, but I cannot see how. To my mind the fact (which he admits) that we know there
is awareness, is all but decisive against his theory, and in favour of the view that we can
be aware of an awareness. (Russell 1921/95: 93–4)

Suppose Dunlap is looking at a blue patch. According to Dunlap, he is aware of
the blue patch, and the color blue, but he is not aware of his awareness of blue, or
of his awareness of the patch. So, Russell rightly points out, Dunlap has to “show
some valid way of inferring” that he is aware of blue. Although Russell doesn’t
explicitly say so, by Dunlap’s lights the only available premises concern the patch:
it’s blue, in front of me, and the like. Given these premises, it is hard to see, as
Russell says, how “this additional knowledge is purely inferential.” That is the
puzzle of transparency.

5
Belief

That he believes such-and-such, we gather from observation of his person,
but he does not make the statement ‘I believe . . . ’ on grounds of observation
of himself. And that is why ‘I believe p’ may be equivalent to the assertion
of ‘p’.
Wittgenstein, Remarks on the Philosophy of Psychology

5.1 Introduction
First, a brief recap is in order. Chapter 1 introduced the phenomena of privileged
and peculiar access, and pointed out their independence. Chapter 2 argued
that the inner-sense theory can face down its chief critics. However, although it
promises an account of peculiar access, the theory has some attendant puzzles—
for example, it is unclear how the inner-sense theory could explain privileged
access. Chapter 3 canvassed three recent approaches to self-knowledge, each
advertised as rejecting any kind of inner sense, and found them all wanting.
Chapter 4 examined the puzzle of transparency.
This book will eventually offer a unified and economical theory of self-
knowledge that explains both privileged and peculiar access. The first step toward
doing that is to solve the puzzle of transparency for belief (section 5.2). That
solution will lead to an economical theory of knowledge of what we believe and
know that explains both privileged and peculiar access (section 5.3); sections
5.4–5.6 extend the account and consider some objections.

5.2 The puzzle of transparency revisited


The puzzle of transparency for belief is this: how can one come to know that one
believes that p by inference from the premise that p? The relevant transparent
pattern of reasoning can be illustrated with this instance of Gallois’s doxastic
schema:
Argument B
Evans wrote The Varieties of Reference.
I believe that Evans wrote The Varieties of Reference.
Depending on where the emphasis is placed, the puzzle may take three slightly
different forms. First, the emphasis might concern reliability: how can reasoning
in accord with the pattern of Argument B yield reliably true beliefs? After all,
there are countless propositions that I don’t believe, and so countless arguments
of the same pattern with a false conclusion. Evans would still have written
Varieties even if I hadn’t believed he wrote Varieties, so the fact that he did
write Varieties is hardly a reliable indication that I believe that he did.
Second, the emphasis might concern evidence: that Evans wrote The Varieties
of Reference is, by itself, inadequate evidence for the hypothesis that I believe
that Evans wrote The Varieties of Reference; yet it is my total relevant evidence,
or so we may suppose. But how can knowledge be based on inadequate (total)
evidence? To repeat Moran’s question from section 4.3: “how can a question
referring to a matter of empirical psychological fact about a particular person be
legitimately answered without appeal to the evidence about that person, but
rather by appeal to a quite independent body of evidence?” (Moran 2001: 413).
Third, the emphasis might concern reasoning through a false step. If the
transparency procedure can yield knowledge at all, presumably it can yield
knowledge in the special case where the premise is false. By following the
transparency procedure, I can come to know that I believe that Dretske wrote
Authority and Estrangement, say. I have reasoned to this true conclusion by
appeal to the false premise that Dretske wrote Authority and Estrangement. And
how can knowledge be based on reasoning through a false step? (See section 4.1.)
These three variants of the first puzzle will be addressed in order. So far, we
have been putting the puzzle in Gallois’s way, in terms of arguments. It will help
to recast the puzzle in terms of following a rule, specifically an epistemic rule. This
apparatus of epistemic rules needs to be explained first.

5.2.1 Epistemic rules and BEL


The psychological process of reasoning (or inferring) can extend one’s knowledge.
Initially Holmes knows that Mr. Orange has been shot, that Mr. Pink has an
alibi, that Mr. White’s fingerprints are on the gun, and so on; reasoning from
this evidence may result in Holmes knowing that Mr. White committed the murder.1

1
An assumption of this book is that the pertinent kind of reasoning is relatively undemanding: it
can occur without self-knowledge, and without an appreciation of one’s evidence or reasons as
evidence or reasons. It is thus not “critical reasoning” in the sense of Burge 1996: “reasoning guided
Holmes’s reasoning to the conclusion that Mr. White killed Mr. Orange is
complex, and his methods resist easy summary. Presumably Holmes’s reasoning
is in some sense rule-governed, but it is not clear how to identify the rules. On the
other hand, some reasoning is considerably simpler. For example, Mrs. Hudson
might hear the doorbell ring, and conclude that there is someone at the door.
By perception, specifically by hearing the doorbell ring, Mrs. Hudson knows
that the doorbell is ringing; by reasoning, she knows that there is someone at
the door.
It is natural to say that Mrs. Hudson acquires knowledge of her visitors by
following a simple recipe or rule. If we say that an epistemic rule is a conditional
of the following form:
R If conditions C obtain, believe that p,
then the epistemic rule that Mrs. Hudson follows is:
DOORBELL If the doorbell rings, believe that there is someone at the door.
Since judging is the act that results in the state of belief, perhaps the consequent is
better put as ‘judge that p.’2 This is simply a stylistic or presentational issue,
however. The linguistic formulation of the rule only plays a heuristic role—all the
work is done by the account of following a rule.
So, what does it mean to say that Mrs. Hudson follows this rule on a particular
occasion? Let us stipulate, not unnaturally, that she follows the rule just in case
she believes that there is someone at the door because she recognizes that the
doorbell is ringing. The ‘because’ is intended to mark the kind of reason-giving
causal connection that is often discussed under the rubric of ‘the basing relation.’
Mrs. Hudson might recognize that the doorbell is ringing, and believe that there
is someone at the door for some other reason: in this case, she does not form her
belief because she recognizes that the doorbell is ringing.
So S follows the rule R (‘If conditions C obtain, believe that p’) on a particular
occasion iff on that occasion:
(i) S believes that p because she recognizes that conditions C obtain;
which implies:
(ii) S recognizes (hence knows) that conditions C obtain;

by an appreciation, use, and assessment of reasons and reasoning as such” (73). Sometimes
‘reasoning’ is used much more restrictedly for something akin to critical reasoning, for instance in
Mercier and Sperber 2011: 57.
2
Cf. Byrne 2005: 102, n. 22, Shoemaker 2011: 240. See also fn. 22 below.
(iii) conditions C obtain;
(iv) S believes that p.
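Since the notions of an epistemic rule and of following one will carry much of the weight in what follows, a toy model may help fix ideas. The sketch below is merely illustrative: the names Rule, Agent, and follows, and the pretense that propositions, recognitions, and basing relations can be represented by strings, sets, and a dictionary, are stipulations of the model, not part of the account. It simply encodes conditions (i)-(iv) above.

    # Toy model of epistemic rules and rule-following (illustrative only).
    from dataclasses import dataclass, field

    @dataclass
    class Rule:
        condition: str   # the antecedent C: "conditions C obtain"
        conclusion: str  # the consequent p: "believe that p"

    @dataclass
    class Agent:
        recognizes: set = field(default_factory=set)  # propositions S recognizes (hence knows)
        believes: set = field(default_factory=set)    # propositions S believes
        basis: dict = field(default_factory=dict)     # belief -> the proposition it is based on

    def follows(agent: Agent, rule: Rule, world: set) -> bool:
        """Conditions (i)-(iv): S believes p because she recognizes that C obtains,
        S recognizes (hence knows) that C obtains, C obtains, and S believes p.
        (Nothing in the toy model enforces that recognized propositions are true;
        real recognition is factive.)"""
        return (
            agent.basis.get(rule.conclusion) == rule.condition  # (i) basing relation
            and rule.condition in agent.recognizes              # (ii) recognition
            and rule.condition in world                         # (iii) C obtains
            and rule.conclusion in agent.believes               # (iv) S believes p
        )

    # Example: Mrs. Hudson following DOORBELL.
    DOORBELL = Rule("the doorbell rings", "there is someone at the door")
    world = {"the doorbell rings", "there is someone at the door"}
    hudson = Agent(
        recognizes={"the doorbell rings"},
        believes={"the doorbell rings", "there is someone at the door"},
        basis={"there is someone at the door": "the doorbell rings"},
    )
    assert follows(hudson, DOORBELL, world)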
There should be no temptation to think that rule-following is self-intimating:
one may follow a rule without realizing that this is what one is doing. Indeed,
presumably many non-human animals are permanently in this predicament.3
Following DOORBELL tends to produce knowledge about one’s visitors (or so we
may suppose), and is therefore a good rule. Following bad rules tends to produce
beliefs that do not amount to knowledge; an example of such a rule is:
NEWS If the Weekly World News reports that p, believe that p.
NEWS is also an example of a schematic rule. One follows a schematic rule just in
case one follows a rule that is an instance of the schematic rule; a schematic rule is
good to the extent that its instances are.
If the antecedent conditions C of an epistemic rule R do not require evidence
about the rule-follower’s mental states in order to be known, R is neutral.
A schematic rule is neutral just in case some of its instances are. Thus, the
claim that S can follow a neutral rule does not presuppose that S has the capacity
for self-knowledge. DOORBELL and NEWS are neutral rules; ‘If you intend to go
swimming, believe that you will get wet’ is not.4
Self-knowledge is our topic, not skepticism: knowledge of one’s environment
(including others’ actions and mental states) and reasoning (specifically, rule-
following of the kind just sketched) can be taken for granted. So, in the present
context, it is not in dispute that we follow neutral rules, including neutral rules
with mentalistic fillings for ‘p,’ like ‘If S has a rash, believe that S feels itchy’;
neither is it in dispute that some neutral rules are good rules.
Let us concentrate for the moment on applications of the transparency pro-
cedure where one knows the premise that p: for example, one knows that it’s
raining and thereby believes that one believes that it’s raining. (This restriction
will be relaxed when the third variant of the puzzle of transparency is addressed,
in section 5.2.5.) Then the Evans-inspired suggestion to be defended can be put
using the apparatus of epistemic rules as follows. Knowledge of one’s beliefs may
be (and typically is) obtained by following the neutral schematic rule:

BEL If p, believe that you believe that p.

3
According to Moran, “Following a rule for belief . . . requires . . . some understanding of, and an
endorsement of, the rational connection between the contents mentioned in the rule” (2011: 227); at
least in the stipulated sense of ‘rule-following’ relevant here, it does not.
4
‘You’ refers to the rule-follower; tenses are to be interpreted so that the time the rule is followed
counts as the present.
5.2.2 Evans
Evans’ statement of his own view has its shortcomings. Summarizing the trans-
parency procedure, he writes:

We can encapsulate this procedure for answering questions about what one believes in the
following simple rule: whenever you are in a position to assert that p, you are ipso facto in
a position to assert ‘I believe that p’. (Evans 1982: 225–6)

One minor problem is that Evans’ talk of assertion gives the transparency proced-
ure an unwanted linguistic cast: the procedure should simply deliver knowledge,
whether or not the subject has the linguistic resources to express it. But the main
problem is that on a natural interpretation of what counts as following Evans’
“simple rule,” it presupposes what it is supposed to explain. On that natural
interpretation, one follows the rule just in case one verifies that one is in a position
to (permissibly) assert that p, and thereby concludes that one is in a position to
assert that one believes that p. Am I in a position to assert that Evans wrote The
Varieties of Reference? Glancing outward to the bookshelf, I note that Evans did, in
fact, write The Varieties of Reference. Fine, but that fact by itself does not put me
in a position to assert it: a benighted hermit, ignorant of the contemporary
philosophical canon, has no business asserting that Evans wrote Varieties. To be
in a position to (permissibly) assert that Evans wrote Varieties, plausibly I must
know that Evans wrote Varieties (see section 2.2.5); even more plausibly, I must
believe that Evans wrote Varieties. So, do I believe that Evans wrote Varieties?
Answering this is a precondition of following Evans’ rule, rather than an outcome
of it. In the terminology of section 5.2.1, Evans’ rule is not neutral.

5.2.3 First variant: reliability


Let us state the puzzle of transparency in the terminology of epistemic rules.
Knowledge of what one believes is often apparently the result of following the
neutral schematic rule BEL:
BEL If p, believe that you believe that p.
Yet surely this is a bad rule; in other words, following BEL tends to produce false
and unjustified beliefs. Putting it in terms of the first (reliability) variant of the
puzzle of transparency: that p is the case does not even make it likely that one
believes that it is the case.
Consider an analogous problem that could be raised for the “rule of necessita-
tion” in modal logic. According to this rule, if a sentence ‘p’ is a line of a proof,
one may write down the necessitation of ‘p,’ ‘□p,’ as a subsequent line. Artificially
forcing this into a format similar to that of “epistemic rules,” the rule of
necessitation becomes:
NEC If ‘p’ is a line, you may write ‘□p’ as a subsequent line.
NEC, it seems, does not preserve truth, and so—in an extended sense—is a “bad”
rule. It doesn’t follow from the fact that the cat is indoors that necessarily the cat
is indoors. The cat’s being indoors doesn’t even make it likely that this state of
affairs could not have been otherwise.
But, of course, the rule of necessitation is not a bad rule. In fact, it’s a
necessarily truth-preserving rule. The reason is that—assuming that the only
initial premises of a proof are axioms—whenever one is in a position to follow the
rule by writing down ‘□p,’ ‘p’ is a necessary truth. The axioms of a system of
modal logic are themselves necessary truths, and whatever follows from them by
the other rules are also necessary truths. So whenever one is in circumstances in
which the rule applies—whenever, that is, one is confronted with a proof whose
initial premises are axioms—every line of the proof is a necessary truth. If the
allowable substituends for ‘p’ include sentences about the location of cats, then
the rule of necessitation is a bad rule. But if (as intended) it is kept within the
confines of modal logic, the rule is perfectly good.5
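The contrast can be put schematically. The following display is just a standard proof-theoretic way of stating the point made in the last two paragraphs, and adds nothing to the argument: necessitation is a rule of proof, applicable only to theorems, whereas the corresponding conditional, read as an axiom schema, is invalid.

    \[
    \frac{\vdash p}{\vdash \Box p}
    \quad (\text{NEC: from a theorem } p \text{, infer } \Box p)
    \qquad\qquad
    p \supset \Box p
    \quad (\text{as an axiom schema: invalid})
    \]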
Something similar holds for BEL. One is only in a position to follow BEL by
believing that one believes that p when one has recognized that p. And recogniz-
ing that p is (inter alia) coming to believe that p. BEL is self-verifying in this sense
(with a qualification to be immediately noted): if it is followed, then the resulting
second-order belief is true. The qualification: it is possible that one’s belief that
p could vanish once one has inferred that one believes that p, in
which case one’s second-order belief will be false. But although changing one’s
mind in the middle of a long chain of reasoning is a real phenomenon, the chain
of reasoning in this case is so short that the qualification doesn’t amount to much,
and can safely be ignored.6, 7

5
More exactly, simple modal logic that appears in introductory texts. If indexicals like ‘I’ and
‘actually’ are introduced into the language, allowing certain contingencies to be proved (e.g. ‘I exist,’
‘p iff actually p’), the rule of necessitation does not preserve truth. See Kaplan 1989: 509, 538–40.
6
Accordingly this qualification will often be suppressed in what follows.
7
The analogy between the rule of necessitation and BEL can be pushed too far, but there is
another instructive similarity. It is a mistake to think that the rule of necessitation is equivalent to the
(invalid) axiom schema ‘p ⊃ □p’ plus modus ponens. (Relatedly, see Humberstone 2009 on Smiley’s
distinction between “rules of proof” and “rules of inference”: the rule of necessitation is a Smileyan
rule of proof but not—unlike modus ponens—a rule of inference.) Likewise, it is a mistake to think
of following BEL as equivalent to (falsely) assuming that for all P, if P is true then one believes P,
which would make one’s reasoning from the premise that it’s raining to the conclusion that one
believes that it’s raining demonstrative. (See also the discussion of Valaris in fn. 22 below.)
Compare:
DOORBELL If the doorbell rings, believe that there is someone at the door.
This rule is not self-verifying: one may recognize that the doorbell is ringing, and
thereby falsely conclude that someone is at the door.
Comparison with a third-person version of BEL is also instructive:
BEL-3 If p, believe that Fred believes that p.
For someone who is not Fred, the result of following BEL-3 may be (indeed, is very
likely to be) a false belief about Fred’s beliefs: far from being self-verifying, BEL-3
is a bad rule. The seductiveness of the puzzle of transparency, in its reliability
variant, essentially trades on ignoring the crucial distinction between BEL-3
and BEL.
Hence worries that the transparency procedure is hopelessly unreliable could
hardly be more misplaced. Following BEL guarantees (near enough) that one’s
beliefs about one’s beliefs are true.
5.2.4 Second variant: inadequate evidence
Suppose one follows BEL and thereby comes to believe that one believes that it’s
raining. One has reasoned from the fact that it’s raining to the conclusion that
one believes that it’s raining. Since BEL is self-verifying, that conclusion is true.
Following customary practice, let us say that item of evidence P is evidence
for Q iff the probability of Q conditional on P is greater than the probability
of Q. Equivalently, P is evidence for Q iff the probability of Q conditional on P is
greater than the probability of Q conditional on not-P.8 Is it more likely that one
believes that it’s raining, given that it’s raining, than that one believes that it’s raining,
given that it isn’t raining? In this particular case, yes: rain tends to make its
presence felt. So one has evidence for the hypothesis that one believes that it’s
raining: the proposition that it’s raining raises the chance that one believes that
it’s raining. However, it undeniably doesn’t raise it by much: that it’s raining is
weak or inadequate evidence for the hypothesis that one believes that it’s raining.
And typically when one reaches a true conclusion on the basis of weak evidence,
one does not know the conclusion. So—even granted that BEL is reliable—how
can it produce knowledge?
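To make the worry concrete, consider some toy numbers; they are invented purely for illustration, and nothing in the argument turns on them. Suppose that:

    Prob(one believes that it’s raining | it’s raining) = 0.6
    Prob(one believes that it’s raining | it isn’t raining) = 0.05

On these figures the premise that it’s raining is indeed evidence, in the sense just defined, that one believes that it’s raining: the first probability exceeds the second. But evidence of this strength falls well short of what would ordinarily warrant outright belief in the hypothesis, let alone knowledge of it. That is the sense in which the premise is weak or inadequate evidence for the conclusion.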

8
Prob(Q|P) > Prob(Q|~P) iff Prob(Q&P)/Prob(P) > Prob(Q&~P)/Prob(~P) (by the usual
definition of conditional probability, assuming 0 < Prob(P) < 1). Since Prob(Q&~P) =
Prob(Q) - Prob(Q&P) and Prob(~P) = 1 - Prob(P), the right-hand side of this biconditional
is equivalent to: Prob(Q&P)/Prob(P) > (Prob(Q) - Prob(Q&P))/(1 - Prob(P)); which, cross-multiplying,
is equivalent to: Prob(Q&P) - Prob(Q&P)Prob(P) > Prob(Q)Prob(P) - Prob(Q&P)Prob(P); which is
equivalent to: Prob(Q&P) > Prob(Q)Prob(P); which is equivalent to: Prob(Q|P) > Prob(Q).
Consider a representative example of reaching a true but unknown conclusion
on the basis of inadequate evidence. One hears Jones proclaiming that Fords are
excellent cars. Given one’s background evidence, the fact that Jones made that
announcement is weak evidence for the hypothesis that Jones owns a Ford.
It suggests that the hypothesis, if of interest, should be explored further, but the
evidence does not warrant outright belief. Still, one blunders on and concludes
that Jones owns a Ford. In fact, Jones does own a Ford. One truly believes, but
does not know, that Jones owns a Ford.
It would be a mistake immediately to draw the moral that a necessary
condition for a belief to amount to knowledge is that it not be inferred from
inadequate evidence. In the example, knowledge is absent and a certain condition
(inference from inadequate evidence) is present. It is obviously consistent with
this that there are other cases where both knowledge and the condition are
present. Further, there is another appealing diagnosis of why one fails to know
that Jones owns a Ford, namely that one’s belief could easily have been false. That
is, in the terminology of Sosa 1999 and Williamson 2000: ch. 5, one’s belief is not
safe. It could easily have happened, say, that Jones praised Fords because he was
carless and wanted a shiny new Mustang. Since—as will be argued in some detail
in section 5.3.2—following BEL does produce safe beliefs, this diagnosis does not
threaten the idea that BEL is a good rule.
Some knowledge is non-inferential, otherwise an implausible regress threatens.
Sometimes one knows P, but did not infer P from anything, still less from any
evidence. Perceptual knowledge provides an obvious example: on the face of it,
one can come to know by vision that there is a blue triangle ahead without
inference, and so not on the basis of any evidence. If one’s self-knowledge is the
result of following BEL, then one’s knowledge is inferential; in that respect it is
unlike basic perceptual knowledge. Its status as knowledge, however, is not
explained in terms of inference from evidence; in that respect, it is like basic
perceptual knowledge. The present variant of the puzzle of transparency essen-
tially draws attention to this fact. If one knows that one believes that it’s raining
by following BEL, then one’s evidence—the fact that it’s raining—does not explain
why one’s second-order belief amounts to knowledge. This is no reason for
concluding that one does not know that one believes that it’s raining, because
on independent grounds it is plausible that some knowledge is not explained in
terms of inference from evidence.
5.2.5 Third variant: reasoning through a false step
Suppose, as before, that one follows BEL and thereby comes to believe that one
believes that it’s raining. Since one has followed BEL, one has realized (hence,
come to know) that it’s raining, and thereby concluded that one believes that it’s
raining. One reasons from the fact that it’s raining, and thus not through a false
step. The transparency procedure is therefore only partially characterized by
following BEL, because the procedure is supposed to apply when one directs
one’s eye outward and gets the layout of the external world wrong. To cover all
the cases, more terminology is needed.
Say that S tries to follow rule R iff S believes that p because S believes that
conditions C obtain. That S follows R entails that she tries to follow R, but not
conversely. Sometimes one will try to follow BEL without actually following it. For
instance, suppose it isn’t raining. One sees the cat come in soaking wet, falsely
concludes that it’s raining, and then tries to follow BEL, concluding that one
believes that it’s raining.
BEL is self-verifying because, if one follows it, one knows that p, and hence one
believes that p. (With the qualification that one could in principle change one’s
mind mid-inference, noted in section 5.2.3.) So one’s second-order belief is true.
For similar reasons, BEL is also strongly self-verifying: if one tries to follow it, one’s
second-order belief is true. (Again, with the qualification just mentioned.) If
one tries to follow BEL, one may not know that p, but one does believe that p.
Hence there is no reliability version of the first puzzle that trades on the
difference between following BEL and merely trying to follow it. The reliability
worry is similarly assuaged in both cases: whether one follows or merely tries to
follow BEL, the truth of one’s second-order belief is (near-enough) guaranteed.
With this terminology in hand, the third variant of the first puzzle is this.
When one merely tries to follow BEL, one reasons through a false step. Because
BEL is strongly self-verifying, one’s second-order belief is true. However, typically
when one reaches a true conclusion by reasoning through a false step, one does
not know the conclusion. So how can merely trying to follow BEL produce
knowledge?
The reply to this third variant of the puzzle is basically the same as the reply to
the second. Consider one of Gettier’s original examples of a true (justified) belief
reached by reasoning through a false step. One sees Jones get out of a Ford.
In fact, this is a rented car, but one has good evidence that this is the car he owns,
and concludes (rightly, as it happens) that Jones owns a Ford.
As in the previous example of inference from inadequate evidence, it would be a
mistake immediately to draw the moral that a necessary condition for a belief to
amount to knowledge is that it not be reached through a false step. And, again as in
the previous example, there is another appealing diagnosis of why one fails to know
that Jones owns a Ford, namely that one’s belief could easily have been false. It
could easily have happened, say, that Jones owned a Toyota, despite renting a Ford.
Since trying to follow BEL produces safe beliefs (see section 5.3.1), this diagnosis
does not threaten the idea that trying to follow BEL produces knowledge.9
If one’s self-knowledge is the result of trying to follow BEL, then one’s know-
ledge is inferential; in that respect it is unlike basic perceptual knowledge. Its
status as knowledge, however, is not explained in terms of inference from
evidence; in that respect, it is like basic perceptual knowledge. And the two
respects are not in conflict—indeed, the self-verifying nature of the inference at
least partly explains why it yields knowledge that is not based on evidence.

5.3 Peculiar and privileged access explained


The argument so far has been largely defensive. The puzzle of transparency
threatens to undermine the plausible hypothesis that one’s typical route to
knowledge of what one believes is to try to follow BEL; henceforth, the transpar-
ency account.10 Once the puzzle (in its three variants) is defused, there is no
evident obstacle to taking BEL to be a good rule, and so no evident obstacle to
accepting the transparency account. (More obstacles will be circumvented at the
end of this chapter.)
Assume, then, the transparency account is correct. Does it explain peculiar and
privileged access?
5.3.1 Peculiar access
Recall the explanation of peculiar access offered by the inner-sense theory
(section 2.3). The mechanism of inner sense is sealed in one’s own body and
has no transducers that respond to the external environment; it is therefore quite
ill-suited to detect others’ mental states. (A similar explanation can be given of
the peculiar access we have to the position of our own limbs.)
The explanation offered by BEL might seem to be quite different. Recall the
third-person version of BEL (section 5.2.3):
BEL-3 If p, believe that Fred believes that p.
This rule is no good at all: assuming you are not Fred, it would be most unwise to
conclude from the premise that you are wearing odd socks, say, that Fred believes

9
This reply can be strengthened by noting that there are other examples, quite different from
BEL-style reasoning, where knowledge can seemingly be obtained by inference from a false premise:
see, in particular, Warfield 2005. (Since these cases also count as knowledge by inference from
inadequate evidence, they also bolster the reply to the second variant.)
10
‘The transparency account’ will gradually expand in extension as more mental states are
argued to yield to a transparency-style inference.
that you are. The transparency method clearly only works in one’s own case, so
peculiar access drops out as an immediate consequence.
But why does it only work in one’s own case? When you conclude that you
believe that p from the premise that p, there is a causal transition between two
states you are in: believing that p, and believing that you believe that p. The second
belief state is true if you are in the first state. And since the transparent inference
guarantees you are in the first state,11 this method is a highly reliable way of
forming true beliefs. In the terminology of section 5.2.5, BEL is strongly self-
verifying. Suppose now you conclude that Fred believes that p from the premise
that p, thus effecting a causal transition between believing that p and believing
that Fred believes that p. Science fiction aside, your belief about Fred is responsive
to what you believe, not to what Fred believes. Inference is a causal process
involving a single subject’s mental states, which is why the transparency proced-
ure is quite ill-suited to detect others’ mental states. The explanations of peculiar
access offered by the inner-sense theory and the transparency account are thus
fundamentally the same.

5.3.2 Privileged access


What about privileged access? At a minimum, we need to show that BEL is
significantly better—more knowledge-conducive—than rules whose consequents
concern others’ mental states.
Recall:
DOORBELL If the doorbell rings, believe that there is someone at the door.
Following this rule (we may suppose) will deliver beliefs that are about as likely to
amount to knowledge as our beliefs about others’ mental states. So for simplicity
DOORBELL can go proxy for a good rule whose consequent concerns others’ mental
states. In what ways is BEL better than DOORBELL?
One immediate advantage of BEL over DOORBELL is that the former but not the
latter is self-verifying. Suppose one follows DOORBELL, and so knows that the
doorbell is ringing and believes that there is someone at the door. One’s belief
that there is someone at the door is probably true, but it may be false. Suppose
one also follows BEL: in particular, one recognizes that the doorbell is ringing and
thereby believes that one believes that the doorbell is ringing. Because BEL is self-
verifying, the truth of one’s second-order belief is guaranteed.
Suppose there is someone at the door, and so the belief produced by following
DOORBELL is true—how likely is it to be knowledge? The notion of safety

11
With the qualification noted in section 5.2.3.
(mentioned in section 5.2.4) is useful here. To a first approximation, one’s belief
that p is safe just in case one’s belief could not easily have been false.12 Safety is a
plausible necessary condition for knowledge. Further, provided it is emphasized
that the relevant sense of ‘could not easily have been false’ cannot be elucidated in
knowledge-free terms, there is no obvious reason to suppose that it is not also
sufficient.13 If we take the relevant sense of ‘could not easily have been false’ to be
illustrated by paradigm examples of knowledge, we can use safety as a rough-and-
ready diagnostic tool for the presence of knowledge. Could one easily have been
wrong about the presence of a visitor? The ways in which one could have falsely
believed that there is someone at the door can be classified into three types:
Type I: not-p, and one falsely believes that conditions C obtain, thereby
believing that p. Perhaps the sound made by a passing ice cream truck might
have been mistaken for the ringing of the doorbell, leading to the false belief
that there is someone at the door.
Type II: not-p, and one truly believes that conditions C obtain, thereby
believing that p. Perhaps a wiring defect might have caused the doorbell to
ring, leading to the false belief that there is someone at the door.
Type III: not-p, and one believes that p, but not because one knows or believes
that conditions C obtain. Perhaps too much coffee might have led one to
believe that there is someone at the door, even if the stoop had been deserted.
By hypothesis, there is someone at the door. Also by hypothesis, one follows
DOORBELL, which entails one knows that the doorbell is ringing. Hence one could
not easily have been wrong about that, and so Type I errors are remote possibil-
ities. And, given certain assumptions that will obtain in many realistic cases
(the doorbell has no wiring defects, the coffee is not that psychoactive, etc.), Type
II and Type III errors are also remote possibilities and could not easily have
happened. However, in other realistic cases these errors are nearby possibilities,
and hence one’s true belief that there is someone at the door will not be knowledge.
Consider now the belief that one believes that there is someone at the
door; could one easily have been wrong? It is not possible to make a Type
I error: one cannot falsely believe that the doorbell is ringing without believing
that the doorbell is ringing. Type II errors are likewise ruled out: one cannot truly
believe that the doorbell is ringing without believing that the doorbell is ringing.

12
This is one of Williamson’s formulations. For simplicity, situations that could easily have
obtained in which one does not falsely believe that p but rather falsely believes something else will be
ignored. See Williamson 2000: 101–2, and also the discussion of Smithies and Stoljar in fn. 22 below.
13
See Williamson 2009a: 9–10, and 2009b: 305–6.
If one follows BEL, only Type III errors are a threat to one’s knowledge: perhaps
too much coffee would have led one to believe that one believes that the doorbell
is ringing, even if one had not believed that the doorbell is ringing. With the
modest assumption that Type III errors are no more likely when following BEL than
when following DOORBELL, the true beliefs produced by following BEL are more
likely to amount to knowledge than the true beliefs produced by following
DOORBELL.
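The structural point, that following BEL leaves only Type III errors in play while DOORBELL is exposed to all three, can be checked mechanically in a toy model. The sketch below is illustrative only; in particular, it builds in the self-verifying character of BEL (setting aside changes of mind) as an explicit assumption rather than deriving it.

    # Illustrative toy model (not from the text): classify the ways a rule-generated
    # belief could be false, in the terms of Types I-III above. A scenario is fixed by
    # whether the rule's condition C obtains, whether the agent believes that C obtains,
    # and whether the conclusion p is true.
    from itertools import product

    def error_type(c_obtains, believes_c, p_true):
        """Classify a scenario in which the agent believes that p. For simplicity,
        assume that if she believes that C obtains, her belief that p is based on it."""
        if p_true:
            return None            # no error: the conclusion-belief is true
        if believes_c and not c_obtains:
            return "Type I"        # not-p; the agent falsely believes that C obtains
        if believes_c and c_obtains:
            return "Type II"       # not-p; the agent truly believes that C obtains
        return "Type III"          # not-p; the belief that p has some other source

    # DOORBELL: the truth of "there is someone at the door" is independent of the
    # agent's belief that the doorbell rings, so every combination is possible.
    doorbell = {error_type(c, b, p) for c, b, p in product([True, False], repeat=3)}

    # BEL: the conclusion is "I believe that p", and (setting aside changes of mind)
    # that conclusion is true whenever the agent believes the condition p. So the only
    # scenarios with a false conclusion are ones where she does not believe p.
    bel = {error_type(c, b, b) for c, b in product([True, False], repeat=2)}

    print(sorted(doorbell - {None}))   # ['Type I', 'Type II', 'Type III']
    print(sorted(bel - {None}))        # ['Type III']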
Sometimes one will not succeed in following DOORBELL because one believes but
does not know that the doorbell is ringing (maybe a passing ice cream truck induces
a false belief). Recall that S tries to follow rule R iff S believes that p because S believes
that conditions C obtain. That S follows R entails that she tries to follow R, but not
conversely. If one tries to follow DOORBELL but does not succeed, then one will not
know that there is someone at the door; if one’s belief about a visitor is true, that is
just an accident. The visitor could have easily been delayed, with the truck passing as
it actually did, in which case one would have falsely believed that there is someone at
the door. That is, a Type I error is a nearby possibility.
Sometimes one will not succeed in following BEL either: one will merely try to
follow it, and believe but not know that the doorbell is ringing. But one’s second-
order belief that one believes that the doorbell is ringing will be true. As before,
Type I and II errors are not possible. Hence this situation will be commonplace:
trying to follow BEL, one investigates whether p, mistakenly concludes that p, and
thereby comes to know that one believes that p. (In these cases, one will know that
one believes that p on the basis of no evidence at all.)
BEL, then, has considerable epistemic virtues, but it is important not to
overstate them. Consider the following quotation from Evans (continuing the
quotation given in section 5.2.2):
I get myself in a position to answer the question whether I believe that p by putting into
operation whatever procedure I have for answering the question whether p. (There is no
question of my applying a procedure for determining beliefs to something, and hence no
question of my possibly applying the procedure to the wrong thing.) If a judging subject
applies this procedure, then necessarily he will gain knowledge of one of his own mental
states: even the most determined sceptic cannot find here a gap in which to insert his
knife. (1982: 225)14

It is not clear what “this procedure” is (see section 5.2.2), but does the passage fit
the present Evans-inspired account? That is, is it true that following BEL (or trying to)
cannot fail to produce knowledge of one’s beliefs?

14
In the case of perception, which he contrasts with belief, Evans does deny that transparency
“produce[s] infallible knowledge” (1982: 228).
No, there is no guarantee that the beliefs produced by following BEL (or trying to)
will amount to knowledge: on rare occasions, Type III errors will be nearby
possibilities. However, BEL is significantly more likely to produce knowledge than
rules like DOORBELL. Privileged access is thereby explained.

5.4 Economy and detection


The theory of self-knowledge defended in this book is economical, inferential,
detectivist, and unified (section 1.4). The account for belief is clearly inferential.
The issue of unification will have to wait until Chapter 7. Is it economical and
detectivist?
Take economy first. Does the capacity to (try to) follow BEL solely draw on
epistemic capacities that are used in other domains?
Evans seems to have thought that his version of the transparency procedure
was economical, claiming that in the case of knowledge of one’s perceptual states it
“re-use[s] precisely those skills of conceptualization that [one] uses to make judg-
ments about the world” (1982: 227; quoted in section 1.2), which implies that those
“skills of conceptualization” are all that are needed. And in the case of belief, section
5.3.2 quoted Evans as saying I can “answer the question whether I believe that p by
putting into operation whatever procedure I have for answering the question
whether p” (1982: 225; emphasis added), which suggests economy even more
clearly. Evans did not appear to have BEL in mind, though (section 5.2.2).
We have a capacity for inference that is more or less indifferent to subject
matter: we can reason about the past and the future, the observed and the
unobserved, the mental and the material, the concrete and the abstract, and
often thereby increase our knowledge. More specifically, we have a capacity to
gain knowledge by reasoning from (strong) evidence. If the proposed inferential
account of knowledge of one’s beliefs involved inference from (strong) evidence,
then economy would be an immediate consequence. But, as discussed in
sections 5.2.4 and 5.2.5, it does not. If one can come to know that one believes
that p by inference from the premise that p, this knowledge is acquired either by
reasoning from (at best) weak evidence or from no evidence at all.
Not all reasoning is good reasoning, of course. Not infrequently, our patterns
of reasoning are (to borrow a memorable phrase from P. F. Strawson) “non
sequiturs of numbing grossness”: we affirm the consequent, commit the gam-
bler’s fallacy, indulge in wishful thinking, and so on. The fact that a pattern
of reasoning is bad does not mean that we avoid it. Similarly, that a pattern of
reasoning seems bad, on a little reflection, does not mean that we avoid it either.
A pattern of reasoning might strike a Logic 101 student as utterly hopeless, but
that does not mean that no one actually reasons in that way. BEL, in the guise of
Gallois’s doxastic schema, is actually a case in point. Bracketing the issue of
whether BEL is a good rule, it wouldn’t be especially surprising to find some
benighted souls following it. A touch of megalomania might lead one to think
that one believes all and only truths; more realistically, one might easily become
confused about the relation between its being true that p and one’s believing
that p, in effect treating them as equivalent by inferring one from the other.
After all, it requires some sophistication to see that the unacceptable Moore-
sentence ‘p and I don’t believe that p’ is actually consistent. Our capacity
for reasoning acknowledged on all sides brings in its train the capacity to
reason in accord with the doxastic schema. Hence the transparency account is
economical.
However, this is not economical enough for our purposes. Consider an
analogy. Reasoning from the premise that there are no Fs to the conclusion
that every F is G can seem bad—and did to Aristotle—even though contemporary
orthodoxy counts it valid. Imagine that Alexander disagrees with his teacher
Aristotle and claims that this sort of reasoning is knowledge-conducive. Alexan-
der has a (correct) theory about how we can come to know vacuous generaliza-
tions, namely by deduction from the corresponding negative existential premise.
Is his account economical? The mere fact that an inference seems bad does not
prevent us from making it, and—due perhaps to persuasion by Alexander
himself—an ordinary person surely could reason from the premise that there
are no centaurs to the conclusion that every centaur is three-legged. Our capacity
for reasoning acknowledged on all sides brings in its train the capacity to reason
from a negative existential premise to the corresponding vacuous generalization.
Hence Alexander’s account is economical.
To make the analogy closer, suppose now that Alexander claims not just that
the reasoning is good, but that it is widely employed. The problem for Alexander
is that the reasoning seems bad (even though it is actually good). Wouldn’t that
act as a natural check on our exercise of the capacity to reason in this way?
Perhaps some people exercise their capacity, but one should hardly expect nearly
everyone to exercise it.
The transparency account faces the same problem. All that has been shown
is that we have the capacity to (try to) follow BEL, or to reason in accord with
the doxastic schema. Since, on initial inspection, reasoning in accord with the
doxastic schema seems bad, there is a worry that our capacity will rarely be
exercised.
The first point to emphasize is that the transparency account purports to
explain knowledge of one’s beliefs, not knowledge of one’s inferences. Knowing
that one believes that p is one thing; knowing that one has drawn this conclusion
from the premise that p is quite another. There is little evidence that knowledge of
one’s inferences is especially easy to come by, and mistakes are to be expected.
This is all to the good, for supposing that one’s inferences are evident, the
transparency account, if true, must also be evident. Since the account is not
evident, if one’s inferences are evident then the account is false.
Suppose that the transparency account is correct, and that by reasoning
transparently I know that I believe that p. In addition to the first-order belief
that p, I now have the second-order belief that I believe that p. I apply the
transparency procedure again with the result that I also know I have the
second-order belief. That is, I know that I believe that I believe that p. We may
further suppose that I know that the onset of my second-order belief postdates
the onset of my first-order belief. By what process did my second-order belief
arise? By an Armstrongian self-scanning mechanism? By inference? By some
mysterious non-causal process? Theoretical reflection on Evans’ remarks may
suggest that the answer is inference, but introspection itself draws a blank here.
Imagine I think reasoning from the premise that p to the conclusion that I believe
that p is absurd. ‘So,’ I say, ‘since I obviously know that I believe that p, I can’t
have reached that conclusion by transparent reasoning.’ By hypothesis, I am
wrong. Despite my opinion that transparent reasoning is bad, I exercise my
capacity for it all the time. In other words, there is no natural check on our
exercise of the capacity to reason transparently; our inferential habits may be
hidden from us.
Unlike knowing vacuous generalizations, self-knowledge really is useful, par-
ticularly so when interacting with others. For example, suppose that Alexander
would have Aristotle killed if he discovered that Aristotle believes he is planning
to attack the Persians. If Aristotle has a piece of self-knowledge—specifically, if he
knows that he believes that Alexander is planning to attack—then he is in a
position to take the proper precautions.15 It would thus be no surprise if
following BEL is widespread. The transparency account is economical in the
required sense: we have the capacity to follow BEL, and there is at least no obstacle
to that capacity being widely exercised.

15
‘Knows’ rather than ‘believes’ is more natural in this sort of example, in which case Aristotle
needs to know that he knows that Alexander is planning to attack—this will be treated shortly
(section 5.5.1).
The issue of detectivism is more straightforward. First, given that BEL is the chief
route to knowledge of what one believes, it is very plausible that there is no
constitutive dependence between being a believer and having the capacity to find
out what one believes. For convenience, we can take that plausible claim to be part
of the official transparency account. Second, causal mechanisms play an essential
role in the acquisition of knowledge via BEL. Suppose I come to believe (and know)
that I believe that it’s raining by reasoning from the premise that it’s raining.
Inference is a causal process: my belief that I believe that it’s raining was caused by
my belief that it’s raining. In other words, the fact that I believe that I believe that it’s
raining was caused by the fact that I believe that it’s raining. When one gains
knowledge by following BEL (or trying to), the known fact causes the higher-order
belief. The account thus has the two components that define detectivism.
Here a comparison with Nichols and Stich’s version of the inner-sense theory
will be helpful. They assume that there is a “language of thought” (Fodor 1975),
and use Schiffer’s metaphor of the Belief Box (1981) in which sentences in the
language of thought are inscribed. To believe that p is to have a sentence of
“Mentalese” in one’s Belief Box that means that p. Given this picture, Nichols and
Stich explain the mechanism of inner sense as follows:
To have beliefs about one’s own beliefs, all that is required is that there be a Monitoring
Mechanism (MM) that, when activated, takes the representation p in the Belief Box as
input and produces the representation I believe that p as output. This mechanism would
be trivial to implement. To produce representations of one’s own beliefs, the Monitoring
Mechanism merely has to copy representations from the Belief Box, embed the copies in a
representation schema of the form I believe that___, and then place the new representations
back in the Belief Box. The proposed mechanism (or perhaps a distinct but entirely parallel
mechanism) would work in much the same way to produce representations of one’s own
desires, intentions, and imaginings. (Nichols and Stich 2003: 160–1)16
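Taken at face value, the Box metaphor does make the mechanism look simple. Here is a deliberately naive sketch of the recipe just quoted, with the Belief Box modeled as nothing more than a list of sentence-strings; the point of the sketch is only to display the copy-embed-refile structure, and (as the next paragraph notes) its triviality is an artifact of the metaphor.

    # Naive sketch of the quoted Monitoring Mechanism recipe, taking the Belief Box
    # metaphor at face value: the "Box" is just a list of sentence-strings, and
    # monitoring is copying, embedding, and re-filing. (Illustrative only.)

    def monitoring_mechanism(belief_box):
        """Copy each representation p, embed it in 'I believe that ___',
        and place the new representation back in the Belief Box."""
        copies = list(belief_box)                     # copy representations from the Box
        for p in copies:
            belief_box.append(f"I believe that {p}")  # embed and re-file
        return belief_box

    box = ["it is raining", "the doorbell is ringing"]
    monitoring_mechanism(box)
    # box now also contains "I believe that it is raining" and
    # "I believe that the doorbell is ringing"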

Nichols and Stich’s claim that the Monitoring Mechanism would be “trivial to
implement” seems to be the result of taking the Belief Box metaphor a little too
seriously, but this point can be left aside.17 For our purposes the important point

16
‘The Mentalese representation I have a beer’ is of course shorthand for ‘the language-like
neural representation that means that I have a beer.’
17
To say that a representation R is “in the Belief Box” is to say that R has a certain complex
functional/computational property (which in the present state of ignorance cannot be characterized
much further). So, “to produce representations of one’s own beliefs, the Monitoring Mechanism has
to copy” representations that have such-and-such functional/computational property, “embed the
copies in a representation schema of the form I believe that___,” and then ensure that the new
representations have such-and-such functional/computational property. Whether this mechanism
would be “trivial to implement” is not something we are now in a position to know. For similar
reasons, Nichols and Stich have no explanation of privileged access.
is that the MM theory is detectivist but extravagant, hence like the transparency
account in the first respect but unlike it in the second. It is extravagant precisely
because the causal relation between (say) the first-order belief that p and the
second-order belief that I believe that p is not secured by the exercise of a general
epistemic capacity needed for other domains. In particular, it is not secured by
a capacity for inference. On Nichols and Stich’s view, the transition from
“the representation p in the Belief Box” to “the representation I believe that p”
(also in the Belief Box) is no kind of inference at all. It is this feature that allows
their theory—unlike the transparency account—to generalize easily to all other
mental states.

5.5 Extensions
This section considers three extensions: knowing that one knows, knowing that
one lacks a belief, and knowing that one confidently believes.
5.5.1 Knowing that one knows
According to the “KK principle,” if one knows that p, one knows (or at least is in a
position to know) that one knows that p. The principle is not much in favor these
days, largely because of the arguments against it in Williamson 2000.18 None-
theless, often one knows that one knows. I know that I know that it’s raining, for
example. How do I know that? Adapting the famous passage from Evans on the
epistemology of belief, quoted in section 1.3: in making a self-ascription of
knowledge, one’s eyes are, so to speak, or occasionally literally, directed
outward—upon the world. If someone asks me ‘Do you know that it’s raining?’
I must attend, in answering him, to precisely the same outward phenomena as
I would attend to if I were answering the question ‘Is it raining?’ This sounds
equally plausible as the original claim about belief, suggesting that I know that
I know that it’s raining by following the rule:
KNOW If p, believe that you know that p.
Recall that when one follows a rule R, one recognizes (hence knows) that
conditions C obtain. So following KNOW implies that one knows that p, and
thus KNOW, like BEL, is self-verifying. BEL is also strongly self-verifying: if one tries

18
Williamson argues for the failure of the KK principle from the “safety” conception of
knowledge. However, if we do come to know that we know by following KNOW, Williamson’s
argument seems vulnerable, and a (qualified) version of KK might be correct (Das and Salow 2016;
see also McHugh 2010).
to follow it, one’s belief about one’s beliefs is true. KNOW, of course, lacks
this feature: if one tries to follow KNOW, one’s belief that one knows that p may
well be false.
Relatedly, BEL scores better on the dimension of safety, for there are
many ways in which one’s belief that one knows that p could have been wrong:
specifically, only Type II errors are impossible. (See section 5.3.2.) Unsurpris-
ingly, it is easier to be wrong about whether one knows that p than about whether
one believes that p. Still, even KNOW compares favorably with DOORBELL, which is
subject to errors of all three types.

5.5.2 Knowing that one does not believe


One doesn’t just have knowledge of what beliefs one has; one also has knowledge
of what beliefs one lacks. For example, I know that I lack the belief that there are
an odd number of words in The Varieties of Reference. A comprehensive account
of knowledge of one’s beliefs should include these cases too.
In fact, the quotation from Evans in section 5.3.2 suggests that the transpar-
ency procedure already has the absence of belief covered:

I get myself in a position to answer the question whether I believe that p by putting into
operation whatever procedure I have for answering the question whether p.
(1982: 225; emphasis added)

But this is a slip, as Sosa in effect pointed out (2002: 275–6). Sometimes I do not
believe that p because I have not made up my mind: I neither believe that p nor
believe that not-p. In this sort of case, I do not have an answer to the yes-no
question Is it true that p? The transparency procedure would only work here if
the world itself hadn’t made up its mind, or at least appeared to me that way:
glancing outward, I see that there is no fact of the matter whether p, and conclude
that I have no opinion either way. Even if one can make sense of there being no
fact of the matter whether p, this is obviously of no help. Usually when I have not
made my mind up there is a fact of the matter, as with the example of the number
of words in Varieties.
So how does one know that one does not believe that p?19 There are two kinds
of case. First, one believes that not-p; second, one neither believes that p nor

19
This is an instance of the schematic question How does one know that one is not in mental
state M?, which any theory of self-knowledge needs to address (cf. Stoljar 2012). However, to keep
the discussion focused on the central issues, other instances of the schematic question will not be
explicitly considered in this book.
believes that not-p. The first case is the easiest. If one believes that not-p, it is a fair
bet that one does not believe that p. So a plausible rule is:
NOTBEL If not-p, believe that you do not believe that p.
Since sometimes one’s beliefs are contradictory, NOTBEL—unlike BEL—is not self-
verifying. Following NOTBEL could lead to error. But—far from being an
objection—this seems to be exactly what we want. Sometimes an application of
NOTBEL (she does not love me, therefore I do not believe that she loves me) can
indeed lead one astray.
The harder case is the second: one cannot follow NOTBEL, because one does not
believe that not-p. Consider a particular example. I have no opinion on whether
it’s raining in New Delhi; how do I know that I do not believe that it’s raining in
New Delhi?
Now I do know many things that are relevant to this question: I am
currently nowhere near New Delhi, I have not read The Times of India
today, or spoken by telephone with anyone living in New Delhi. And this
sort of Rylean evidence is, at least on many occasions, sufficient for me to
know that I do not believe that it’s raining in New Delhi. We can summarize
my evidence by saying I know I am in a poor epistemic position as to whether
it’s raining in New Delhi. Thus a plausible hypothesis is that the second case is
covered by the rule:
NOVIEW If you are in a poor epistemic position as to whether p, believe that
you do not believe that p.
Exactly what sorts of cues fall under the umbrella term ‘poor epistemic position’
have (in effect) been examined by psychological research on the “feeling of
knowing.” They seem to be broadly Rylean, like the examples above (for a
summary, see Dunlosky and Metcalfe 2009: ch. 4). Since one’s access to one’s
poor epistemic position is not particularly privileged, self-knowledge that results
from following NOVIEW will not be particularly privileged either; neither will it be
very peculiar. Pace Sosa (2002: 276), this again seems to be a welcome result,
rather than an objection.20

20 False beliefs to the effect that one lacks a belief are commonplace:
‘Is Naypyidaw the capital of Myanmar?’
‘Sorry, I don’t know. I have no opinion either way . . . [a day passes] . . . Oh, I’ve just
remembered—it is Naypyidaw.’

5.5.3 Knowing that one confidently believes


Beliefs vary in strength: one may believe that p more or less firmly or confidently.
(Importantly, these are not the so-called “degrees of belief” found in Bayesian
epistemology.21) And we often have knowledge of the confidence we attach to
beliefs. For example, I know that I believe the battle of Stirling Bridge was in
1297 more confidently than I believe that my bike is where I left it a few hours
ago. I know that I am quite convinced that I live in Cambridge, and that my
belief that the Democrats will win is on the shaky side. How do I know these
things? Since the world itself is not unsure that my bike is where I left it (or sure,
for that matter), any more than the world itself is undecided about the number
of words in Varieties, some departure from the simple BEL model is to be
expected. That raises the concern that we have already reached the limits of
an economical approach to self-knowledge. Perhaps knowing how confidently
one holds a belief requires knowledge of a “feeling of confidence,” detectable
only by inner sense.
Why do beliefs vary in strength? Not all beliefs are created equal, even if we
restrict attention to those that amount to knowledge. I may know that Smith is a
murderer on the basis of fingerprints and blood drops, even though some of my
knowledge points in the opposite direction—Smith had no motive and an alibi,
say. In contrast, I may know that Jones is a murderer on a much stronger basis: in
addition to prints and blood, the murder was captured on video, Jones had
means, motive, and opportunity, and so forth. If I receive a letter claiming to
prove Jones’ innocence, I may fairly toss it in the wastebasket; it would be prudent
to take a similar letter pleading Smith’s case more seriously. Believing in Jones’
guilt more strongly than in Smith’s helps ensure that my dispositions are of this
appropriate kind.

21 The terminology of ‘degrees of belief’ is unhappy, and the alternative terminology of
‘credences’ is much better: to have credence .5 in the proposition that this coin will land heads
is not to believe anything. Insisting that it is to believe that the coin will land heads “to degree .5” is
to substitute jargon for an explanation. On the difference between strength of belief and credences
(or subjective probabilities), see Williamson 2000: 99.
Knowledge of one’s credences is rarely examined (an exception is Dogramaci 2016). It is not
treated here because it is unclear whether we have credences in the first place, pace the vast literature
to the contrary (for skepticism, see Holton 2014).
The topic of this section (and others in this chapter) intersects with the burgeoning literature on
(non-human) animal metacognition. Partly for reasons of space, and partly because the extent (and
even existence) of animal metacognition is controversial, it is passed over here. For a skeptical
survey, see Carruthers and Ritchie 2012.

One’s degree of confidence in a proposition need not just reflect the evidence
pro and con. Suppose I identified Smith’s prints and blood myself, and that
I know that my previous application of such forensic methods has not been
perfect. In fact, these are Smith’s prints and blood. Further, we may suppose
that I know that they are: some past mistakes are quite compatible with present
knowledge. The evidence of my imperfect reliability is not evidence against
Smith’s guilt. (If a thermometer reads 70°F, and I have evidence that it is
malfunctioning, causing it to read at random, this is no reason to believe that
the temperature isn’t 70°F.) Rather, it is evidence for my belief in Smith’s guilt
being improperly based, not amounting to knowledge. Even though I know that
Smith is guilty, it would be wise to continue the investigation. Perhaps I should
seek a second opinion, or try to find other kinds of evidence. A high degree of
confidence in Smith’s guilt would therefore be inappropriate, and in a case of
this kind my degree of confidence will indeed be on the low side. Evidence that
one’s belief is not knowledge can trickle down to the belief itself, lessening one’s
confidence in the proposition believed.
How do I know that my belief that the battle of Stirling Bridge was in 1297 is
held with high confidence? As with knowing that one lacks a belief, psychological
research, this time on “retrospective confidence judgments,” suggests a broadly
Rylean answer (for a summary, see Dunlosky and Metcalfe 2009: ch. 4). Items of
relevant evidence might include: (a) someone told me, (b) I have visited Stirling,
(c) I am generally familiar with Scottish history, or (d) I have a good memory for
the dates of battles. Alternatively, how do I know that my belief that my bike is
where I left it is held with low confidence? Items of relevant evidence might
include: (a) there are many bike thefts in this city, (b) I have a valuable bike,
(c) I misremembered where my bike was just yesterday, or (d) I didn’t lock it. For
short: my belief that the battle of Stirling Bridge was in 1297 has high epistemic
credentials, and my belief that my bike is where I left it has low epistemic
credentials. Since my confidence is correlated with (what I take to be) the belief ’s
epistemic credentials, a plausible hypothesis is that knowledge of confidence is
covered by the rule:
CONFIDENCE If you believe that p, and your belief has high (low) epistemic
credentials, believe that you believe that p with high (low) confidence.
CONFIDENCE and NOVIEW should not be read as requiring that their followers
think of their beliefs as epistemically credentialed, or think of themselves as being
in a poor epistemic position: these are just schematic terms, to be filled in with
what may well be a grab-bag of cues and heuristics, varying from occasion to
occasion.
CONFIDENCE and NOVIEW are not neutral rules, and so following them requires
a prior capacity for self-knowledge. However, by the end of this book, it should be
plausible that this prior capacity has been explained economically.

5.6 Objections
This section takes up four objections against the view that we typically follow
(or try to follow) BEL in determining what we believe.22

22 Here are three other objections that are probably less salient than the four treated in the main text.
Shoemaker (2011) objects that BEL as stated is not “a rule we can be said to follow,” because a rule
“tells one to . . . perform an act” and believing is not an act. He then reformulates BEL as BEL*: “If p,
judge that you believe that p” (240). This is a rule we can be said to follow but, Shoemaker says, “one
can have the standing belief that one believes something without occurrently judging, or having ever
occurrently judged, that one does, so appeal to BEL* does not explain how we have or are justified in
having such standing beliefs” (240). In reply, first note that the official explanation of “following a
rule” (section 5.2.1) does not presume that believing is an act, and so—at least in our stipulated
sense—there is no barrier to following BEL. Still, as in effect noted in section 5.2.1, if “occurrent
judgment” is simply the onset of the standing state of belief, there is no harm in reformulating BEL as
BEL*. But then there is no reason to suppose that one can have “the standing belief that one believes
something without . . . having ever occurrently judged, that one does,” so Shoemaker’s objection fails.
If, on the other hand, Shoemaker intended “occurrent judgment” to be something more loaded,
then—pending further argument—there is no reason to suppose that we follow BEL*.
Valaris (2011; see also Barnett 2016) notes that BEL-style reasoning should not be used
hypothetically, in the scope of a supposition, unlike modus ponens. He then argues as follows:
[I]n transparent self-knowledge the subject comes to believe that she believes that p by
inference from p. That is, she comes to believe that she believes that p on the grounds
that p. But this cannot be correct. It cannot be correct, because in that case it should be
possible for the subject to make the same inference while merely assuming that p for
the sake of the argument. In other words, it should be possible to use the doxastic
schema in hypothetical reasoning. Since we cannot use the doxastic schema in
hypothetical reasoning, it follows that it is not the case that, in transparent self-
knowledge, the subject believes that she believes that p on the grounds that p.
Transparent self-knowledge is not inferential. (323)
The problem with this argument is the claim that the subject believes that she believes that p “on
the grounds that p,” a locution on which Valaris places considerable weight. On one interpretation,
this means that the fact that p is adequate evidence for the conclusion that the subject believes that
she believes that p, in which case Valaris is correct to conclude that the inference should work
equally well in hypothetical reasoning. But, of course, on this interpretation the subject doesn’t form
her second-order belief on the grounds that p, so the quoted argument fails at the second sentence
(see section 5.6.1 immediately below). On an alternative interpretation, talk of “grounds” is just a
way of emphasizing that the subject draws her conclusion “without considering her own mental
states at all” (322); in particular, without considering whether she believes that p. But on this
interpretation Valaris is incorrect to conclude that the inference should work equally well in
hypothetical reasoning, so the quoted argument fails at the fourth sentence.
Smithies and Stoljar (2011) offer the example of:
WATER If x is composed of H2O, believe that x is composed of water,
to show that “the mere fact that a rule is self-verifying is not sufficient to explain our entitlement to
follow it” (14), which basically amounts to the correct point that a self-verifying rule need not be
knowledge-conducive. However, the crucial claim is that a self-verifying rule like BEL produces safe
beliefs. The first-pass definition of a safe belief given earlier, as one “that could not easily have been
false,” has the result that a belief in a necessary truth is automatically safe. This is a fatal problem if
knowledge is (non-reductively) identified with safe belief. When the earlier definition of safety is
adjusted to accommodate this and other problems (see, e.g., Manley 2007), WATER (unlike BEL) turns
out not to produce safe beliefs, despite being self-verifying.

5.6.1 The inference is mad (Boyle)


According to Boyle, “[t]he basic reason to reject the idea that I infer a fact about
my own psychology from a fact about the world is just this: the inference is mad”
(2011: 230). He elaborates:
Suppose for the sake of argument that I have arrived at the belief that I believe P by
inferring according to [the] ‘doxastic schema’:
P
I believe P
To believe that I believe P is to hold it true that I believe P. Being a reflective person,
I can ask myself what grounds I have for holding this true. The answer ‘P’ is obviously
irrelevant. I am asking what shows that the proposition I believe P is true, and a modicum
of rational insight will inform me that, even if it is true that P, this by itself has no
tendency to show that I believe it. What would support my conclusion, of course, is the
fact that I, the maker of this inference, accept the premiss that P. But to represent that as
my basis would be to presuppose that I already know my own mind on the matter, and
that would undermine Byrne’s account.
. . . I can put my objection in a provisional way by saying: a belief, once formed,
doesn’t just sit there like a stone. What I believe is what I hold true, and to hold
something true is to be in a sustained condition of finding persuasive a certain view
about what is the case. Even if we grant that a disposition to pass from one content to
another could deposit various arbitrary beliefs in my mind, those beliefs would be
unsustainable if I, understanding their contents, could see no reasonable basis for
holding them true.
We should, moreover, question the idea that inference is merely a reliable process
that deposits beliefs in my mind. A (personal-level) inference is not a mere transition
from a stimulus to a response; it is a transition of whose terms I am cognizant, and
whose occurrence depends on my—in the normal case: persistently—taking there to be
an intelligible relation between these terms. This is what makes it possible for an
inference to leave me with a sustainable belief: I can reflect on why I draw a certain
conclusion, and when I do, I can see (what looks to me to be) a reason for it. It is hard to
see how the premiss of Byrne’s doxastic schema could supply me with a reason to draw
its conclusion.
Byrne’s inferential approach to doxastic transparency thus appears to face a dilemma: it
must either represent the subject as drawing a mad inference, or else must admit that her
real basis for judging herself to believe P is not the sheer fact that P, but her tacit
knowledge that she believes P. The second horn of this dilemma should be unacceptable to
Byrne: embracing it would mean giving up on his project. (2011: 230–1)

As Boyle says in his final paragraph, to take the second horn of the dilemma
would be to reject the transparency account. We may thus concentrate on the
first horn. Imagine, then, that I have concluded that I believe that p from the
premise that p. Imagine, further, that I have applied the doxastic schema to
the conclusion, and further concluded that I believe that I believe that p. I may
then ask myself why I believe that I believe that p; specifically, what (distinct)
reasons or evidence do I have for believing it? As Boyle says, the answer ‘p’ is
“obviously irrelevant,” and there seem to be no other candidates. I am thus forced
to admit that my belief about my own belief lacks any discernable support. But,
far from being a problem, this seems to be exactly what we want. The view that
self-knowledge is (often) not based on evidence is frequently taken as a starting
point for any theory, from Armstrong to Moran. (Recall the quotation from
Davidson in section 3.2: “self-attributions are not based on evidence.”) Further,
the fact that I can find no evidence in support of my belief will not, by itself, lead
me to give it up. Perhaps I can find no evidence in support of my belief that
the battle of Stirling Bridge was in 1297, but I may (rightly) continue to hold it
nevertheless.
That the transparency account agrees with Davidson that knowledge of one’s
beliefs is unsupported is a point in its favor, not a strike against it. And in any
case, if this is Boyle’s complaint, it is targeted very widely—so widely, in fact, as
to include his own theory of self-knowledge. His objection must, then, be
something else.
Boyle’s penultimate paragraph suggests that the problem, as he sees it, is not
(merely) that my belief that I believe that p is not derived from adequate evidence,
but rather that it is derived from inadequate evidence. If the objection is simply
this, then it has already been answered (sections 5.2.4 and 5.2.5), but Boyle thinks
the first-person perspective brings out a deeper difficulty. Specifically, when I ask
myself what grounds I have for drawing the conclusion that I believe that p, a little
reflection appears to tell me that my sole premise was that p, which is manifestly
not a reason, or item of evidence, that supports my conclusion. I am thus like
someone who realizes that she has drawn the conclusion that q from the
manifestly inadequate premise that either q or r—on discovering her mistake,
she will cease to believe that q. In fact, my position seems to be even worse. My
provisional conclusion is that my belief that I believe that p was based on an
inadequate premise, in which case that second-order belief will evaporate. But
how did I come to know that I had this second-order belief in the first place?
A little more reflection appears to tell me that I drew the conclusion that I believe
that I believe that p from the sole premise that I believe that p, which is again
manifestly inadequate. Wait—how did I come to know that I had this third-order
belief, the belief that I believe that I believe that p? Again, a little reflection
appears to tell me that its credentials are equally miserable, so this third-order
belief should evaporate in its turn. The transparency account thus has a kind of
instability: if I am rational and reflective, then my self-knowledge of my beliefs
will progressively diminish, without end. It is not straightforward to get from
such instability to the falsity of the transparency account, but at least for the sake
of the argument we can take the instability to be unacceptable.
The first point to emphasize (recall section 5.4) is that first-person reflection
does not tell me that I have inferred that I believe that p from the premise that p.
My monologue above was misleading: I should not have said that “a little
reflection appears to tell me that my sole premise was that p.” If there is
instability, it is not generated by examining my own mind.
This does not completely evade the objection, however. Granted that first-
person reflection is not going to exhibit the instability, what about that combined
with a dose of theory—as, say, presented in this chapter? Suppose my first-person
and theoretical reflections combined “tell me that my sole premise was that p”:
wouldn’t that lead to a rational demolition of my self-knowledge along the same
lines? No. As argued earlier, this is a good inference in the sense of being
knowledge-conducive; Boyle has not directly attacked these earlier arguments.
Applying this to the present case, if I am rational I will conclude that although my
second-order belief is unsupported, that is no reason for ditching it. There is
consequently no instability.23

5.6.2 There is no inference (Bar-On)


According to Bar-On, the transparency procedure is “too epistemically indirect”
(2004: 113). This is because avowals (psychological self-ascriptive utterances such
as “I believe that it’s raining”) are “baseless”: in “issuing an avowal . . . [i]n the
normal case . . . I do not reason, or draw some inference . . . Avowals are appar-
ently non-evidential” (2). She explains the conflict with transparency as follows:
Avowals, it is often observed, do not seem to be made on any basis. Yet, on Evans’s
Transparency View, we are to take them to be self-judgments that are arrived at on the
basis of consideration of the relevant worldly items. (2004: 113)

23 For further discussion of Boyle’s “mad inference” objection, see Setiya 2012.

The puzzle of transparency shows the need for a distinction here, which talk of
“baselessness” can obscure. An item of knowledge is non-inferential just in case
it is not the result of inference or (theoretical) reasoning, which (as noted in
section 1.4) is assumed to involve a broadly causal process of transitions
between belief states. Recall from section 3.2 that an item of knowledge is
unsupported just in case it is not the result of reasoning from adequate evidence;
equivalently, it is not based on adequate evidence. If an item of knowledge is
non-inferential, it is unsupported, but one of the main points of this chapter is
that the converse does not follow: knowledge may be gained by reasoning from
inadequate evidence.
Put in terms of this distinction, Bar-On’s complaint against the “Transparency
View” is that one’s knowledge that one believes that it’s raining is (typically) non-
inferential, not that it is unsupported. (Indeed, Bar-On herself agrees that such
knowledge is unsupported.)
Consider a typical case where one believes that one believes that it’s raining.
Since Bar-On’s complaint does not presume anything about the reliability of such
second-order beliefs, we may suppose that one’s second-order belief is true—one
does believe that it’s raining. Is one’s second-order belief the result of inference,
and hence the product of a causal transition between belief states?
Suppose that one’s second-order belief is the result of a transparency-style
inference, from the ostensible fact that it’s raining to the conclusion that one
believes that it’s raining. Then the relevant causal transition is between the first-
order belief that it is raining, and the second-order belief that one believes that it’s
raining. Presumably Bar-On’s worry is that this necessary condition for
inference—the causal transition between belief states—is absent. Hence Bar-On
and the transparency theorist both agree that one has the second-order belief and
the first-order belief, but disagree about whether the second-order belief is caused
by the first. Once this is made explicit, it is hard to see why Bar-On has given the
transparency theorist much cause for concern. Granted, it is unobvious that there
is a causal transition—but it is equally unobvious that there isn’t.

5.6.3 The account conflates do believe and should believe (Bar-On)


Bar-On has a second complaint:
I am really convinced right now that the Democrats will win the elections, perhaps for no
good reason. But if I were to consider the question on its merits—Will the Democrats win
the election?—I might, more reasonably, declare that I believe the Republicans will win.
I will thereby be falsely ascribing to myself a belief I do not have . . .
Looking at the world, and directly assessing relevant states of affairs and objects, would
seem to be a secure way of determining what is to be thought about p, . . . and so on. But,
on the face of it, the question about avowals’ security concerns our ability to tell what we
are in fact—as opposed to what we should be—thinking. (2004: 119)

Realistic detail may be added to Bar-On’s story by stipulating that I know
I believe that the Democrats will win. So, apparently, I have not arrived at this
conclusion by following the transparency procedure. For if I had, and considered
the question “on its own merits,” then I would have falsely concluded that
I believe that the Republicans will win.
Suppose I am “really convinced” that the Democrats will win. Trying to follow
BEL, I may thus reason from the premise that the Democrats will win to the
conclusion that I believe that the Democrats will win. And that conclusion is,
of course, correct. Suppose, alternatively, I decide to give the matter of the
forthcoming elections more thought. I “consider the question on its merits,”
and after further investigation determine that, after all, the Republicans will win.
So I have changed my mind: I now believe that the Republicans will win. That the
Republicans will win is now available to me as a premise. Trying to follow BEL,
I conclude that I believe that the Republicans will win. I am not “falsely ascribing
a belief to myself”: as before, my conclusion is correct.24

24 See also the discussion of Moran in section 3.3.

5.6.4 The account fails when one lacks a belief (Gertler)


A closely related objection to that in section 5.6.3 is due to Gertler. Suppose I am
faced with a box. I have no idea whether it contains a beetle. Typically, in such a
situation I will know, or be in a position to know, that I lack the belief that there is
a beetle in the box. In general:
(1) If (at t1) I do not believe that p, and I happen to wonder whether I believe that p, I will
not (at t2) self-attribute the belief that p. (Gertler 2011: 128; thesis renumbered)

However, Gertler argues that the transparency procedure cannot respect (1):
To see this, let us examine in more detail how BEL operates. Essentially, the rule consists of
two steps.
Step One: Try to determine whether p is true. If it is, move on to Step Two.
Step Two: Believe that you believe that p.
If I try to follow this rule, and conclude that p, the first step will result in my judging that p.
But of course I may have had no belief about whether p before undertaking this procedure.
In that case, the procedure will help to bring about my belief that p. So use of the rule will
contribute not only to the justification or warrant component of the resulting
knowledge—that I believe that p—but also to its truth component. After all, ‘I believe that
p’ might have been false prior to my use of the rule. (2011: 129)

Return to the box. I wonder whether I believe that there is a beetle inside it.
Applying Step One of the transparency procedure, I first try to determine whether
it contains a beetle. The easiest way of doing this is to open the lid. Indeed, there
is a beetle inside, so by Step Two I conclude that I believe that there is a beetle
inside. Since this is correct, there’s no problem there. The problem, rather, is that
this is obviously not what happens in such cases. I won’t open the lid, and will
instead conclude that I lack the belief that there is a beetle in the box.25 Gertler
has, in effect, raised the issue of how I know that I have no opinion. If BEL is my
only resource, then I am stumped. But it isn’t: NOVIEW can come to the rescue
(section 5.5.2).
The transparency account fits belief and knowledge; the next order of business
is to examine whether it fits perception too.

25 Cf. Shah and Velleman 2005: 16, and the discussion in Moran 2011: 220–4.

6
Perception and Sensation

If I descry a hawk, I find the hawk but I do not find my seeing of the hawk.
My seeing of the hawk seems to be a queerly transparent sort of process,
transparent in that while a hawk is detected, nothing else is detected answering
to the verb in ‘see a hawk.’
Ryle, The Concept of Mind

6.1 Introduction
Chapter 5 defended the transparency account for knowledge and belief: one may
(and typically does) come to know what one believes and knows by “directing
one’s eyes outward—upon the world,” in Evans’ phrase. Specifically, one may
acquire knowledge of what one believes by following, or trying to follow:
BEL If p, believe that you believe that p.
And one may acquire knowledge of what one knows by following:
KNOW If p, believe that you know that p.
Chapter 5 also argued that this inferentialist account of self-knowledge has three
features. First, it explains both privileged access (for belief) and peculiar access
(for belief and knowledge). The key to the explanation of privileged access is the
observation that BEL is self-verifying (if one follows it, one’s higher-order belief is
true) and also strongly self-verifying (if one tries to follow it, one’s higher-order
belief is true). Second, the account is economical: it explains self-knowledge in
terms of epistemic capacities and abilities that are needed for knowledge of other
subject matters. Third, the account is detectivist: broadly causal mechanisms play
an essential role in the acquisition of self-knowledge.
As mentioned in Chapter 1, belief and knowledge are not the only initially
plausible candidates for this sort of treatment—perception is too. Imagine Gilbert
Ryle, out for a stroll, pausing to descry a hawk sitting on a nearby fence post. Here
am I, looking at Ryle and the hawk. To me, Ryle’s seeing of the hawk is a
perceptually manifest fact, as is the fact that the hawk is on the fence post. My
own seeing of the hawk, on the other hand, is quite a different matter. It seems, as
Ryle puts it, to be “a queerly transparent sort of process.”1 I see Ryle and note that
his gaze is hawkwards; I do not see myself, or my eyes. Moreover, it does not ring
true to say that I discover that I see the hawk by some special introspective sense.
There is no switch in attention—say, to myself or to a “visual experience”—when
Ryle asks me, ‘Do you see the hawk?’ I answer by attending to the hawk. (Indeed,
if I attend to something else, I might well give the wrong answer.2)
Chapter 4 defended the idea that there is an equally good prima facie case for a
transparent epistemology of bodily sensation; otherwise put, the puzzle of
transparency is equally pressing for both perception and sensation. This chapter
extends the transparent epistemology of belief and knowledge given in
Chapter 5, first to perception and then to sensation.
It will turn out that these two topics are very closely related. According to a
widespread view, to have a bodily sensation—a headache, a tickle in one’s
throat, an itch on one’s neck—is to perceive something occurring in one’s
body. Evidence supporting this perceptual theory of sensation will be given
later. The epistemology of one’s bodily sensations thus becomes a special case
of the epistemology of one’s perceptions, and the basic transparency account
for perception applies.3 As one might expect, there are some complications.

6.2 Perception
The transparency account can be extracted from Evans’ brief remarks about the
“self-ascription of perceptual experiences” (1982: 226), quoted in Chapter 1:
[A] subject can gain knowledge of his internal informational states [his “perceptual
experiences”] in a very simple way: by re-using precisely those skills of conceptualization
that he uses to make judgements about the world. Here is how he can do it. He goes
through exactly the same procedure as he would go through if he were trying to make a
judgement about how it is at this place now . . . he may prefix this result with the operator
‘It seems to me as though . . . ’. (227–8)

1 Ryle then goes on to claim that “the mystery dissolves when we realize that ‘see,’ ‘descry,’ and
‘find’ are not process words, experience words, or activity words . . . The reason why I cannot catch
myself seeing . . . is that [this verb is] of the wrong type to complete the phrase ‘catch myself’ ” (1949:
152). Since the mystery can be stated without falsely assuming that ‘see’ is a “task verb” (Ryle’s
phrase) like ‘run’ and ‘aim,’ Ryle’s proposed solvent does not work.
2 With the defensible assumption that one may see an object without attending to it (see Block
2013).
3 Cf. Evans 1982: 230–1.

Here Evans is concerned with knowledge of how things perceptually appear. But
the point is evidently supposed to apply to knowledge of what one sees. The
subject may prefix a phrase referring to an object in the scene before his eyes
(‘a hawk’) with ‘I see.’
Although the quotation has the subject attaching a sentential operator to a
sentence, presumably Evans did not mean to tie knowledge of one’s perceptual
states to language. Recast in nonlinguistic terms and restricted to the case of
seeing an object, the procedure suggested by the quotation is that one can come
to know that one sees an object by an inference whose sole premise concerns
one’s (typically non-mental) environment, “how it is at this place now,” as Evans
puts it. (In fact, this is not Evans’ view. This will become clear in section 6.2.9,
where the elision in the quoted passage from Evans is filled in.)
If we remain similarly coy for the moment about the premise, this inference—
applied to our running example of seeing a hawk—can be set out as follows:

It is thus-and-so at this place now.
———————————
I see a hawk.

The puzzle of transparency, in this case, is that whatever “thus-and-so” turns out
to be, it will have nothing to do with me or perception, so how can this inference
possibly yield knowledge? Put in terms of the apparatus of epistemic rules
explained in Chapter 5, on the transparency account I come to know that I see
a hawk by following a rule of this sort:
HAWK If it is thus-and-so at this place now, believe that you see a hawk.
And the puzzle of transparency is that since the antecedent merely concerns the
scene before my eyes, which has no relevant connection either to me or to vision,
HAWK must be a bad rule.
Now we have already seen how to solve the puzzle of transparency for belief
and knowledge: BEL and KNOW yield safe beliefs, beliefs that could not easily have
been false. Will a solution along similar lines work for HAWK? To answer that
question, the template ‘thus-and-so at this place now’ needs to be filled in. And as
soon as we try to do that, another—potentially more serious—objection is
apparent.

6.2.1 The amodal problem


A first thought is to fill in the ‘thus-and-so’ along these lines:
HAWK† If there is a hawk over there, believe that you see a hawk.
OUP CORRECTED PROOF – FINAL, 8/3/2018, SPi

. PERCEPTION 

One might hope that the explanation previously given of the virtues of KNOW will
smoothly carry over to HAWK†. Suppose I follow HAWK†, and so know that there
is a hawk there. Does the fact that I have this knowledge make it likely that I see a
hawk? If so, then the virtues of HAWK† at least approach those of KNOW, and the
puzzle of transparency for perception is not obviously intractable.
However, on second thought this defense of HAWK† is hopeless. There are
numerous ways of knowing that there is a hawk there that do not involve
currently perceiving the hawk, let alone seeing it. So the probability that I see a
hawk, given that I know that there is a hawk there, is low. To conclude that I see
a hawk is to take a stab in the dark.
Suppose we try inserting the subject into the antecedent:
HAWK†† If there is a hawk right in front of you, believe that you see a hawk.
This sort of maneuver certainly helps increase the probability that I see a hawk,
conditional on my knowing the antecedent. But again, there are many other non-
visual ways in which I might know the antecedent. I might hear the hawk, or see a
sign marked ‘Hawk Aviary.’ To conclude that I see a hawk is to take a stab in the
dusk. A little better, but not good enough. What’s more, I clearly do not follow
either of the above rules: if I know that there is a hawk right in front of me, that by
itself does not make me remotely inclined to conclude that I see it.
The root of the difficulty is that information does not wear its provenance from
a particular sensory modality on its face—information is amodal. Perhaps the
first-person epistemology of belief and knowledge is transparent. But the amodal
nature of information, it might be thought, shows that perception is where this
idea irretrievably runs into sand.
What are the alternatives?

6.2.2 Alternatives to transparency


According to the transparency account, I know that I see a hawk by an inference
from a single premise about the hawk-infested landscape beyond. There are two
main alternative options.
Option 1 is that no premise about my environment is needed: I know that I see
a hawk without appealing to evidence concerning the scene before my eyes. Since
any such environmental evidence will be gathered perceptually, we can put
option 1 as follows: I know non-observationally that I see a hawk. (This should
not be taken to preclude my knowing that I see a hawk by “inner sense” (see
Chapter 2).) Option 2 is that although a premise about my environment is
needed, it is not enough: additional mental evidence is required. Let us take
these in turn.

6.2.3 Option 1: non-observational knowledge


Option 1, that I know non-observationally that I see a hawk, requires immediate
amendment. First, note that this does not apply to every case of knowing that
I see a hawk, because sometimes an environmental premise is plainly needed:
I know that I see that bird (pointing to a hawk perching atop a distant tall tree),
but I am not in a position to know that I see a hawk. Ryle is passing by, and
informs me that the bird is a hawk; with this environmental premise in hand,
I conclude that I see a hawk. Second, extending this first point, perhaps one can
never know non-observationally that one sees a hawk—all such knowledge is
based on evidence that one sees such-and-such, and that such-and-such is a
hawk, with the latter item of evidence being known observationally. (Knowing
something by testimony counts as a case of observational knowledge.) So a more
careful and general statement of option 1 is as follows: knowledge that one sees an
F/this F is either non-observational, or else based on evidence that includes the
fact that one sees a G/this G, known non-observationally.
If there is any non-observational knowledge of this sort, knowledge that one
sees this red spot (pointing to a clearly visible red spot) is an example, or so we
may suppose. Since the fact that one sees this red spot entails that this spot is red,
one may come to know that this spot is red by inference from the fact that one
sees this red spot. Now one may also know that this spot is red simply by looking
at it—an animal with no conception of seeing could use its eyes to know that this
spot is red. So no knowledge that one sees this red spot is necessary. Thus, on this
view, there are two routes to the same conclusion: one may know that the spot is
red twice over, by inference from a non-observationally known fact about what
one sees, and by the more familiar method of simply using one’s eyes.
This result is more than strange. First, note that one may see what is, in fact, a
red spot, even though the spot does not look red (perhaps one is viewing the spot
in very dim light). One is not able to tell by looking that this spot is red, but one
might have various backup routes to that conclusion—perhaps one painted the
spot oneself from a can of red paint. However, the alleged non-observational
backup route is clearly inoperative: although it is true that one sees this red spot,
no amount of introspection will reveal this fact. The obvious explanation is that
the information one obtains by vision about the spot is somehow used to derive
the conclusion that one sees this red spot, but if that is right then option 1 must
be rejected.
Second, note that when one sees a red spot and believes both that this spot is
red and that one sees this red spot, it is not a possibility that two spots are in play.
Could this red spot be a different spot from this red spot that one sees? That is not
a serious question, but if one knew non-observationally that one sees this red
spot, it apparently would be. Return to the situation in which one views this
red spot in dim light. Suppose one remembers that one painted this spot red; on
occasion, one might reasonably wonder whether one’s memory was quite
accurate—perhaps one painted another spot red, not this very spot. As before:
the obvious explanation of why the identity of the spot is never in question is to
say that the visual information about this spot is used to derive the conclusion
that one sees it.
Finally, if I know non-observationally that I see this red spot, then certain
dissociations are to be expected. In particular, one’s vision and reasoning
capacities might be working perfectly normally, while the mechanism that yields
non-observational knowledge that one sees this red spot is broken or absent.
One’s only means of finding out that one sees a red spot would then be similar to
third-person cases: one knows that one sees a red spot because one knows that
there is a red spot right there, that the light is good, that one’s eyes are open, and
so forth. Often one knows through vision about an object’s location and other
features, but is unsure whether someone else sees it (perhaps one does not know
that the person’s gaze is in the right direction). Similarly, someone who only had
third-person access to her states of seeing would sometimes be in a state of
uncertainty about whether she saw an object, while quite certain (via her excellent
vision) about the nature of the object itself. It is safe to say that this bizarre
condition never occurs.4 Pending some explanation of why the non-
observational mechanism never fails in this way, this is a reason for thinking
that option 1 is incorrect.5

4 Block (1997a: 159) notes a close approximation in the empirical literature, the case of “reverse
Anton’s syndrome” described by Hartmann et al. 1991. The patient was initially diagnosed as blind
due to a stroke. Two years later he was found to have spared vision in a 30º wedge in both fields.
Anton’s syndrome patients deny that they are blind; this patient denied that he could see. At one
point he remarked that “you (the examiners) told me that I can see it, so I must be able to see it” (33).
However, the patient’s vision was far from excellent. He could read words, but with limited
accuracy (51% correct on a standard test). Strikingly, he was “unable to discriminate light from dark”
(37). The patient’s cognition was also impaired, with mild language and memory deficits. Further,
sometimes he used perceptual verbs in describing his condition: on a color-naming task, “he
maintained that he could ‘feel’ or ‘hear’ the color” (34). The correct description of the patient’s
predicament is unobvious. As Hilbert notes, “a certain amount of scepticism about the case is in
order” (1994: 449). It is also worth emphasizing (with Hartmann et al.) that reverse Anton’s
syndrome is not clearly documented in any other published case.
5 The bizarre condition is “self-blindness” with respect to seeing (see section 2.2.9). Note that
someone who is self-blind in this way is not the “super-duper blindsighter” of Block 1997b, who has
“blindsight that is every bit as good, functionally speaking, as [normal] sight” (409), except that the
resulting perceptual states lack “phenomenal consciousness.” The super-duper-blindsighter is thus,
as Block says, a “quasi-zombie” (409), or a “visual-zombie,” in something close to the usual sense of
‘zombie.’ The self-blind person, on the other hand, has perfectly normal vision, at least in the sense
that she sees what we see.
If one can see an object without being phenomenally conscious then the transparency account
defended in this chapter is seriously incomplete: it may explain how I know I see a hawk, but does
not explain how I know that my seeing of the hawk is phenomenally conscious. Here the project of
this book meets what Block once called “[t]he greatest chasm in the philosophy of mind—maybe
even all of philosophy” (2003: 165). On one side are those (like Block) “who think that the
phenomenal character of conscious experience goes beyond the intentional, the cognitive and the
functional”; on the other, those who deny it. The transparency account goes naturally with the deniers, so if
Block is right, transparency is at best not the whole story. This fraught issue can only be noted, not
addressed.
Dissociation problems also afflict option 2, but this will not be discussed further. Dissociations
will reappear in section 7.2.

6.2.4 Option 2, first pass: visual sensations


Since option 1 faces some serious objections, let us turn to option 2, that
additional evidence is required. And from a more traditional position in the
philosophy of perception, the need for such evidence is palpable. Seeing an object
is a matter of the object causing distinctive sorts of affections of the mind, “visual
sensations.” It is thus natural to think of knowledge that one sees an object as
resting on evidence about both ends of this causal transaction—evidence about the
object coming from observation, and evidence about the sensation coming from
some other source.6 So, to know that I see a hawk, I need to know, inter alia, that
I am having a visual sensation. Such a sensation is an occurrence in my mind, not
on the fence post beyond, so no wonder peering at the hawk is not sufficient.

6 Of course, on this view evidence about the object is itself derived from evidence about the
sensation or “sense-impression” (recall section 1.5).

An analogy can clarify the traditional position further. I am holding a nettle,
and feel a stinging pain in my hand. How do I know the additional fact that the
nettle is stinging me (i.e. causing the pain)? It would be a mistake to investigate
the issue by concentrating solely on the nettle; rather, I need to attend to
something else entirely, namely the pain in my hand. Putting these two items
of evidence together—that I am holding a nettle, and that I have a pain in my
hand—I can conclude that the nettle is stinging my hand. That conclusion is not
entailed by my evidence, but in the circumstances my evidence strongly supports
it. Likewise, on the present suggestion, I can conclude that I see a hawk on the
basis of two items of evidence: the external non-psychological fact that a hawk is
present, and the internal psychological fact that a visual sensation is occurring.
(Note that placing a substantive restriction on the type of visual sensation would
not be advisable, since almost any kind of visual sensation could accompany
seeing a hawk—it could look blue, or cubical, or whatever.) Knowledge that one
sees an F, then, is obtained by following this rule:
SEEi If an F is present and you are having a visual sensation, believe that
you see an F.
Evidently SEEi is a non-starter. Taking the existence and epistemology of “visual
sensations” for granted, on many occasions one knows that an F is present and
that one is having a visual sensation, yet one does not see an F. SEEi is thus a bad
rule. Moreover, we do not follow it. Suppose I see a sheep in a field; although no
hawk is in sight, I know that there is a hawk in the vicinity. I have no inclination
to follow SEEi and conclude that I see a hawk.7, 8

7 As discussed in Chapter 7 (section 7.3.2), rules are generally defeasible—despite knowing that x
is a hawk, one might have additional evidence that prevents one following SEEi. But it is unclear what
the defeater might be in this case.
8 Another problem is due to the word ‘present’ in the antecedent. This prevents me from always
believing that I see a hawk, since I always believe that there are hawks somewhere. But ‘present’
excludes too much—in principle, I can see a hawk at any distance (cf. seeing a supernova) and also
readily know that I see it.

We can pass over attempts to add epicycles to SEEi, because the nettle analogy is
fundamentally defective. When I see a hawk I do not have a spectacular kind of
migraine headache whose only connection to the hawk is that it is caused by the
hawk. This is basically Ryle’s point when he observes that in the “unsophisticated
use of ‘sensation’” a typical case of seeing does not involve any sensations (1949:
228). One can know what stinging sensations are without knowing anything
about nettles, but insofar as the philosophical notion of a “visual sensation” is
intelligible, it is not likewise only externally related to its causes. Visual sensations
or, better, visual experiences are specified in terms of the portion of the external
world that they purportedly reveal. That is, when I look at the hawk and
recognize it as such, my visual experience is an experience of a hawk. Does this
reconception of visual sensations as visual experiences help rescue option 2?

6.2.5 Option 2, second pass: visual experiences of an F


Start by applying the reconception to SEEi:
SEEii If an F is present and you are having a visual experience of an F, believe
that you see an F.
This straightforwardly copes with the case where I see a sheep in a field and know
that there is a hawk in the vicinity, which I do not see. I do not have an experience
of a hawk, and so am not in a position to follow SEEii.
But what is it for a visual experience to be “of” a hawk? An influential
discussion of this question is in Searle’s 1983 book Intentionality. Searle writes:

I can no more separate this visual experience from the fact that it is an experience of a
yellow station wagon than I can separate this belief from the fact that it is a belief that it is
raining; the “of ” of “experience of” is in short the “of” of Intentionality. (Searle 1983: 39)

An experience of a hawk may be said to be “of” a hawk in the same way that a
belief about a hawk is “of ” a hawk. Experience, then, like belief, has intentionality:
my experience of a hawk and the belief that there is a hawk on the fence post are both
“of” or “about” a hawk. But the parallel, Searle thinks, is even closer. The belief that
there is a hawk on the fence post has propositional content, namely the proposition
that there is a hawk on the fence post. And likewise for visual experiences:
The content of the visual experience, like the content of the belief, is always equivalent to a
whole proposition. Visual experience is never simply of an object but rather it must always
be that such and such is the case. (40)

In the case of an experience of a yellow station wagon, “a first step in making the
content explicit,” Searle says, “would be, for example,
I have a visual experience (that there is a yellow station wagon there).” (41)9

On this (now widespread) content view, perceptual experiences have content, like
belief, hope, and other “propositional attitudes.” To a first approximation, one
may think of the content of the subject’s visual experience as the information (or
misinformation) delivered to the subject by his faculty of vision (cf. Armstrong
1968: 224). When the delivery is one of misinformation, the subject suffers a
visual illusion. Although this is somewhat controversial, the content view is at
least a huge advance over the sense datum theory, and the traditional view
mentioned in section 6.2.4.10

9 Searle’s considered view is that the content is the proposition that “there is a yellow station
wagon there and that there is a yellow station wagon there is causing this visual experience” (1983:
48; see also Searle 2015: chs. 4, 5), which has attracted a lot of criticism. See, e.g., Burge 1991 and
Recanati 2007: ch. 17.
10 See, e.g., Armstrong 1968, Peacocke 1983, Tye 2000, Byrne 2001, Pautz 2010, Siegel 2010; a
recent collection pro and con is Brogaard 2014. An assumption of this book is that the content view
is correct. (However, the existence of “visual experiences” as Searle and many other philosophers
conceive of them is open to question: see Byrne 2009.)
The transparency account may well be salvageable if the content view is mistaken, but this will not
be explored here.

Assume, then, that visual experiences have contents, v-propositions; true
v-propositions are v-facts. Let ‘[ . . . F(x) . . . ]V’ be a sentence that expresses a
particular v-proposition that is true at a world w only if x is F in w. Read ‘You
V[ . . . F(x) . . . ]V’ as ‘You have a visual experience whose content is the proposition
that [ . . . F(x) . . . ]V.’ Then a more explicit version of SEEii is:
SEEiii If you V[ . . . F(x) . . . ]V and x is an F, believe that you see an F.


What exactly are v-propositions? Searle’s example—the proposition that there is
a yellow station wagon there—is at best a “first step,” as he says: it hardly begins
to capture the apparent scene before the eyes when one sees a yellow station
wagon. In fact, it might not even be a first step. Does the content of visual
experience concern station wagons, hawks, and the like, as such? If the ostensible
yellow station wagon is actually white, vision is surely to blame for delivering
misinformation to the subject. But what if the ostensible station wagon is a sedan?
Here there is a temptation to exonerate vision, and instead to point the finger at
the subject’s judgment that the car is a station wagon. The issue is less than clear,
and in any event disputed.11

11 See Siegel and Byrne 2016. Siegel defends the “rich view,” on which the content of visual
experience concerns station wagons and hawks as such; Byrne defends the opposing “thin view.”

Granted that visual experiences have contents, it is not disputed that the
content at least concerns what falls under the rubric of “mid-level vision” in
vision science: shape, orientation, depth, color, shading, texture, movement, and
so forth: call these sensible qualities. It will be convenient to take one side in the
dispute just mentioned, and assume that v-propositions just concern sensible
qualities; vision, strictly speaking, never delivers the information that this is a
station wagon. With this assumption, and letting ‘[ . . . x . . . ]V’ express a v-proposition
that is true at a world w only if x has certain sensible qualities in w (i.e., if x is red,
or square, . . . ), we get:
SEEiv If you V[ . . . x . . . ]V and x is an F, believe that you see an F.
Notice that because ‘F’ does not appear in the scope of ‘V,’ this is an improvement
on SEEiii. Return to an example given in section 6.1: I see a bird atop a tall tree, too
far away to make out its avian nature, which Ryle tells me is a hawk. I am
presumably not having an experience “of a hawk,” since the information available
to my visual system is too impoverished. I therefore cannot follow SEEiii. But I can
follow SEEiv, since I am having a visual experience with a content that concerns
the hawk (roughly: that x is brown, located up and to the left, . . . ), albeit a content
that does not identify it as such.
Although SEEiv is the best attempt so far, it is not good enough. Recall that
‘[ . . . x . . . ]V’ expresses an object-dependent proposition—one whose truth at
world w depends on how a certain object (namely x) is in w. Further, it is very
plausible that one can only enjoy a visual experience with such an object-
dependent content in a world in which the object exists (at some time or another).
That is, ‘You V[ . . . x . . . ]V’ entails ‘x exists.’ But if that’s right, then we are back in
the same bind that afflicted the suggestion that I can know that I see a hawk non-
observationally (see section 6.2.3). Since the existence of x is entailed by the
proposition that I V[ . . . x . . . ]V, I have two routes to the conclusion that that
object (the hawk) exists. And, as before, the account leaves open a possibility that
should be closed, namely that there are two objects, one known about through
vision, and the other known about through non-observational means.
Can these problems be avoided by denying that v-propositions are object-
dependent? The view is not well motivated. By perceiving, in particular by seeing,
one may come to know things about individual objects in one’s environment—
that that is a hawk, for example. It is thus natural to think that the information
delivered by vision is object-dependent: the testimony of one’s visual system
concerns this very hawk. Still, this alternative needs examining further.
Suppose, then, that when I see the hawk, it is not pinned down by a
v-proposition with the hawk as a constituent, but rather by a proposition that
identifies the hawk by description. (For the sake of the argument, we can ignore
the difficult question of what this description exactly is.) Here is the descriptive
counterpart of SEEiv:

SEEv If you V[ . . . (the G) . . . ]V and the G is an F, believe that you see an F.


Apart from paucity of motivation, is there anything wrong with it?
Consider a case where I think or suspect that I am suffering from an illusion.
I know that I see a hawk, but I doubt that the hawk is the way it looks. Perhaps the
hawk looks like a penguin right in front of me, and I have reason to believe that
this is the product of a devious arrangement of distorting mirrors, with the
ordinary-looking hawk being positioned behind my back. However ‘the G’ is
filled in, we may safely suppose that I do not know or believe that the hawk is the
G. SEEv is thus of no help. Nonetheless, I may know that I see a hawk in a perfectly
ordinary way: perhaps Ryle told me that this (clearly referring to the penguin-
lookalike before me) is a hawk. Since there is nothing epistemologically special
about this case, if SEEv does not explain my knowledge here, it does not explain it
elsewhere.
Even taking the ontology and epistemology of “visual experiences” for granted,
there are no easy alternatives to the transparency account. So let us revisit it.

6.2.6 Back to transparency: SEE


For the moment, shelve illusions and concentrate on veridical cases, where one
sees an object and it is as it looks. Return to the object-dependent suggestion:

SEEiv If you V[ . . . x . . . ]V and x is an F, believe that you see an F.


The problems just rehearsed are all in effect traceable to the ‘V,’ which suggests
the experiment of dropping it. And removing the ‘V’ yields a version of the
transparency account:
SEE If [ . . . x . . . ]V and x is an F, believe that you see an F.
The amodal problem of section 6.2.1 seemed to doom the transparency account.
Could v-facts rescue it?
Recall that a v-fact concerns the sensible qualities of objects in the scene before
the eyes. In one way this notion is perfectly familiar. When I see the hawk on the
fence post, a segment of the visible world is revealed: an array of colored,
textured, three-dimensional objects, casting shadows, some occluding others, at
varying distances from my body, with various illumination gradients, and so
forth. A certain v-fact just specifies that array, the scene before my eyes. If Ryle
and I strike up a conversation about the spectacular view of the North York
Moors, v-facts are a significant part of our topic.
On the other hand, giving a theoretically satisfying characterization of v-facts
is difficult. Armstrong, for instance, speaks of perceptual content as comprising
“certain very complex and idiosyncratic patterns of information about the
current state of the world” (1968: 212), while declining to be much more
specific.12 Even vision science often in effect dodges the issue with placeholders
like ‘visual representation.’ Complexity or informational richness is no doubt
part of the story, but even in the case of viewing a very simple scene—say, a red
spot against a gray background—it is unclear how to proceed. Just concentrat-
ing on one feature of the spot, its hue, the predicate ‘is red’ (or even some made-
up predicate like ‘is red29’) does not quite do it justice. The particular red hue of
the spot might be a little yellowish, or alternatively a little bluish; how exactly
information about the hue is packaged by vision is not at all obvious.13 Even
though the familiar may resist theory, fortunately for our purposes not much
theory is required.
Vision, we may say, reveals (part of) the visual world: the totality of v-facts.14
In the visual world objects are colored, illuminated, moving, and so on; it is left
open whether these visibilia are also smelly, noisy, expensive, famous, or angry.
Likewise, olfaction reveals (part of) the olfactory world: the totality of o-facts. The

12. A rare example of a more detailed account is in Peacocke 1992.
13. For a sketchy proposal about the visual representation of hue, see Byrne and Hilbert 2003: 14.
14. A qualification is needed, although one that can be ignored here: cross-modal effects show vision does not reveal the visual world unaided—other modalities sometimes help too. There are further more subtle qualifications due to multimodal perception, which can also be passed over: see Matthen 2016; O’Callaghan 2016.

olfactory world—at least, our olfactory world—is a relatively impoverished place,
consisting of odors or “vaporous emanations” (Lycan 1996: 146). The auditory
world, the world of a-facts, is somewhat richer, consisting of sounds of varying
loudness and pitch at different locations (see, e.g., O’Callaghan 2007).
One may base one’s actions and inferences on how things are in the visual
world—this just requires a sensitivity to different aspects of one’s environment.
(In particular, it does not presuppose self-knowledge.) Suppose one investigates
one’s environment, and finds that a certain v-fact, the fact that [ . . . x . . . ]V,
obtains. Vision is, at least in creatures like ourselves, an exclusive conduit for
v-facts. Hence one’s information source must be vision, not audition, olfaction,
testimony, or anything else. Although information is amodal in principle, for us
v-facts do indicate their provenance—(visual) information is practically modal.
Thus SEE apparently solves the amodal problem.
What about the puzzle of transparency? That has not gone away, because the
fact that [ . . . x . . . ]V remains stubbornly devoid of vision. That is, the hawk before
my eyes, with its rich variety of visual sensible qualities, offers no indication at all
that it is seen.
Still, SEE takes the sting out of the puzzle of transparency much as KNOW did.
Recall that the latter rule is:
KNOW If p, believe that you know that p.
Section 5.5.1 noted that KNOW is self-verifying: if one follows it, then one’s belief
that one knows that p is true. KNOW also produces safe beliefs, and so knowledge.
The puzzle of transparency thus yields to a straight solution.
SEE, in contrast, is not self-verifying: perhaps one could in principle learn that
[ . . . x . . . ]V by reading it in the—as-yet-unwritten—language of vision; one
would not thereby see x. But it is practically self-verifying: in all ordinary
situations, one knows that [ . . . x . . . ]V only if one sees x. And as far as responding
to the puzzle of transparency goes, practical self-verification will suffice.
6.2.7 The memory objection
The claim that SEE is practically self-verifying might be thought to be too
strong. Surely, if v-facts can be known, they can be remembered. Shouldn’t we
then have said: in all ordinary situations, one knows that [ . . . x . . . ]V only if
one sees or saw x? And if so, there is the following difficulty.
Suppose I see a red spot (x) at time t1. Write the relevant v-fact as ‘the fact that
[ . . . Red(x, t1) . . . ]V,’ and further suppose that I remember it. Shortly after, at t2, a
piece of cardboard is placed in front of the spot, completely occluding it; I am
quite confident that the spot itself has not changed color: the distinctive visual
way the spot was is the way the spot now is. I know (we may assume) that
[ . . . Red(x, t2) . . . ]V. Granted all this, I am in a position to follow SEE and
conclude that I see a red spot. But obviously I don’t. Why not? Either something
blocks the inference in this case, or I don’t follow SEE in any circumstances.
Once this disjunction is conceded, it is hard to avoid the second disjunct.15 But
the disjunction can easily be resisted. Although our visual memories are very
impressive, at least under some conditions, a lot of perceptual information is
lost.16 Moreover, a reasonable conjecture is that what is remembered is not a
fragment of the original perceptual information, as when a book is abridged by
omitting certain sentences and paragraphs. Storing something in memory is
more analogous to writing a précis or synopsis than to cutting and pasting. The
information in a synopsis may be easily told apart from the information in an
excerpt—the synopsis might replace specific terms with general ones, or the first
person with the third, or the dates of events with their order in time. An accurate
synopsis is not just a degraded (i.e. logically weaker) version of the original; it may
also substitute logical equivalents, for example replacing ‘The cat walked in an
ellipse with eccentricity zero’ with ‘The cat walked in a circle,’ thus (as we can put
it) transforming the original. We may suppose, then, that perceptual information
stored in memory is a degraded and (perhaps) transformed version of the
original.17
Assuming that the degradation and transformation of visual information in
memory makes the remembered facts disjoint from v-facts, what one remembers
when one sees the red spot is neither the fact that [ . . . Red(x, t1) . . . ]V nor any
other v-fact. It is information that is closely related to the fact that [ . . . Red(x,
t1) . . . ]V, but is not itself a v-fact. Hence what I know when I look at the
cardboard occluding the spot does not put me in a position to follow SEE and
conclude that I see a red spot. The memory objection thus fails.18

15. Could the fact that the cardboard “occludes” the spot block the inference? No. If ‘occludes the spot’ means ‘prevents me from seeing the spot,’ this just raises the question how I know the cardboard occludes the spot. On the other hand, if it means ‘is opaque and in front of the spot,’ then my knowing this fact does not explain why I do not follow SEE. Suppose I can in fact see the spot, due to some devious arrangement of mirrors, or because I have suddenly gained Superman’s ability to see through walls. Despite knowing that the cardboard is opaque and in front of the spot, I would follow SEE and conclude that I see it.
16. See, e.g., Brady et al. 2013.
17. Episodic autobiographical memory often involves a change from one’s own point of view (the “field perspective”) to that of an external observer (the “observer perspective”) (see, e.g., Sutin and Robins 2008). If we (improbably) suppose that this introduces no inaccuracy, this is arguably a kind of transformation of the original perceptual information, effected by switching reference frames.
18. Memory is taken up again at greater length in Chapter 8.

6.2.8 Evans again, and the known-illusion problem


So far we have concentrated on the veridical case: I see the hawk and it is as it
looks. Let us now return to illusions. To give some examples more realistic than
the one mentioned at the end of section 6.2.5: the hawk looks closer than it really
is, or a shadow appears as a patch of darkened green on the field beyond, or the
hawk is perching on a wall that generates Richard Gregory’s “café wall illusion.”19
In such cases, the fact I seem to apprehend, that [ . . . x . . . ]V, where x=the hawk, is no
fact at all. Still, I can easily discover that I see a hawk, just as I did in the original
veridical example.
If I do not know that I am illuded, this case presents no difficulty. Recall
that one tries to follow the rule ‘If conditions C obtain, believe that p’ iff one
believes that p because one believes that conditions C obtain. If one follows a
rule, one tries to follow it, but not conversely (section 5.2.5). One cannot follow
SEE if the relevant v-proposition is false, but one can try to follow it. And in the
illusory example of the previous paragraph, if I try to follow SEE, then I will likely
end up with a safe belief that I see a hawk, for essentially the same reasons
as before.
The problem, rather, is similar to the one faced by SEEv at the end of section 6.2.5,
and concerns the case when I know (or believe) that I am illuded. The method I use
to discover what I see does not obviously alter when I know (or believe) that the
hawk isn’t the way it looks: I can still know that I see it by attending to the hawk. If
the transparency procedure applies at all, it surely applies unmodified across the
board. But if I don’t believe the relevant v-propositions, I cannot even try to follow
SEE. Hence cases of known-illusion threaten to blow the present proposal entirely
out of the water.
6.2.9 Evans’ proposal
Recall the quotation from section 6.2, where Evans is explaining how someone
may gain knowledge of how things perceptually appear by “re-using precisely
those skills of conceptualization that he uses to make judgements about the
world.” The quoted passage contained an elision, and it is time to restore it.
Here are the crucial sentences:
[The subject] goes through exactly the same procedure as he would go through if he were
trying to make a judgement about how it is at this place now, but excluding any
knowledge he has of an extraneous kind. (That is, he seeks to determine what he would
judge if he did not have such extraneous information.) (1982: 227–8)

19. See http://en.wikipedia.org/wiki/Café_wall_illusion.

Consider the following case. I am staring at what I know to be a gray patch on a
green background. Because of a color-contrast effect, the patch will look slightly
reddish. Since I am aware of the effect, I do not believe the relevant v-proposition,
that [ . . . Reddish(x) . . . ]V, where x=the patch. I know that I see a gray patch, but
cannot know this by trying to follow SEE.
Evans’ remarks suggest the following two-step alternative. First, I verify a
certain counterfactual truth: if I had not known extraneous facts, I would have
judged that [ . . . Reddish(x) . . . ]V. That tells me that x looks reddish, and so
that I see x. I then add in the fact that x is a gray patch, and conclude from this
that I see a gray patch.
One immediate problem with this suggestion turns on the notion of “know-
ledge of an extraneous kind.”20 The effect of excluding extraneous knowledge is
intended to make me rely exclusively on the testimony of vision, but it cannot be
characterized as “facts I know other than by current vision” on pain of circularity.
Could an extraneous piece of knowledge be characterized simply as something
that I previously knew about the patch? Then the counterfactual to be verified is
‘If I hadn’t known anything about the patch beforehand, I would have judged that
[ . . . Reddish(x) . . . ]V.’ This suggestion has a number of problems. First, it is quite
implausible that a counterfactual of this sort will always be true in every case, or
that I will judge that such a counterfactual is true.21 Second, intuitively it gets
things back to front. If I do know that the counterfactual is true, then isn’t this
because I know the patch looks reddish? Finally, in bringing in sophisticated
counterfactual judgments about my own mind, the attractive idea that I can know
that I see a hawk merely by attending to the hawk has been thrown overboard.
6.2.10 Belief-independence
The known-illusion problem is entirely generated by the widespread assumption
that, as Evans puts it, there is
a fundamental (almost defining) property of states of the informational system,22 which
I shall call their ‘belief-independence’: the subject’s being in an informational state is

20. More generally, it should be belief, not just knowledge.
21. For example, suppose one has a known-illusion of motion by viewing Kitaoka’s “rotating snakes” figure (https://en.wikipedia.org/wiki/Peripheral_drift_illusion). Assume, with Evans, that one does not believe that anything in the figure is moving. If one hadn’t known anything about the figure beforehand, would one have judged that anything in the figure were moving? That depends. The figure and the motion both look so unusual that a sensible person might well smell a rat. (Cf. Jackson 1977: 40–1.)
22. Which subserves “perception, communication, and memory” and “constitutes the substratum of our cognitive lives” (122).

independent of whether or not he believes that the state is veridical. It is a well-known
fact about perceptual illusions that it will continue to appear to us as though, say, one line
is longer than the other (in the Müller-Lyer illusion), even though we are quite sure that
it is not. (123)23

Put in the present notation: even though one’s visual system may (mis-)inform
one that [ . . . x . . . ]V, one may nonetheless resist its testimony and not believe this
v-proposition.
But is it true that perception is belief-independent? Evans’ correct observation
about the Müller-Lyer illusion does not immediately establish this conclusion.
He notes that it may appear to one that the lines are unequal even though one
believes they are equal. For belief-independence to follow, it must also be
assumed that if one believes that the lines are equal, one does not also believe
that they are unequal. And since having contradictory beliefs is a familiar
phenomenon, this assumption needs to be backed up with an argument.
Let us call the view that vision constitutively involves belief in the relevant
v-proposition belief-dependence. (Belief-dependence is, more or less, the “judge-
mental theory of perception” defended in Craig 1976.) Belief-dependence is not,
it should be emphasized, the view that to enjoy visual appearances is simply to
have beliefs of a certain sort. (For a reductive theory along these lines, see
Armstrong 1961: ch. 9.) Neither is it the view that perception can be analyzed
partly in terms of belief. In these respects, belief-dependence is analogous to the
view that knowledge constitutively involves belief: that does not imply that
knowledge is belief, or that it can be analyzed partly in terms of belief. Although
the passage from Evans does not conclusively establish that belief-dependence is
false, it might be thought that the idea that one has contradictory beliefs in cases
of known-illusion is implausible. So can anything positive be said in favor of
belief-dependence?
Here are four considerations.24
First, perception is clearly belief-like—which is why Armstrong-style attempts
to reduce perception to belief were certainly worth trying. Perception compels
belief: the visual appearance of unequal lines is accompanied by the belief that the

23. There is a slight infelicity in this passage. Evans has ‘believes that the state is veridical’ where it would have been better to write ‘believes that p,’ where the proposition that p is (on his view) the “conceptualized” version of the content of the experience.
24. A fifth consideration is that only belief-dependence can explain why experience is epistemically significant (Byrne 2016); to examine this here would take us too far afield. It is worth noting that belief-dependence implies that the perception has “conceptual content,” at least on one way of understanding that thesis. (See, e.g., Van Cleve 2012.)

lines are unequal, absent (apparent) evidence to the contrary. As A. D. Smith puts
it, “How can I disbelieve my senses if I have nothing else to go on?” (2001: 289).
And perception has the same “direction of fit” as belief: false beliefs and illusory
perceptions are mental states that are both failures, in some (admittedly some-
what obscure) sense. Belief-dependence explains both these features. Compulsion
is explained simply because the visual appearance of unequal lines is always
accompanied by the belief that the lines are unequal. Sometimes that belief will
not be manifest because it is suppressed by the contrary belief that the lines are
equal; remove that contrary belief, and one will have an unsuppressed belief that
the lines are unequal, that will manifest itself in the usual way. And direction of
fit is explained because the failure of a constitutive component of a perceptual
state presumably implies the failure (or less than complete success) of the state
as a whole.
Second, belief-dependence dovetails nicely with the evolution of perceptual
systems, at least if one is unmoved by the reluctance some philosophers have felt
to attribute beliefs to other animals. There are non-human perceivers (perhaps
other primates) who, it is safe to assume, cannot resist the testimony of their
senses; for them, seeing is always believing. These animals betray no hint that
belief is causally downstream from experience—accordingly, belief-dependence
is a perfect fit for them. But once belief-dependence is conceded in this case, given
the way evolution works, one would expect that cognitively sophisticated crea-
tures like ourselves have simply developed the ability to inhibit the beliefs which
are constitutive of perceptual experience. We haven’t evolved a new kind of
perceptual experience that does not constitutively involve belief; we have just
overlain an inhibitory mechanism on the old kind.
Third, consider the quite remarkable fact that numerous long-and-not-so-
long-dead philosophers claimed to believe the deliverances of vision in cases of
illusion. “When I see a tomato,” H. H. Price famously declared, “there is much
I can doubt. I can doubt whether it is a tomato that I am seeing, and not a cleverly
painted piece of wax. I can doubt whether there is any material thing there at
all. Perhaps what I took for a tomato was really a reflection, perhaps I am even
the victim of some hallucination. One thing however I cannot doubt; that
there exists a red patch of a round and somewhat bulgy shape, standing out
from a background of other color patches, and having a certain visual depth”
(1932: 3). On the orthodox view, when the plain man discovers that he is subject
to a devious color-illusion, his former belief that there is a bulgy red patch before
him vanishes. So why, on the orthodox view, do careful students of appearances
like Price insist that they have not changed their minds? Are they insane?
If belief-dependence is right, they are simply honest.

Fourth, belief-dependence need not, unlike the orthodox view, involve any
novel psychological mechanisms. There is no need for a sui generis process of
“taking experience at face value” (Peacocke 1983: 39), which may occasionally be
inhibited: experience comes with “taking at face value” built in. Admittedly,
belief-dependence requires a way of inhibiting one of a pair of contradictory
beliefs, but this can be modeled along the lines of delusory beliefs, which are to a
significant extent inferentially isolated, and persist despite evidence to the con-
trary. For instance, someone suffering from the Capgras delusion may believe
that their spouse has been replaced by an imposter. But that belief will typically be
largely disconnected from the rest of the subject’s world view: “Most Capgras
patients do not show much concern about what has happened to the real
relatives; they do not search for them, or report their disappearance to the police”
(Stone and Young 1997: 333).
Assuming belief-dependence is correct, one believes the relevant v-proposition
in a case of known illusion, and so one is in a position to (try to) follow SEE.
Therefore the known-illusion problem does not arise.25
If an ideal of rationality is avoidance of inconsistency, then belief-dependence
implies that someone who suffers a perceptual illusion thereby falls short of the
rational ideal. (As Craig 1976: 15–16 points out; see also Glüer 2009: 303, n. 10.)
Can this be turned into a convincing objection?
No. It will not do simply to claim that the illuded subject is not, or need
not be, irrational. Taken as a claim about a rational ideal, its truth is not
evident. Taken as an ordinary sort of remark, on the other hand, it is true but
not in conflict with belief-dependence. The belief that the subject knows to be
false (e.g., a certain v-proposition that is true only if the lines are unequal)
does not influence her verbal reports about the lengths of the lines, or any
plans for action based on the lengths of the lines. She is not therefore
“irrational” in the practical sense of an ordinary accusation of irrationality.
The subject’s belief that the lines are unequal does little harm—at worst, it
would make her a sense-datum theorist. Indeed, if the transparency account is
correct, it actually does some good, by allowing the illuded subject to know
what she sees.26

25. Further support for belief-dependence can be found in Craig 1976 and Smith 2001. For a defense of the view that delusory “beliefs” are the genuine article, see Stone and Young 1997: 351–9, Bayne and Pacherie 2005, Bortolotti 2010.
26. Glüer, who thinks that belief-dependence founders on this sort of consideration, asserts that “there is nothing ‘irrational’ about the lines looking of different length” (2009: 303, n. 9). But she does not explain why this is true on the required reading of ‘irrational.’

6.3 Sensation
Assuming that sensation is a special case of perception, the epistemology of one’s
sensations is a special case of the epistemology of one’s perceptions. As might be
expected, there are some interesting complications. But before getting to them,
the case for the perceptual theory of sensation needs rehearsing. As with (para-
digmatic) perception, a particular example will help focus the discussion. Let us
take the philosopher’s favorite: pain.

6.3.1 Pain perception


When one feels a pain in one’s foot, one is aware (or, more cautiously, seems to be
aware) of a distinctive kind of event or happening in one’s foot—the object of one’s
awareness lasts for a period of time and, like events such as thunderstorms and forest
fires, may wax and wane in intensity and unpleasantness. We may appropriately label
it a painful disturbance, or disturbance for short. One may attend to disturbances, as
one may attend, via audition and vision, to claps of thunder and flashes of lightning. It
is thus natural to initially classify one’s awareness of disturbances as a kind of
perceptual awareness, and according to the perceptual theory of pain, it is. (Problem-
atic cases like phantom limb pain will be set aside until the end of the following
section.) Pain perception is interoception—perception specialized for delivering
information about one’s own body, like proprioception and the vestibular sense.
Perception involves dedicated mechanisms of sensory transduction that result
in the delivery of ecologically useful information about the perceiver’s environ-
ment. The extra-bodily environment is not the only part of the environment of
interest—the state of the perceiver herself is also important. A perceiver can turn
her external senses toward herself: one can check the position of one’s feet by
looking, or palpate a lump on one’s head. Other more direct methods are useful,
and so it is no surprise that there are specialized mechanisms for this purpose:
proprioception informs us about the position of our limbs, and the vestibular
sense informs us about our balance. Such information about our bodily states is
crucial for successful action, of course.
Similarly, it’s no surprise that a car has systems designed to detect its internal
states (the gas gauge and tachometer, for example)—they are useful for much the
same reason. Further, the gas gauge (measuring the internal environment) and
the outside air temperature sensor (measuring the external environment) deliver
their proprietary messages in the same calm unobtrusive manner, not making
much of a song and dance about it.
Sometimes problematic conditions arise. Some demand immediate action—
black ice, loss of oil pressure. Others require one to drive carefully until the
problem can be fixed—a near-empty tank, a broken exterior light. Cars have
systems to detect these conditions as well, and it’s no surprise that here the
message delivery is more strident and eye-catching—the dashboard warning
light. (There is even the maximally vague “check engine” light, apt to induce a
disturbing sense of unease. Something is wrong, but what?)
Pursuing the parallel, one might expect organisms to have their own versions
of dashboard warning lights, and the obvious candidates are pains. And since a
dashboard warning light is not just a pretty glow, but a messenger, like the
speedometer, these armchair biological reflections motivate a perceptual view
of pain. As the English neurophysiologist Charles Sherrington put it in his classic
The Integrative Action of the Nervous System:

With its liability to various kinds of mechanical and other damage in a world beset by
dangers amid which the individual and species have to win their way in the struggle for
existence we may regard nocuous stimuli as part of a normal state of existence. It does not
seem improbable, therefore, that there should under selective adaptation attach to the skin
a so-to-say specific sense of its own injuries. As psychical adjunct to the reactions of that
apparatus we find a strong displeasurable affective quality in the sensations they evoke.
(1906: 227–8)

If the perceptual theory of pain is right, then there should be dedicated receptors
for transducing noxious stimuli into electrical energy and conveying the signal to
the brain, just as there are dedicated mechanisms for transducing light and
sound, for example. And it was Sherrington who inferred their existence on the
basis of experiments with “spinal” (decerebrate) animals, described in The
Integrative Action of the Nervous System, showing that specific noxious stimuli
produced certain reflexes; he called them nociceptors. Direct confirmation of his
prescience had to wait until the 1960s, when new recording techniques demon-
strated how different nociceptors responded to different stimuli. (For the history,
see Perl 1996.) Some nociceptors are sensitive to mechanical stimuli, others to
heat or cold, and yet others to chemical stimuli. Their axons come in two
varieties: fast-conducting myelinated Aδ axons and the slower-conducting un-
myelinated “C-fibers” of philosophical lore. Corresponding to these two kinds of
axons are sharp pains felt immediately after injury (via the Aδ axons), and the
longer-lasting pain felt subsequently (via C-fibers).
A final consideration: on one central use, ‘feel’ is a straightforward perceptual
verb, as in ‘I feel a stone in my shoe.’27 ‘I feel a pain in my foot,’ on the face of it, is
a similar report. The stone is the object of one’s perception when one feels a stone

27. For a discussion of the semantics of ‘feel,’ see Brogaard 2012.

in one’s shoe, so presumably the pain in one’s foot is the object of one’s
perception when one feels a pain in one’s foot. (Consistently with this, of course,
stones and pains may be utterly different sorts of things.)
In short, phenomenology, armchair biology, neuroscience, and semantics
together produce a perfect storm of agreement, revolving around the perceptual
theory of pain.28
6.3.2 PAIN and the world of pain
Section 6.2.6 introduced the visual, auditory, gustatory worlds—the world as
ostensibly revealed by vision, audition, or gustation. Given the perceptual theory
of pain, to this we may add the world as revealed by nociception, the world of
pain.
The world of pain is the totality of p-facts, concerning qualities of painful
disturbances occurring in the bodies of animals. A p-fact is a true p-proposition,
which we can write as ‘[ . . . x . . . ]P,’ where x is a disturbance. Like the visual world,
this notion is in one way perfectly familiar, although giving a theoretical charac-
terization of it is difficult. When I feel a pain in my foot, the curtain is raised on a
segment of the world of pain: a pulsating unpleasant occurrence in my foot. The
fact that [ . . . x . . . ]P specifies that segment, the scene before my nociceptors. If
Ryle and I strike up a conversation about our gout, p-facts are a significant part of
our topic.
If SEE is the right rule for seeing, the right rule for pain must be this:
PAIN If [ . . . x . . . ]P, believe that you feel a pain.
And, for the case of feeling pain in a particular body part:
PAIN-FOOT If [ . . . x . . . ]P, and x is in your foot, believe that you feel a pain in
your foot.
Just as with vision, there is an amodal problem for pain, albeit a less obvious one. To
dramatize it, imagine that you are one of a pair of conjoined twins. You share a
foot, although in fact only your twin feels pain in it. You see a hammer drop on
the foot, and thereby come to know that there is a pain in the foot—moreover,
your foot. However, you would not be at all inclined to conclude that you feel a
pain in your foot. That is:
PAIN-FOOT* If a pain is in your foot, believe that you feel a pain in your foot

28. Perceptual theorists include Armstrong (1962), Pitcher (1970), Hill (2005), Tye (2005a), and Bain (2007). The dissenters should be acknowledged, notably Aydede (2009).

is not the rule that explains how you know that you feel a pain in your foot. In the
case of vision, the key to the solution was the visual world. The case of pain is
parallel.
One may base one’s actions and inferences on how things are in the world of
pain. This just requires a sensitivity to different aspects of one’s (internal)
environment, and does not presuppose self-knowledge. Suppose one investigates
one’s internal environment (in particular, one’s foot), and finds that a certain
p-fact, the fact that [ . . . x . . . ]P, obtains. Nociception is, at least in creatures like
ourselves, an exclusive conduit for p-facts. Hence one’s information source must
be nociception, not vision, proprioception, testimony, or anything else. Although
information is amodal in principle, for us p-facts do indicate their provenance—
(nociceptual) information is practically modal. Thus PAIN-FOOT solves the amo-
dal problem.
What about the puzzle of transparency? A sensible perceptual theory of
anything must deny that esse est percipi: what is perceived can exist unperceived.
So the puzzle of transparency has not gone away, because the fact that [ . . . x . . . ]P
remains stubbornly devoid of nociception. That is, the pain in my foot, in its
pulsating unpleasantness, offers no indication at all that it is felt.
Still, the sting is drawn from the puzzle in (what is now) a familiar manner.
PAIN (or PAIN-FOOT) is not self-verifying: perhaps one could in principle learn
that [ . . . x . . . ]P by reading it in the—as-yet-unwritten—language of nociception;
one would not thereby feel x (a pain). But it is practically self-verifying: in
all ordinary situations, one knows that [ . . . x . . . ]P only if one feels x. And as
far as responding to the puzzle of transparency goes, practical self-verification
will suffice.
The first-person epistemologies of seeing a hawk and feeling a pain are
fundamentally alike, then. Unfortunately, this result, far from being welcome,
seems absurd. Suppose I feel pain in my phantom foot, and it is so excruciating
that I also visually hallucinate a hawk on a nearby fence post. If the relevant visual
content is the v-proposition that [ . . . x . . . ]V, there is either no such thing as x or
else it is not a hawk.29 There is thus no hawk for me to see, and a fortiori I cannot
know that I see a hawk by following SEE or by any other method. So far, so good,
but my phantom pain is surely quite different. If the relevant nociceptive content
is the p-proposition that [ . . . x . . . ]P, there is either no such thing as x or else it is

29. Exactly what proponents of the “content view” (see section 6.2.5) should say about hallucination is a vexed matter, which we fortunately need not pursue here. For different answers, see, e.g., Schellenberg 2010, Tye 2014. Two prominent non-content accounts of hallucination are in Johnston 2004 and Martin 2006.

not a disturbance. There is thus no pain (disturbance) for me to feel, and a
fortiori I cannot know that I feel a pain by following PAIN or by any other method.
But I do feel a pain, and know that I do!
The difficulty is this. The transparency approach to the epistemology of pain
needs the perceptual theory of pain. But the perceptual theory appears to be
incompatible with the evident impossibility of pain hallucinations—phantom
limb pain is pain! Either the incompatibility is only apparent, or else the
perceptual theory must be rejected, with the transparency approach joining it
in the dustbin.30
6.3.3 Perceptual theorists on the objects of pain perception
In fact, not only is the perceptual theory compatible with the impossibility of pain
hallucinations, but making it so was a clear motivation of the early perceptual
theorists, Pitcher and Armstrong. Looking back at section 6.3.1, the perceptual
theory was stated in terms of disturbances—those are the objects of pain percep-
tion, according to the perceptual theory of pain. Naively, one might suppose that
disturbances are nothing other than pains, and indeed this was supposed in
section 6.3.2. Hence our difficulty: once the perceptual theory is accepted,
hallucinations of disturbances must be accepted too. But Armstrong and Pitcher
rejected the identification of disturbances with pains.
Here, for example, is the opening sentence of Pitcher’s early article defending
the perceptual theory:

I shall defend the general thesis that to feel, or to have, a pain, is to engage in a form of
sense perception, that when a person has a pain, he is perceiving something.
(1970: 368, emphasis added)

That “something,” it turns out, is not a pain; in our terminology, Pitcher thinks
disturbances are not pains.31 Pitcher’s article is called ‘Pain perception,’ but
Pitcher denies that this eponymous perception is the perception of pain.
Pitcher need have no quarrel with the motivation for the perceptual theory
given in section 6.3.1, except for the last semantic part. ‘Feeling a pain,’ he thinks,
is grammatically misleading: it is analogous to ‘catching a glimpse’ (374–7).
Catching a glimpse is not like catching a fish. To catch a glimpse of X is to briefly
see X; it is not to stand in the catching relation to a peculiar entity, a glimpse.
Thus the “real unit of analysis” is the act of catching a glimpse, not a glimpse

30. Recall Bar-On’s objection in section 4.5.
31. In Pitcher’s terminology, the object of pain perception is a “disordered bodily state” (372, emphasis added), but event-talk is more phenomenologically apropos.

itself. Similarly with pains: the real unit of analysis is not a pain but “the act
(or state) of feeling a pain” (379).
Why are pains not disturbances? One can hallucinate a pink rat in the
complete absence of pink rats but, Pitcher says, one cannot hallucinate pain in
the complete absence of pain. On any credible version of the perceptual view, at
least some cases of phantom limb pain are hallucinations—the person seems to
feel a disturbance in his foot, but there is no such foot and no such disturbance.
Yet “we do not say he does not really feel any pain at all” (382). Pains, then,
cannot be disturbances, the objects of pain perception.
So what are pains? Pitcher is unclear on this point. His claim that “[n]either
pains nor glimpses are to be simply identified with anything at all” (378) suggests
that really there are no such things as pains (or glimpses)—any appearance to
the contrary is an artifact of grammar. On the other hand, he says “pains exist
when, and only when, they are being felt (had)” (and, similarly, “glimpses exist
when, and only when, they are being caught”) (378). Since Pitcher certainly
holds that people feel pains, he is committed to the existence of pains; likewise
for glimpses.
Here Pitcher’s glimpse analogy is helpful. ‘Catching a glimpse of X’ has a near-
equivalent with ‘glimpse’ used as a verb, namely ‘glimpsing X.’ Sometimes a
dynamic verb ‘to F’ has a corresponding noun ‘F’ that refers to the act of F-ing:
‘sneeze’ refers to the act of sneezing, ‘poke’ to the act of poking, ‘kiss’ to the act of
kissing. A sneeze is not some entity that is merely contingently connected with
sneezing, as a bump (interpreted as protuberance) is contingently connected
with bumping. (Bumps on the head are often caused by bumping, but they
sometimes aren’t.) And ‘glimpse’ is a natural candidate for this sort of treatment:
a glimpse is simply an act of glimpsing. And if so, glimpses are entities in
perfectly good order, or at least no more dubious than sneezes and kisses. On
this view, ‘S catches a glimpse of X’ is an idiomatic way of saying that an act of
glimpsing X by S occurred. So, just as Pitcher says, glimpses exist (i.e. occur)
when, and only when, they are being caught, but this is not some interesting
metaphysical discovery about a strange class of entities. It simply amounts to
saying that a glimpse, a.k.a. an act of glimpsing, exists iff someone is glimpsing
something.
If ‘pain’ is in this respect like ‘glimpse,’ then ‘pain’ should refer to an act of
some sort, and it must be “the act . . . of feeling a pain.” So, just as Pitcher says,
pains exist when, and only when, they are being felt, but this is not some
interesting metaphysical discovery about a strange class of entities. It simply
amounts to saying that a pain, a.k.a. an act of feeling a pain, exists iff someone is
feeling a pain.

But what is it to “feel a pain”? Someone who feels a pain in her phantom hand
is not feeling a disturbance. She does, however, seem to feel a disturbance, or feel
as if there is one. Seeming to feel a disturbance is also present in a case of ordinary
pain in the hand, where one really does feel a disturbance. To feel a pain, then, is
to seem to feel a disturbance, to be in the kind of perceptual state that covers both
veridical perception and hallucinations of disturbances.
Cast in the favored contemporary terminology of ‘experiences,’ ‘pain’ refers
not to the objects of pain-experiences, but to pain-experiences themselves.
Although Pitcher doesn’t flatly endorse this view, it is very close to the surface of
his paper. Armstrong, another pioneer of the perceptual theory, is more explicit.
As he puts it in Bodily Sensations, pains (and sensations in general) are “sense-
impressions of our bodily state” (1962: 102).32
If Armstrong and Pitcher are correct, then the perceptual theory is compatible
with “phantom limb pain” being the genuine article. Admittedly, the phantom
limb sufferer is not perceiving a disturbance, but she really is feeling a pain. This
is because to “feel a pain” is not to stand in a perceptual relation to a pain—it is to
seem to stand in a perceptual relation to a disturbance, or to enjoy a painful
experience or sense-impression.
Return now to our difficulty. I feel a pain in my phantom foot, yet there is no
relevant disturbance in my body. The content of my nociceptive experience, the
p-proposition that [ . . . x . . . ]P, is not true, and there is either no such thing as x or
else it is not a disturbance. Still, I feel a pain and may come to know that I do by
inferring this from the premise that [ . . . x . . . ]P; that is, by trying to follow PAIN.
Given that ‘I feel a pain’ is true just in case I seem to (nociceptively) perceive a
disturbance, and that believing that [ . . . x . . . ]P practically guarantees that I seem
to perceive a disturbance, my conclusion that I feel a pain will invariably be true
and very likely amount to knowledge. The Armstrong-Pitcher version of the
perceptual theory seems to dissolve the present difficulty, if it is true. But it isn’t
true. And even if it is, it doesn’t help.
6.3.4 Back to naiveté
One problem for Armstrong and Pitcher starts from the unsurprising fact that
salient kinds of perceptual objects have a proprietary vocabulary: we can identify
an object of audition as a loud noise, an object of vision as a red blob, an object of
olfaction as an acrid smell, and so forth. Disturbances are hardly lacking in
salience, yet on the Armstrong-Pitcher view they are an exception to this
generalization, hence the need for some technical vocabulary like ‘painful

32. See also Armstrong 1968: 314.

disturbance.’ It is mystifying why English (and, presumably, other natural languages) is expressively limited in this way.
Second, although Armstrong and Pitcher maintain that ‘pain’ does not refer to
disturbances, they can hardly deny that we can refer to them at all. In a perfectly
ordinary case of “pain in the foot,” there is a disturbance spatially located in the
foot. Since one is perceptually aware of it, it can be picked out with a demon-
strative; since one’s perceptual awareness is not vision, audition, olfaction, or
gustation, ‘feel’ is the appropriate verb. If you said ‘That is in my foot,’ or ‘I feel it’
(referring to the disturbance), you would be correct. However, if you continued
‘ . . . and it is a throbbing pain,’ you would be incorrect! The throbbing pain, if
it is located anywhere at all, is in the brain, not the foot. Armstrong and Pitcher
save the truth of ‘I feel a pain’ for the phantom limb sufferer, but in doing so
merely move the counterintuitive falsity elsewhere.33
Third, it is hard to gainsay the semantic appearances.34 ‘I felt the needle and
then a sharp pain’ seems perfectly in order, which strongly indicates that ‘feel’
has its generic perceptual sense when complemented with ‘a pain.’ There is a
general methodological point here. Contemporary philosophy is littered with
failed convoluted semantics, wheeled in to save “commonsense,” “intuitions,” or
under-motivated theories. This strategy is something the philosophical toolkit
would be better off without.
Finally, denying that ‘pain’ refers to disturbances doesn’t solve the problem of
phantom limb pain; at best, it only solves a superficial linguistic version of the
problem. This is because the (apparent) absurdity does not need to be stated
using ‘pain’: it seems equally absurd to think that there could be hallucinations of
disturbances. Imagine you start off feeling a disturbance in your foot and, via a
“subjectively seamless transition” (Johnston 2004: 114), end up hallucinating one,
just as you might start off perceiving a tomato and end up hallucinating one. To
you, it is as if the disturbance continues to throb in your foot throughout this
period; in fact, the disturbance ended a few minutes ago. Could the disturbance
that is manifestly in your foot be nothing? Many philosophers, at any rate, have
in effect taken hallucinations of disturbances to be flatly impossible.

33. An objection (Block 1983: 517) to the view that pains can be spatially located in feet is this. The view wrongly predicts that the argument ‘There’s a pain in my foot, my foot is in my sock, so there’s a pain in my sock’ is valid. Short reply (one moral from a debate between Noordhof and Tye): the objection proves too much, because holes can be spatially located in socks, and yet ‘There’s a hole in my sock, my sock is in my shoe, so there’s a hole in my shoe’ is also invalid. See Noordhof 2001, 2005; Tye 2005a, 2005b; and also Hyman 2003: 20–1.
34. See also Hyman 2003: 6–15.

The way out is simply to deny that the phantom limb pain sufferer feels a pain.
This is not some desperate ad hoc maneuver—the strong inclination to resist is
only to be expected, given the thesis of belief-dependence. Belief-dependence
predicts that when nociception seems to disclose a segment of the world of pain,
one will believe that that is how things are. Indeed, the apparent absurdity is
independent confirmation of that thesis. Of course, this is not to dismiss phan-
tom limb pain sufferers as malingerers—seeming to feel a pain is just as agonizing
as really feeling one. And with pains put in their proper place, in feet and knees,
the transparency account is just as plausible for nociception as it is for vision.
We have now reached this book’s Rubicon. Is this where transparency runs
out, perhaps ceding the vast remainder of our mental lives to the inner-sense
theory? Chapter 7 begins by arguing that there is no going back.

7 Desire, Intention, and Emotion

Truth is the object of judgment, and good is the object of wanting; it does not
follow from this either that everything judged must be true, or that every-
thing wanted must be good.
Anscombe, Intention

7.1 Introduction
Chapters 5 and 6 argued that Evans had the epistemology of belief and perception
broadly right (sensation, being a form of perception, turns out to be covered too).
One comes to know that one believes that it’s raining and that one sees a duck by
a groundless inference from a worldly counterpart premise about, respectively,
the rain and the duck. The puzzle of transparency presents an initial obstacle, but
what does not kill us makes us stronger: when the puzzle is solved, the transpar-
ency procedure can be seen to explain both privileged and peculiar access. The
resulting transparency account of self-knowledge is inferentialist, detectivist, and
economical.
So far, so good. But the prospects of extending the transparency account to
other mental states might seem hopeless. Take knowledge of one’s own desires,
and return to an example mentioned in section 1.2. What in my environment
could clue me in to the fact that I want a beer? Clearly neither the presence nor
absence of beer will do. I may believe that free beer is round the corner yet not
have the slightest inclination to think I want a beer. In Moran’s terminology
(2001: 61), what could the question Do I want a beer? possibly be transparent to?
Nothing, apparently.
Reflections like these quickly lead to a broad pessimistic lesson:

[W]e can answer . . . questions about current desires, intentions, and imaginings, ques-
tions like: ‘What do you want to do?’; ‘What are you going to do?’; ‘What are you
imagining?’ Our ability to answer these questions suggests that the ascent routine strategy
[in effect, the transparency procedure1] simply cannot accommodate many central cases
of self-awareness. There is no plausible way of recasting these questions so that they are
questions about the world rather than about one’s mental state. As a result, the ascent
routine strategy strikes us as clearly inadequate as a general theory of self-awareness.
(Nichols and Stich 2003: 194)2

Section 7.2 argues that if the transparency procedure applies to belief and
perception, it must apply across the board, thus giving “a general theory” of
self-knowledge. So either the account given in Chapters 5 and 6 is wrong or else,
pace Nichols and Stich, it can be extended after all. The following sections
(7.3–7.6) then sketch the needed extensions for desire, intention, and emotion.
Next and finally, Chapter 8 does the same for memory, imagination, and thought.

7.2 The case for uniformity


The issue is the status of what Boyle calls the Uniformity Assumption, “the
demand that a satisfactory account of our self-knowledge should be fundamen-
tally uniform, explaining all cases of ‘first-person authority’ in the same basic
way” (2009: 141). As noted at the start of Chapter 3, both Davidson and Moran
deny the Uniformity Assumption. As Boyle defends a version of Moran’s
account, he denies it too.3
However, there are strong considerations in favor of the Uniformity Assump-
tion. If the epistemology of mental states is not uniform, then dissociations are to
be expected. Suppose that a transparency account is correct for knowledge of our
beliefs, and that an inner-sense account is correct for desires. Then one would
expect to find a condition in which this faculty of inner sense is disabled, sparing
the subject’s transparent capacity to find out what she believes. Her knowledge of
what she believes is similar to ours, but knowledge of her own desires can only be
achieved by “third-person” means. Yet such conditions do not seem to occur. In
this respect self-knowledge is unlike environmental knowledge, knowledge of our
immediate environment. As it happens, environmental knowledge is achieved
through a diverse range of largely independent capacities—vision, audition,
olfaction, and so on. Accordingly, dissociations are common: one’s vision can

1. ‘Ascent routine’ is a phrase of Gordon’s (1996), whom Nichols and Stich are specifically criticizing.
2. See also Finkelstein 2003: postscript, Bar-On 2004: 114–18, Goldman 2006: 240.
3. See also Schwitzgebel 2012, Samoilova 2016.

be impaired or absent, for instance, sparing one’s capacity to find out about
sounds and smells.
The Uniformity Assumption is consistent with extravagance, but a related
failure of dissociation indicates that the correct theory of self-knowledge is also
economical. As Shoemaker acutely observed, “self-blindness” is not an actual
condition (see section 2.2.9): there are no individuals who have only third-person
access to their mental lives, with spared rational and other epistemic capacities.
The obvious explanation of this fact is that rationality and other epistemic
capacities are all that is needed for self-knowledge. And since the world-to-
mind approach seems to be the only economical theory of self-knowledge that
could explain both privileged and peculiar access, this is a reason for taking it to
apply across the board.
Contrariwise, if the epistemology of some mental states cannot be forced into
the world-to-mind mold, then the transparency account for belief and perception
is on shaky ground.4

7.3 Desire and DES


In fact, the quotation from Nichols and Stich is much too hasty. Although it might
superficially appear that “there is no plausible way of recasting” a question about
one’s desires as a “question about the world,” a second glance suggests otherwise.
The issue of where to dine arises, say. My accommodating companion asks me
whether I want to go to the sushi bar across town or the Indian restaurant across
the street.5 In answering that question, I attend to the advantages and drawbacks
of the two options: the tastiness of Japanese and Indian food, the cool Zen
aesthetic of the sushi bar compared to the busy, garish décor of the Indian
restaurant, the bother of getting across town compared to the convenience of
walking across the street, and so on. In other words, I weigh the reasons for
the two options—the “considerations that count in favor,” as Scanlon puts it

4. Roche 2013 points out that dissociations might be hard to detect, given that confabulation is a real phenomenon, and that Ryleanism can account for a significant portion of self-knowledge. Self-blindness, in other words, is more analogous to color blindness, a condition of deficient color vision that is usually quite unobvious, than to blindness, which is very easy to spot. Indeed, recall that Shoemaker himself argues (see section 2.2.9) that an allegedly self-blind person would appear perfectly normal; as noted, this can be thought of as a strategy for confabulation. (The color blind also adopt strategies which disguise their condition, to themselves as well as others.) Roche’s point is well taken, and the case for uniformity is not intended to be conclusive. But we have already seen ways in which some dissociations could be detected (see the discussion of Shoemaker and also section 6.2.3), and similar methods could be used for others.
5. Although there are differences of usage between ‘desire’ and ‘want,’ in this chapter the two are treated as equivalent.

(1998: 17), of going to either place. These reasons are not facts about my present
psychological states; indeed, many of them are not psychological facts at all.6
Suppose I determine that the Indian option is the best—that there is most
reason to go to the Indian restaurant. (This might be the result of agonized
deliberation; more typically, it will be a snap judgment.) Once I have this result in
hand, which is not (at least on the face of it) a fact about my present desires,
I then reply that I want to go to the Indian restaurant.
This example is one in which I “make up my mind” and form a new desire:
prior to being asked, I lacked the desire to go to the Indian restaurant. But the
Evans-style point about looking “outward—upon the world” still holds when
I have wanted to go to the Indian restaurant for some time. Of course, often when
in such a condition, I can recall that I want to go. But on other occasions the
process seems less direct. What immediately comes to mind is the non-
psychological fact that the Indian restaurant is the best option; and it is (appar-
ently) by recalling this that I conclude I want to go there.7
An initial stab at the relevant rule for desire—specifically, the desire to act in a
certain way8—is therefore this:
DESi If ϕing is the best option, believe that you want to ϕ.
This is not a bad fit for a restricted range of cases, but the general hypothesis that
we typically know what we want by following (or trying to follow) DESi has some
obvious problems. In particular, the hypothesis both undergenerates, failing to
account for much knowledge of our desires, and overgenerates, predicting judg-
ments that we do not make.
To illustrate undergeneration, suppose that I am in the happy condition of also
wanting to eat at the sushi bar. Eating at either place would be delightful, although
on balance I prefer the Indian option. In such a situation, I can easily know that
I want to eat at the sushi bar, despite not judging it to be the best option.9
To illustrate overgeneration, suppose that I really dislike both Japanese and
Indian cuisine, and I don’t much care for my companion’s company either. Still,
he would be terribly offended if I bailed out of dinner, and would refuse to
publish my poetry. I don’t want to eat at the Indian restaurant but—as children

6 On reasons as facts see, e.g., Thomson 2008: 127–8.
7 Compare the earlier discussion of Moran in section 3.3.
8 Many desires are for other things, of course, some involving oneself and some not: one may
want to be awarded the Nobel Prize, or want Pam to get promoted, or want global warming to end,
and so forth. These other sorts of desires do not raise any intractable difficulties of their own, and so
for simplicity only desires to act in a certain way will be explicitly treated.
9 As this case illustrates, to want something is not to prefer it over all other options. This chapter
concentrates on the epistemology of desire, not the (closely related) epistemology of preference.

are often told—sometimes you have to do something you don’t want to do. The
Indian is the best of a bad bunch of options, and I accordingly choose it. Despite
knowing that eating at the Indian restaurant is the best course of action, I do not
follow DESi and judge that I want to eat there. Later, in between mouthfuls of
unpleasantly spicy curry, I hear my companion droning on about his golf swing,
and I think to myself that I really do not want to be doing this.
The description of this example might raise eyebrows, since it is something of a
philosophical dogma that intentional action invariably involves desire—on this
view, if I slouch to the Indian restaurant, resigned to a miserable evening,
I nonetheless must have wanted to go there. Whether this is anything more
than dogma can be set aside, because (wearing my Plain Man hat) I will not agree
that I want to go to the Indian restaurant. So, even if I do want to go to the Indian
restaurant, I am ignorant of this fact, and what primarily needs explaining is the
Plain Man’s self-knowledge, not the self-knowledge of sophisticated theorists.10
In the undergeneration example, why do I think I want to go to the sushi bar?
Going there is not the best option, all things considered, but it is a good option, or
(much better) a desirable one, in the Oxford English Dictionary sense of having
“the qualities which cause a thing to be desired: Pleasant, delectable, choice,
excellent, goodly.” Going to the sushi bar is not merely desirable in some respects,
but desirable tout court. The sushi bar is a short cab ride away, the saba is
delicious, an agreeable time is assured, and so on. If the Indian restaurant turns
out to be closed, that is no cause to investigate other alternatives: going home and
heating up some leftovers, getting takeaway pizza, and so on. The sushi bar is a
more than adequate alternative. In the overgeneration example, by contrast, the
Indian option is not desirable, despite being the best.

10 One of the main contemporary sources for the philosophical dogma is Nagel:
whatever may be the motivation for someone’s intentional pursuit of a goal, it
becomes in virtue of his pursuit ipso facto appropriate to ascribe to him a desire for
that goal . . . Therefore it may be admitted as trivial that, for example, considerations
about my future welfare or about the interests of others cannot motivate me to act
without a desire being present at the time of action. That I have the appropriate desire
simply follows from the fact that these considerations motivate me. (1970: 29)
But Nagel gives no actual argument. His conclusion does not follow from the fact that “someone’s
intentional pursuit of a goal” requires more than belief, because there are candidates other than
desire that can take up the slack, for instance intention.
A charitable interpretation is that Nagel is using ‘desire’ in the technical Davidsonian sense, to
mean something like ‘pro-attitude’ (cf. Dancy 2000: 11). That appears to be true of some other
philosophers who follow him, such as Smith (1994: ch. 4), although not of Schueler (1995).
According to Schueler, Nagel’s claim is false in one sense of ‘desire,’ and true in another “perfectly
good sense” of the word (29). However, he provides little reason to think that ‘desire’ is polysemous
in this way.

So these two problems can both apparently be solved simply by replacing ‘best’
in DESi by ‘desirable,’ yielding the rule:
DES If ϕing is a desirable option, believe that you want to ϕ.11
Let us first examine whether DES is a good (knowledge-conducive) rule—if it is
not, other objections are moot.
The rule BEL, recall, is:
BEL If p, believe that you believe that p.
As noted in section 5.2.3, BEL is self-verifying: a minor qualification aside, if one
follows it, one’s second-order belief is true. As argued there, this observation
defuses the objection that following BEL cannot yield knowledge because the fact
that p is not a reliable indication that one believes that p.12
A similar objection applies to DES: that ϕing is a desirable option is not a
reliable indication that one wants to ϕ. Pam’s walking three miles to work
tomorrow is desirable, because she’ll then avoid hours in an unexpected traffic
jam, and get promoted for her foresight and dedication, yet (not knowing these
facts) Pam wants only to drive.
Unfortunately, a similar reply does not work: DES is not self-verifying. Cases of
accidie are compelling examples. Lying on the sofa, wallowing in my own misery,
I know that going for a bike ride by the river is a desirable option. The sun is
shining, the birds are twittering, the exercise and the scenery will cheer me up;
these facts are easy for me to know, and my torpor does not prevent me from
knowing them. If I concluded that I want to go cycling, I would be wrong. If
I really did want to go, why am I still lying on this sofa? It is not that I have a
stronger desire to stay put—I couldn’t care less, one way or the other.
Still, this example is atypical. One’s desires tend to line up with one’s know-
ledge of the desirability of the options; that is, known desirable options tend to be
desired. (Whether this is contingent, or a constitutive fact about desire or
rationality, can for present purposes be left unexamined.) What’s more, even
though there arguably are cases where one knows that ϕing is desirable and
mistakenly follows DES, ending up with a false belief about what one wants, the
case just described is not one of them. I know that cycling is desirable yet fail to
want to go cycling, but I do not follow DES and falsely believe that I want to go

11 Cf. Shoemaker 1988: 47–8; Gallois 1996: 141.
12 Recall that S follows rule R (‘If conditions C obtain, believe that p’) on a particular occasion iff
on that occasion: S believes that p because S recognizes that conditions C obtain (section 5.2.1).
S tries to follow R iff S believes that p because S believes that conditions C obtain.

cycling. Lying on the sofa, it is perfectly clear to me that I don’t want to go
cycling. (Just why this is so will be examined later, in section 7.3.2.)
Thus, although DES is not self-verifying, it is, like the rule SEE in section 6.2.6,
practically self-verifying: almost invariably, if Pam follows DES, her belief about
what she wants will be true. Also like SEE (and BEL), beliefs produced by following
DES will be safe. And that is enough to allay the concern that following DES cannot
yield knowledge.
As noted in section 5.2.5, BEL is also strongly self-verifying. That is, if one tries
to follow it—if one believes that one believes that p because one believes that p—
then one’s second-order belief is true. That feature of BEL is the key to explaining
privileged access for belief. Similarly, since one’s desires tend to line up with one’s
beliefs about the desirability of the options, whether or not those beliefs are
actually true, DES is strongly practically self-verifying.
7.3.1 Circularity
At this point a worry about circularity might arise: perhaps, to find out that
something is desirable, one has to have some prior knowledge of one’s desires. If
that is right, then at the very least a significant amount of one’s knowledge of
one’s desires remains unexplained. This section examines some variations on
this theme.13
In its crudest form, the circularity objection is simply that the relevant sense of
‘desirable option’ can only mean desired option. If that is so, then DES is certainly a
good rule, but only in a trivial limiting sense. Unpacked, it is simply the rule: if you
want to ϕ, believe that you want to ϕ. And to say that one follows this rule to gain
knowledge of one’s desire to ϕ is to say that one comes to know that one wants to ϕ
because one recognizes that one wants to ϕ. True enough, but hardly helpful.
However, this version of the objection is a little too crude, leaving no room for
any other features to count toward the desirability of an option. (Recall examples
of such features quoted from the OED: “pleasant, delectable, choice, excellent,
goodly.”) A slightly less crude version admits that other features are relevant, but
insists that a necessary condition for an option’s being desirable is that one desire
it. Is this at all plausible?
No. As many examples in the voluminous literature on “reasons” bring out,
desires rarely figure as considerations for or against an action, even the restricted
set of considerations that bear on whether an action is desirable. The Indian
restaurant example is a case in point. Here is another. Suppose I see that an

13 The difference between following DES and trying to follow it is not relevant to any circularity
worry, so for simplicity let us focus exclusively on the former.

interesting discussion about the mind–body problem has started in the depart-
ment lounge, and I am deciding whether to join in and sort out the conceptual
confusion. I wonder whether the participants would applaud my incisive
remarks, or whether I might commit some terrible fallacy and be overcome
with embarrassment, but I do not wonder whether I want to join in. Suppose
I want to attend a meeting which is starting soon, and that this desire will be
frustrated if I stop to join the discussion in the lounge. I do not take this to be a
consideration in favor of not joining in, but rather (say) the fact that turning up
late to the meeting will be thought very rude.
The force of these sorts of examples can be obscured by conflating two senses
of ‘reason.’ Suppose I want to join the discussion, and that is what I do. So a
reason why I joined in was that I wanted to. Doesn’t that show, after all, that my
wanting to join in was a reason, namely a reason for joining in? No, it does not.
That I wanted to join the discussion is a reason in the explanatory sense, as in
‘The failure of the blow-out preventer was the reason why the Deepwater
Horizon exploded.’ But it does not follow that this fact is a reason in the
(operative) normative sense, the “consideration in favor” sense of ‘reason.’
There is no straightforward connection between an option’s being desirable
and its actually being desired that would support a version of the circularity
objection. Could a connection between desirability and one’s counterfactual
desires do any better?
As an illustration, consider the claim that ϕing is desirable iff if conditions
were “ideal,” the agent would want to ϕ. All such analyses have well-known
problems; for the sake of the argument let us suppose that this one is correct.14
(Since the right-hand side is surely not synonymous with the left, take the
biconditional merely to state a necessary equivalence.) Does this analysis of
desirability suggest that sometimes one needs prior knowledge of one’s desires
to find out that ϕing is desirable?
First, take a case where one is not in ideal conditions. To return to the example
at the end of section 7.3, suppose I am lying miserably on the sofa. I know that
cycling is desirable; I also know, let us grant, the supposed equivalent counter-
factual, that if conditions were ideal, I would want to go cycling. For circularity to
be a worry here, it would have to be established that (a) I know that cycling is
desirable by inferring it from the counterfactual, and (b) I need to know some-
thing about my present desires to know the counterfactual. Now whatever “ideal

14 For a more sophisticated attempt, see Smith 1994: ch. 5. It is worth noting that Smith’s
conception of an act’s being desirable, namely the agent’s having “normative reason to do [it]”
(132), is broader than the conception in play here.

conditions” are exactly, they are intended to remove the barriers to desiring the
desirable—drunkenness, depression, ignorance, and so on. And although I do not
actually want to get on my bike, the enjoyment and invigorating effects of cycling
are apparent to me. Regarding (b), it is quite unclear why I need to know
anything about my present desires to know that if the scales of listlessness were
to fall from my eyes, I would desire the manifestly desirable. And regarding (a),
the most natural direction of inference is from left to right, rather than vice versa:
my knowledge of the desirability of cycling—specifically, of its enjoyment and
invigorating effects—comes first, not my knowledge of the counterfactual.
Second, take a case where one is in ideal conditions. I am lying on the sofa, not at
all miserable. I know that cycling is desirable, and I know that I want to go cycling.
I also know, we may grant, that conditions are ideal. Given the equivalence, do
I know that cycling is desirable by inferring it from the counterfactual, which I infer
in turn from the truth of both the antecedent and the consequent? If so, then there
is a clear problem of circularity. But how do I know that the antecedent is true,
that conditions are ideal? Since the chief purchase I have on “ideal conditions” is
that they allow me to desire the desirable, the obvious answer is that I know that
conditions are ideal because I know that cycling is desirable and that I want to go
cycling. But then the epistemological direction is again from left to right, rather
than—as the objector would have it—from right to left. If I know that cycling is
desirable prior to knowing that conditions are ideal, then (granted the equivalence)
I can infer the counterfactual from the fact that cycling is desirable.
The circularity objection is, at the very least, hard to make stick. Let us now
turn to some complications.

7.3.2 Defeasibility
To say that we typically follow (or try to follow) rule R is not to say that we always
do. The rule:
WEATHER If the skies are dark gray, believe that it will rain soon
is a good enough rule of thumb, but it is defeasible—additional evidence (or
apparent evidence) can block the inference from the premise about the skies to
the conclusion about rain. For example, if one knows (or believes) that the trusted
weather forecaster has confidently predicted a dry but overcast day, one might
not believe that it will rain soon despite knowing (or believing) that the skies are
dark gray.
Given that DES is only practically (strongly) self-verifying, one might expect
that rule to be defeasible too. And indeed the example of accidie, used earlier to
show that DES is only practically self-verifying, also shows that it is defeasible.

In that example, I am lying miserably on the sofa, contemplating the pleasures
of a bike ride in the sunshine. This is not just a situation in which I know that
cycling is a desirable option but nevertheless do not want to go cycling. It is also a
situation in which I do not believe that I want to go cycling. Yet if I slavishly
followed DES, I would believe that I want to go cycling. But what could block this
inference?
I believe that I am not going to go cycling, but that is not why I don’t think
I want to go: I sometimes take myself to want to ϕ when I believe that I am not
going to. For example, I really want to read Mind and World this evening, but
that is not going to happen because I don’t have the book with me.
A better suggestion is that I believe I do not want to go cycling because I believe
I intend to remain on the sofa. I do not believe I intend to avoid reading Mind and
World this evening, so at least the suggestion does not falsely predict that I will
take myself to lack the desire to read Mind and World. However, it is obviously
not right as it stands. Suppose, to return to the earlier restaurant example, I want
to go to the Indian restaurant and also want to go to the sushi bar, and then form
the intention to go to the Indian restaurant on the grounds that this option is
slightly more desirable. When I realize that I have this intention, I will not
thereby refuse to ascribe a desire to go to the sushi bar: if the Indian restaurant
turns out to be closed, I might say to my companion, ‘No worries, I also wanted to
eat Japanese.’
This highlights a crucial difference between the cycling and restaurant
examples: in the cycling case I do not think that remaining on the sofa is a
desirable option—I intend to stay there despite realizing that there is little to be
said for doing so. I don’t think I want to go cycling because, if I did, why on earth
don’t I go? The means to go cycling are ready to hand, and the alternative is quite
undesirable.
In general, then, this is one way in which DES can be defeated. Suppose one
knows that ϕing is a desirable option, and considers the question of whether one
wants to ϕ. One will not follow DES and conclude one wants to ϕ, if one believes
(a) that one intends to ψ, (b) that ψing is incompatible with ϕing, and (c) that
ψing is neither desirable nor all-things-considered better than ϕing.
That explains why I don’t follow DES in the cycling case, and so don’t take
myself to want to go cycling. Here the action I intend is not the one I think
desirable, and neither is it the one I think best, all things considered. More
common cases of action without desire are when the intended action is taken
to be the best, as in the earlier restaurant example with the tedious dinner
companion. Dinner at the Indian restaurant will be terribly boring and I won’t
have a good time; nonetheless, it is the best course of action available, perhaps
even beating out other options (like staying at home with a good book) that are
actually desirable. I intend to go, but I really don’t want to.
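The defeating condition and the two cases just discussed have a simple conditional structure, and it may help to see that structure set out schematically. The following sketch—in Python, offered purely as an illustration, with invented predicate names standing in for the agent's relevant beliefs—shows how DES together with clauses (a)–(c) delivers the verdicts in the accidie and restaurant cases; it is not part of the argument, and not a psychological model.

    # Illustrative only: DES plus the intention-based defeater (a)-(c) above.
    # The boolean parameters are stand-ins for the agent's beliefs.
    def follows_des(believes_phi_desirable,
                    believes_intends_psi,
                    believes_psi_incompatible_with_phi,
                    believes_psi_desirable_or_better):
        """Return True iff the agent follows DES on this occasion and so
        comes to believe that she wants to phi."""
        defeated = (believes_intends_psi
                    and believes_psi_incompatible_with_phi
                    and not believes_psi_desirable_or_better)
        return believes_phi_desirable and not defeated

    # Accidie: cycling is believed desirable, but I believe I intend to stay
    # on the sofa, staying put is incompatible with cycling, and staying put
    # is believed neither desirable nor better all things considered.
    print(follows_des(True, True, True, False))   # False: DES is defeated

    # Two restaurants: the sushi bar is believed desirable; I intend to go to
    # the Indian restaurant instead, but that option is itself desirable, so
    # clause (c) fails and the defeater does not apply.
    print(follows_des(True, True, True, True))    # True: I self-ascribe the desire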
Ashwell (2013) objects that even if I think two options are desirable I still may
not take myself to desire one of them. Suppose I think blobbing out on the couch
is desirable, and so (c) above does not apply—according to Ashwell, I still might
not take myself to want to go cycling. But here the relevant sense of ‘desirable’ is
easy to miss. Cycling can strike one as having significant positive features, yet
they can all be trumped by a negatively appearing feature—that cycling will be
tiring, say. In such a situation, one does not think cycling desirable, in the
intended sense. However, it would be perfectly fine to say ‘Cycling is desirable,’
meaning something like ‘Cycling ought to be desired.’ In fact, Ashwell changes
the original cycling example in Byrne 2011 to one of “going out and exercising”
(252). This minor alteration has the effect of tilting the salient interpretation of
‘desirable’ toward ought-to-be-desired. Notoriously, exercising can appear in an
overall negative light, even though its numerous benefits are manifest. If, lying on
the comfortable sofa, I think that going out and exercising is desirable in the
ought-to-be-desired sense, and yet do not take myself to want to go out and
exercise, that is no problem for the present account.
Something else needs explaining in the cycling example, though. It is not just
that I fail to believe that I want to go cycling—I also know that I lack this desire.
I also know that I lack the desire to go to the Indian restaurant. So how do I know
that I don’t want to go cycling, or don’t want to go to the Indian restaurant? (Read
these with the negation taking wide scope: not wanting to go, as opposed to
wanting not to go.)
In the boring dinner example, I know that going to the Indian restaurant is not
desirable—indeed, it is positively undesirable. An obvious explanation of how
I know that I do not want to go is that I follow this rule:
NODES If ϕing is an undesirable option, believe that you do not want to ϕ.15
NODES does not apply in the accidie example, of course, because I know that
cycling is desirable. But the earlier discussion of that case already shows how
I know that I lack the desire to cycle: if I really have that desire, what is to stop me
getting on my bike? The gleaming marvel of Italian engineering is right there, and
staying on the sofa has nothing to be said for it.16

15 A similar explanation can be given of the truth of the narrow-scope reading—why I also know
that I want not to go to the Indian restaurant.
16 Another complication deserves brief mention. Suppose (to take an example from Fara 2013)
I want to catch a meal-sized fish. Then I also want to catch a fish. (At least, that is what I would say;
let us assume that such an assertion expresses my belief that I want to catch a fish.) But catching a
fish is not desirable, because size matters—a minnow will not satisfy me at all. I cannot have arrived
at the conclusion that I want to catch a fish by inference from the premise that catching a fish is
desirable, so how did I arrive at it? Short answer: by inference from (perhaps inter alia) the premise
that I want to catch a meal-sized fish, which itself is the result of following DES (or trying to)
(cf. Braun 2015: 158–60).

7.3.3 Connections
As the discussion of section 7.3.2 brings out, the epistemology of desire is not self-
contained, in at least two ways.
First, although one’s own desires are not among the features that make for
the desirability of an option, one’s other mental states sometimes are. For
instance, I might well conclude that I want to go to the Indian restaurant partly
on the basis of the fact that I like Indian food: I like, say, andar palak and plain
naans. Liking andar palak (in the usual sense in which one likes a kind of food)
is not to be equated with wanting to eat it. One may want to eat broccoli for
health reasons without liking it; conversely, one may like double bacon cheese-
burgers but not want to eat one. Liking andar palak is doubtfully any kind of
desire at all. There is no clear circularity worry here, but the considerations in
favor of uniformity in section 7.2 indicate that the epistemology of likings
should be in the same world-to-mind style. And that is not at all implausible:
if I sample andar palak for the first time, and someone asks me if I like it, I turn
my attention to its flavor. Does it taste good or bad? There is little reason to
think that this involves investigating my own mind, as opposed to the andar
palak itself: a lowly rat, who is presumably short on self-knowledge, can easily
detect good and bad tastes.17
Second, section 7.3.2 proposed that the complete epistemology of desire partly
depends on the epistemology of intention. Conveniently, that is our next topic.18

7.4 Intention and INT


Start with an everyday example.19 I am deciding whether to go to a dinner party
on the weekend, or to stay home and read Making it Explicit. I weigh the pros and
cons of the two options, and finally plump for the dinner party. That is, I plan, or

17 See, e.g., Berridge and Robinson 2003: 509.
18 For other accounts of the epistemology of desire in the spirit of transparency, see Moran 2001
(especially 114–20) and Fernández 2007. For critical discussion of Moran, see Ashwell 2013. Lawlor
2009 defends a neo-Rylean account (see also Cassam 2014: ch. 11; and for criticism, see Boyle 2015).
19 For a proposal similar to the one suggested here, see Setiya 2011; one difference is that Setiya
does not take his account to be inferential (184). For a neo-Rylean account, see Paul 2012, 2015; the
former paper criticizes some in-the-spirit-of-transparency suggestions in Moran 2001.

intend, to go to the dinner party. How do I know that I have this intention? The
answer hardly leaps to mind, as Anscombe observes:
[W]hen we remember having meant to do something, what memory reveals as having
gone on in our consciousness is a few scanty items at most, which by no means add up to
such an intention; or it simply prompts us to use the words ‘I meant to . . . ’, without even a
mental picture of which we judge the words to be an appropriate description. (1957: 6)

Going to the dinner party is, I think, the best option out of the two available,
which suggests this rule:
INTi If ϕing is the best option, believe you intend to ϕ.

The hypothesis that we typically know what we intend by following (or trying to
follow) INTi both undergenerates, failing to account for some knowledge of our
intentions, and overgenerates, predicting judgments we do not make.
To illustrate overgeneration, suppose I know that there is everything to be said
for going to the dinner party, but am overcome with listlessness and cannot bring
myself to go out. In such a situation, I will have no inclination to follow INTi and
believe that I intend to go.
This problem is not necessarily fatal. As we saw in section 7.3.2, rules may be
defeasible. Perhaps additional evidence (or apparent evidence) can block the
inference in the case of accidie. But what could that additional evidence be?
I know I am listless, but sometimes that doesn’t prevent me from intending to
rouse myself; what’s more, on such occasions knowing that I am listless does not
prevent me from knowing that I intend to rouse myself. I also know that I lack the
desire to go to the dinner party, but sometimes lacking the desire to ϕ does not
prevent me from intending to ϕ; what’s more, on such occasions knowing that
I lack the desire to ϕ does not prevent me from knowing that I intend to ϕ.
This problem need not detain us further, because a problem of undergenera-
tion is decisive. INTi fails to accommodate cases where no option is better than all
the others. Suppose I am faced with a choice of adding the vodka and then the
orange juice, or adding the orange juice and then the vodka. These two options
are, I think, equally good, yet I can easily know that I intend to add the vodka
first.20 And on the face of it, I know that I have this intention in the way I usually
know my intentions.
A more promising idea exploits the close connection between the intention to
ϕ and the belief that one will ϕ. As Anscombe 1957 points out, one expresses the

20 Bratman 1985 gives this sort of example as an objection to Davidson 1978, which identifies the
intention to ψ with the judgment-cum-“pro-attitude” that ψing is “desirable.” Anscombe suggests
and quickly rejects something similar (1957: 2).

intention to ϕ by asserting that one will ϕ. After I have formed the intention to go
to the dinner party I might well announce that I will go, thus conveying that
I have this intention. Before I form the intention to go, the issue is open. Will I go
or not? Forming the intention to go to the dinner party is making up one’s mind
to go. These and other considerations have led many to claim that intention
entails belief: if one intends to ϕ, it follows that one believes that one will ϕ.
Indeed, Velleman (1989) and Setiya (2007) have argued that intentions simply
are beliefs of a certain kind. Their view is controversial, but even the mere claim
of entailment is disputed.21 Still, it is not disputed that in paradigm cases like the
example of the dinner party, I believe that I will do what I intend.
When deciding whether to go to the dinner party, my attention is directed to
the available courses of action, not to my own mind. In the end, I decide to go—‘I
will go,’ I say. Do I believe that I will go because I have inferred that I will go from
the premise that I intend to go? If intention entails belief, then this is a non-
starter: the entailment explains why I believe that I will go, given that I intend to,
and no inference from the premise that I intend to is needed. The suggestion is
not much better if the entailment does not hold. Noting my frequent past failures
to act in accordance with my intentions, any reasonable observer who knew that
I intended to go to the dinner party would not believe that I will go—that would
be a rash conclusion to draw from the evidence. Since (we may suppose) I have
the same evidence, if the inferential suggestion is correct, I should be equally leery
about concluding that I will go. Yet I won’t be.
That at least allows us to explore the suggestion that things are precisely the
other way around: one concludes that one intends to ϕ from the premise that one
will ϕ. That is, perhaps the rule for intention is:
INT If you will ϕ, believe you intend to ϕ.
As with the first suggested rule, there are problems of overgeneration; this time,
however, they have solutions.22

21 Pro the entailment: Grice 1971, Harman 1986: ch. 8, Velleman 1989: ch. 4 (for a later
qualification to Velleman’s view that intentions are beliefs, see Velleman 2000: 195, n. 55), Setiya
2007: pt. 1. Con: Bratman 1987: 37–8, Holton 2008, Paul 2009.
22 As Paul 2015 points out, there is an undergeneration problem if the subject can know that she
intends to ϕ while not believing that she will ϕ (perhaps because she “consider[s] it a possibility that
she will forget to do as she intends” (1535)). (This objection is also made in Samoilova 2016.)
However, there is enough noise surrounding the data to make the existence of this problem unclear.
First, ‘I intend to ϕ’ has a use somewhat like the parenthetical use of ‘I believe that p’ (see fn. 19 in
Chapter 3), as indicating a tentative commitment to ϕing. In that use, the speaker is not flatly
asserting that she intends to ϕ. (‘I intend to come to your party,’ ‘So you’ll be there, right?’
[Clarifying reply:] ‘Well, I’ll really try/really intend to try.’) Second, given suitable stage-setting
one can combine assertions that self-ascribe knowledge with an acknowledgment of the possibility
of error (‘I know that I’ll be at the conference next week, but I’ll get the trip cancelation insurance
just in case’); that does not show that the subject doesn’t believe that she’ll be at the conference next
week. Similar caution is indicated for ‘I intend to be at the conference next week, but I’ll get the trip
cancelation insurance just in case.’

7.4.1 Overgeneration problems


One overgeneration problem can be illustrated with a nice example of Anscombe’s:
“if I say ‘I am going to fail this exam’ and someone says ‘Surely you aren’t as bad at
the subject as that’, I may make my meaning clear by explaining that I was
expressing an intention, not giving an estimate of my chances” (Anscombe 1957:
1–2). I am going to fail this exam because I skipped the lectures and didn’t do the
reading. Yet although my failure is evident to me, I do not follow INT and conclude
that I intend to fail; to the contrary, I believe that I intend to try to pass.
It might be thought that this problem can be solved with an insertion of
‘intentionally’:
INTii If you will intentionally ϕ, believe you intend to ϕ.
That appears to cope with the exam example, because although I believe I will fail,
I do not believe I will intentionally fail. However, setting aside various issues and
unclarities connected with ‘intentionally’ (which might anyway smuggle inten-
tions back into the picture), this is not sufficiently general, because one can intend
to do something unintentionally. Altering an example of Davidson’s (1971: 47),
I might intend to trip unintentionally (one means to that end is to walk looking
up at the sky), and in such a case I typically have no difficulty in knowing that this
is what I intend.
A second problem of overgeneration arises from the phenomenon of foreseen
but unintended consequences: one may foresee that one will ϕ, and yet not intend
to ϕ, because ϕing is an “unintended consequence” of something else that
one intends. More to present purposes, in such a situation one may believe that
one will ϕ, and yet not believe that one intends to ϕ. To take an example of
Jonathan Bennett’s, familiar from the literature on the Doctrine of Double Effect,
“the tactical bomber . . . intends to destroy a factory and confidently expects his
raid to have the side effect of killing ten thousand civilians” (Bennett 1981: 96).
The tactical bomber knows he will kill the civilians, yet he does not intend to kill
them; in Pentagon-speak, their deaths are “collateral damage.” If the tactical
bomber followed INT, he would conclude that he intends to kill the civilians;
yet—we may suppose—he disavows having any such intention. Again, to take an
example of Bratman’s, “I intend to run the marathon and believe that I will
thereby wear down my sneakers” (1984: 399); I do not intend to wear them down,
and neither do I believe that I intend to wear them down.
The examples illustrating these two problems for INT are cases where one
knows what one will do (or, at least, forms a belief about what one will do) on the
basis of evidence, but does not ascribe the intention. I know that I am going to fail
the exam because I know that I am poorly prepared; I know that I will be wearing
down my sneakers because I know that I will be wearing them when I run the
marathon. That is, I know on the basis of evidence that I will fail the exam and
wear down my sneakers.
However, as Anscombe points out, sometimes one’s knowledge of what one
will do is not arrived at by these familiar means. (Her official statement exclu-
sively concerns knowledge of what one is doing, but it is clear that the point is
supposed to extend to knowledge of what one will do.) As she notoriously puts it,
one can know what one is (or will be) doing “without observation” (1957: 13).
And those present and future actions that can be known “without observation”
are those that one intends to perform: if I know without observation that I will fail
the exam, I intend to fail the exam; if I know without observation that I will run in
the marathon tomorrow, I intend to run in the marathon tomorrow.
The phrase ‘knowledge without observation’ is misleading, as it might be
expected to work like, say, ‘knowledge without googling.’ If one knows P without
googling then, if P entails Q, one is in a position to know Q without googling.
But such a closure principle isn’t right for ‘knowledge without observation,’ as
Anscombe understands it. To adapt one of her examples: I intend to paint the wall
yellow, and know that I will paint the wall yellow. That I will paint the wall yellow
tomorrow entails that the wall (and paint) will exist tomorrow. But Anscombe does
not want to say that I can know that the wall will exist tomorrow “without
observation.”
If we gloss ‘knowledge without observation’ as ‘knowledge not resting on
evidence,’ then this suggests one condition under which INT is defeasible, and a
solution to the two overgeneration problems discussed above. Suppose one
knows that one will ϕ, and considers the question of whether one intends to ϕ.
One will not follow INT if one believes that one’s belief that one will ϕ rests on good
evidence that one will ϕ. Take, for example, my belief that I will wear down my
sneakers. I have, I think, good evidence for this, namely that I will run in the
marathon tomorrow.23 Nothing suggests that running in the marathon tomorrow is

23 Paul (2015) thinks that my evidence is, rather, that I intend to run in the marathon tomorrow.
This, she argues, is problematic: it requires that I already know my intentions, “which means that
[INT] can be employed in a truth-conducive way only if it presumes the very achievement it is
supposed to explain” (1535). But, first, there is no obvious circularity: I could come to know that
I intend to run in the marathon tomorrow by following INT, and then use that item of evidence in the
way Paul suggests. Second, and more importantly, Paul thinks that my evidence can’t (always)
simply be the fact that I will run in the marathon tomorrow, because she thinks that I might not
believe that I will run in the marathon tomorrow. As discussed in footnote 22, this is disputable.

my means to the end of wearing down my sneakers—I am not engaged in testing
soles for the sneaker company, for example. So a reasonable conclusion is that
I believe that I will wear down my sneakers because, and only because, I have
good evidence for it, namely that I will run in the marathon tomorrow. In other
words, my belief that I will wear down my sneakers rests on good evidence that
I will wear them down.24
Granted the assumption that one’s evidence is one’s knowledge (E=K, see
section 1.1), an inquiry into one’s evidence is tantamount to an inquiry into one’s
knowledge. So the defeating condition in effect concerns one’s own knowledge,
the epistemology of which has already been given an independent account.
(Recall that section 5.5.1 defended the following rule for knowledge: If p, believe
that you know that p.) We saw earlier that the complete epistemology of desire
partly depends on the epistemology of intention; similarly, the complete epis-
temology of intention partly depends on the epistemology of knowledge.
Other defeaters may well be necessary. Paul offers this example: “Out of
anxiety, I may be convinced on the basis of little evidence that I will trip and
fall as I walk in front of a large crowd, but I certainly do not intend to do this”
(2015: 1534). I believe that I will trip, no evidence for this comes to mind, yet I do
not (try to) follow INT and conclude that I intend to trip. But here a touch of
Ryleanism can block the inference. My anxiety about making a fool of myself, for
example, is an excellent reason for thinking that I do not intend to trip, just as it is
in the case of another. Of course, this assumes that I know that I am anxious—a
segue to our final topic.

7.5 Emotion
The first-person epistemology of emotion is considerably broader than the
epistemology of desire and intention. A comprehensive treatment would require
a book in itself; the aim here is merely to make a transparency account plausible
by concentrating on a particular case.

24 Suppose that a person could somehow be induced to intend to ϕ (say, to raise her hand), while
being fooled into believing that the defeating condition obtains. The account predicts that the agent
will raise her hand while disclaiming the intention to do so. Cases of hypnosis arguably fit this model
nicely (Dienes and Perner 2007). (The point here is not to advertise a correct prediction, but rather
to give an example of how the theory defended in this book can be empirically tested.)

Jane Austen provides some initial motivation. The title character of Emma
realizes that she is in love with Mr. Knightley, not from evidence about her own
behavior, or by introspection, but from premises about her protégée Harriet
Smith, namely that it is “so much worse that Harriet should be in love with
Mr. Knightley, than with Frank Churchill” and that “the evil [is] so dreadfully
increased by Harriet’s having some hope” that her love will be returned.25
Love is a notorious outlier among emotions (Hamlyn 1978), and a better
example would be one of the five in the movie Inside Out—joy, sadness, fear,
anger, and disgust.26 The choice is clear.
7.5.1 Disgust and the disgusting
When confronted with something disgusting, say a maggot-ridden corpse, we
typically feel disgust. Let us begin with the emotional side of the transaction.
Disgust always has an object. One is disgusted at something—the maggot-
ridden corpse, for example. (In this respect, disgust is like regret and love, and
unlike joy and depression.) Disgust has a characteristic phenomenology: that of
nausea, or queasiness. (In this respect, disgust is like fear and depression, and
unlike love and hate.) Disgust has a characteristic facial expression: the nose
wrinkles, the mouth gapes, and both lips move upwards. (In this respect, disgust
is like embarrassment and surprise, and unlike shame and love.) This so-called
“disgust face” appears to be readily identifiable as such across a wide range of
cultures.27 Disgust is associated with a characteristic kind of behavior: physical
and perceptual contact with the disgusting object is avoided. (In this respect,
disgust is like fear and surprise, and unlike guilt and happiness.) Disgust is a kind
of negative reaction: the object of disgust is experienced as bad in some way. (In
this respect, disgust is like shame and jealousy, and unlike pride and relief.)
Disgust therefore satisfies what is often taken to be the core stereotype of an
emotion: an intentional object, characteristic phenomenology, facial expression,
behavior, and “valence”—a positive or negative reaction. Disgust often appears
on psychologists’ lists of “basic” emotions; it certainly appears to be a “universal”
human emotion (unlike, perhaps, the Japanese emotion of “indulgent dependency,”
or “amae”28). It is unclear whether non-human animals feel disgust—even monkeys

25 The relevant passage is quoted in Wright 2000: 15, although Wright does not interpret it in
this way.
26 These five plus surprise are Ekman’s original “basic emotions” (since expanded: Ekman 1999).
27 An early cross-cultural study is Ekman and Friesen 1971. Recent work indicates that there are
significant cultural effects on the recognition of emotional expressions (see, e.g., Jack et al. 2009).
28 See the classic but controversial Doi 1981.

or apes. And the emotion takes some time to develop in humans: children do not
show disgust until between about four and eight years (in the United States).29
Although disgust always has an object, and the object is always experienced as
bad in some way, the other characteristics just mentioned are only typically
associated with disgust, and are not even jointly sufficient for disgust. One may
feel disgust without any nausea (think of being mildly disgusted by some road-
kill). One may feel nausea without disgust (seasickness, for example). One may
feel disgust without betraying it in one’s facial expression. One may (involuntar-
ily) make the “disgust face” without being disgusted (as when tasting something
very bitter, say uncooked rhubarb). One may feel disgust without avoiding the
object of disgust. Indeed, the disgusting in many cases exerts a positive attraction,
as horror movies testify.30 Finally, it is possible simultaneously to feel nausea, to
make the disgust face, and to avoid perceptual and physical contact with an
object, without feeling disgust (imagine tasting uncooked rhubarb, and not
wanting the slightest reminder of the experience).
The emotion of disgust is not distaste, although there are affinities. (‘Disgust’
and the French ‘dégoût’ have the same Latin root, meaning distaste, and of course
that is one standard sense of ‘disgust.’) The “disgust face” suggests that disgust is
connected with the rejection of food, and some disgusting objects (notably rotten
meat and feces) pose a genuine threat of illness to humans.31 Further, eating a
disgusting object is typically more disgusting than the object itself. According to a
highly influential theory due to the psychologists Rozin and Fallon, the connec-
tion between disgust and food rejection is extremely close. They propose the
following account of disgust, which “isolates the core and origin of the emotion.”
“Core” disgust is:
Revulsion at the prospect of (oral) incorporation of an offensive object. The offensive
objects are contaminants; that is, if they even briefly contact an acceptable food, they tend
to render that food unacceptable. (1987: 23)

Saying that the “core” of disgust involves oral incorporation could mislead,
because Rozin and Fallon are not denying that disgust is felt when the prospect

29 Toilet training seems to be important to the development of disgust, but as Rozin et al. put it,
“Given the centrality of toilet training in psychoanalytic theory, and the fact that toilet training is one
of the earliest arenas for socialization, it is surprising how little is known about the process” (Rozin
et al. 2000: 646).
30 Miller calls this peculiar fascination the “central paradox of disgust” (1997: 108); it is included
in Carroll’s somewhat broader “paradox of horror” (1990: ch. 4).
31 Although not to all animals, of course. Some animals thrive on carrion, and eating feces
(coprophagy) is fairly common (see Rozin and Fallon 1987: 33).

of oral incorporation seems irrelevant. For example, one might be disgusted
at the lack of a person’s bodily hygiene, or disgusted by touching a slug, or
disgusted by some bodily deformity. Of course, touching a slug might make the
prospect of eating it particularly vivid, but this is not why touching it is disgust-
ing. Rather, the basic claim, as Rozin put it later, is that the “course of biological
and cultural evolution of disgust” had its origin in core disgust, and that
“oral rejection remains an organizing principle of disgust reactions” (Rozin
et al. 2000: 644).
Although the emphasis on oral incorporation may be disputed, Rozin and
Fallon’s account does bring out an important feature of disgust’s companion
property, disgustingness. Disgusting objects are contaminated or polluted, and
can transmit their disgustingness by contact. Soup becomes disgusting when a
fly falls into it, or when the cook spits in it. Stirring soup with a used comb
renders the soup disgusting by a chain of contact, from the head to the comb,
and the comb to the soup. The soup itself might then contaminate other objects
in its turn. Although a subject’s rationalization for feeling disgust on such
occasions might well be that there is an increased risk of infection or illness,
this does not appear to be the real reason. In one of Rozin’s experiments,
subjects refused drinks into which a cockroach they knew to be sterilized had been dipped
(Rozin et al. 1986; see also Fallon et al. 1984). And sometimes an object will
elicit disgust simply because of its superficial similarity to an otherwise quite
different disgusting object. Not surprisingly, Rozin found a lack of enthusiasm
among subjects for eating chocolate fudge shaped like dog feces (Rozin et al.
1986). Disgusting objects, it appears, can contaminate by similarity as well as
by contact.
As already indicated, animals are the main suppliers of the paradigmatically
disgusting—the animal’s glistening slimy viscera, its oozing secretions, regurgi-
tated food, and putrid wastes. Body products, like mucus, sweat, earwax, and—in
particular—feces and menstrual blood, are invariably disgusting, with tears being
the notable exception. One’s own body products often seem to become disgusting
only when they leave the body, as is shown by Allport’s (1955: 43) famous
thought experiment: swallowing the saliva inside one’s mouth is not disgusting,
but drinking from a glass of one’s own saliva is.
When an animal itself either resembles some disgusting bodily part or feeds off
disgusting parts, it is disgusting (for example, vultures, rats, maggots, and
worms). Some people are disgusting—or at any rate are regarded as such—
because their occupation brings them into contact with disgusting things. Some
people are regarded as disgusting for other, more sinister reasons—for instance,
Jews, homosexuals (especially males), the working class, the Dalits or
untouchables of South Asia.32 Interpersonal disgust significantly shapes political
and social hierarchies.
Humans have a love–hate relationship with other animals: so many are edible,
or have edible parts, but so few are eaten. In the United States, disgust at eating
kidneys, snails, blood pudding, and sushi is not at all unusual. On the other hand,
the opportunity to tear a dead lobster limb from limb and suck out its flesh is
highly prized—or not, if one abides by Leviticus.
Disgustingness, at any rate in the central sense that is our concern here, is not a
moral property, and disgust is not a moral emotion. Drinking one’s own saliva or
touching a slug may be disgusting, but they are perfectly harmless and morally
permissible activities.
Still, there is certainly a moral use of ‘disgusting’ and ‘disgust’ that is related to
the more central sense: rape, torture, child abuse, genocide, and other acts
concerning sex or violence are often morally condemned in the vocabulary of
disgust. This semantic phenomenon is not confined to English, and appears to be
widespread across languages (Rozin et al. 2000: 643). Further, the disgusting
sometimes provides a model, albeit perhaps a highly suspect one, for morality.
Sinfulness, for example, is importantly analogous to disgustingness. The sinner is
unclean, contaminated by sin, and the chain of transmission may reach back long
before his birth, to Adam and Eve.

7.5.2 DIS and transparency


Let us wheel in the maggot-ridden corpse again. I am disgusted by it, or feel
disgust at it. The emotion has its usual accompaniments: I feel queasy and make
the disgust face. (Although I may well also back away, to keep things simple we
may suppose that I am at a safe distance, and do not move.) In such a situation
I can easily know that I am disgusted by the corpse.33 How do I know that?
The obvious suggestion is that I use the sensational and behavioral cues just
mentioned. I conclude that I am disgusted by the corpse because I feel queasy and
make a distinctive facial expression. The epistemology of nausea is a special case
of the epistemology of sensation, and as argued in Chapter 6 can be accommo-
dated by the transparency procedure. The epistemology of one’s own involuntary
facial expressions is a special case of proprioception, and may be taken for
granted here. This suggestion promises to explain the peculiar access we have
to our feelings of disgust, and is clearly economical. There is an indirect

32 On the working class and disgust, see Miller 1998: chs. 9, 10.
33 ‘I am disgusted by the corpse’ has a reading on which it does not entail that I am currently
feeling disgust (as in ‘I am disgusted by spitting’). That is not the relevant reading here.

connection with transparency, namely the method used to know one’s sensations,
but the account itself is not transparent. The inference is from (allegedly) good
evidence, of the sort I might have in the case of another person, and so this is a
version of Ryleanism.
Since all accounts of self-knowledge must acknowledge a helping hand from
Ryle, the obvious suggestion is not at odds with the project of this book. However,
there is an elephant (or rather, corpse) in the room. The cues have nothing to do
with the corpse: nausea and facial expressions do not have objects. Perhaps they
might be able to support the conclusion that I feel disgust at something or other,
but I know something more specific, namely that I feel disgust at the corpse.
An extra cue is needed, one that implicates the corpse. (Perhaps backing away
from the corpse would do, but we have stipulated that away.34) What about
wanting not to touch it, or something along those lines? That does not do the
trick, for at least two reasons. First, there is an undergeneration problem. Suppose
this is a case where the disgusting exerts a horrible magnetism: I want to poke the
cold, maggot-seething flesh. I do not believe that I want not to touch the corpse,
and yet that would not prevent me from knowing that I am disgusted by it.
Second, an overgeneration problem: suppose the corpse is lying on a stainless-
steel gurney. I do not want to touch the gurney for fear of contamination, and
I know this. I do not, however, conclude that I am disgusted by the gurney, and
indeed I am not.
For something better, note that knowledge of causal relations in one’s envir-
onment is often easy to come by. Perception frequently supplies knowledge of
collisions, breakings, squashings, and so forth. More specifically, it supplies
knowledge of how the environment affects oneself. For example, I can know
that a lamp is warming my skin (my skin feels hotter as it gets closer to the lamp)
and that a fly makes me blink (the looming fly is immediately followed by the
blinking). Similarly, I can know that my reactions are caused by the corpse (they
appear as soon as I see it). What if this piece of evidence were added? On this
revised suggestion, I know that I am disgusted by the corpse because I know not
just that I have certain reactions, but that they were caused by the corpse. (This
revision just strengthens the evidence that I am disgusted by the corpse, so the
account remains non-transparent.)
There may still be a problem of overgeneration. I do not think that I am
disgusted by the metal gurney, despite seeing it as soon as I see the corpse. But

34 In fact, that wouldn’t help secure the conclusion that I am disgusted by the corpse: if I back
away from the corpse, I also back away from the far wall, but of course I do not think I am disgusted
by the far wall.

what tips me off that the corpse is responsible for my reactions, and not the
gurney? More decisively, on tasting uncooked rhubarb I might know the rhubarb
is responsible for my feeling of nausea, disgust face, desire not to ingest any more,
and so on. Yet I am not inclined to think I am disgusted by the rhubarb.
The missing cue is staring me in the face. Unlike the gurney and the rhubarb,
the corpse is disgusting. It is that fact that enables me to know that the corpse, and
not the gurney, is responsible for my disgust reactions. So this suggests the
following rule:
DIS If x is disgusting, and produces disgust reactions in you, believe you feel
disgust at x.
Is DIS a transparent rule? One might think not, because the fact that x is
disgusting and produces disgust reactions in Fred is pretty good evidence that
Fred feels disgust at x. But notice that for DIS to produce safe beliefs, it is not
necessary that x is in fact disgusting. Suppose it is not, but I nonetheless think it is
and know that it is producing disgust reactions in me—then almost invariably
I will feel disgust at x. Admittedly, if I am also wrong about the disgust reactions,
then trying to follow DIS will lead to error, but this will be a rare occurrence. Like
DES and INT, as discussed in sections 7.3 and 7.4, DIS is strongly practically self-
verifying, and the beliefs produced by following it will typically be safe. DIS thus
has the characteristic signature of transparency: it can generate unsupported self-
knowledge by an inference from a worldly counterpart premise.

7.5.3 Circularity
The obvious objection is similar to the one for DES in section 7.3.1, although here
it might seem more potent. The objection presumes this common view of the
relation between disgust and its companion property:
The disgustingness of rotten eggs, for example, is a secondary property: it consists only in
the eggs’ capacity to provoke a sensation of disgust in most or normal people.
(Dworkin 2011: 58–9)

Something similar is hinted at in this passage from McDowell:


Consider the confused notion that disgustingness is a property that some things have
intrinsically or absolutely, independently of their relations to us—a property of which our
feelings of disgust constitute a kind of perception. That this notion is confused is of course
no reason to suppose it cannot be true that something is disgusting. (1988: 1)

The common view may be put as a familiar "response-dependent" biconditional:
D x is disgusting iff x is disposed to produce disgust in normal subjects in normal conditions.35
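Spelled out, with the tacit 'Necessarily' prefix and universal quantifier made explicit and with 'Gen' marking the generic reading of the right-hand side (see note 35), D comes to something like this (the notation is merely illustrative, and nothing below turns on it):
Necessarily, ∀x (x is disgusting ↔ Gen s, c [s is a normal subject ∧ c is a normal condition] (x is disposed to produce disgust in s in c)).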
A biconditional like D is naturally read as explaining or defining the left-hand
side in terms of the right-hand side. We may thus take the proponent of D to hold
that the response, the emotion of disgust, is familiar and unproblematic, while its
companion property of disgustingness very much needs an explanatory account.
And, the proponent will insist, unless disgustingness can be domesticated by
something along the lines of D, we will be forced to recognize, in Mackie’s phrase,
a “non-natural quality of foulness” (Mackie 1977: 42).
For a (typical) proponent of D, there is an epistemic asymmetry between the
left- and right-hand sides; specifically, our knowledge of what things are disgust-
ing is derived from our knowledge of the responses they produce in normal
people.36 Suppose I have no prior knowledge of the effects of maggot-ridden
corpses on me or others. Nonetheless, when I see the corpse I can easily know
that it is disgusting. How do I know that? For the proponent of D there is a clear
answer. I know that I feel disgust, that the cause is the corpse, and that I am a
normal person in normal conditions. Given D, I can know that the corpse is
disgusting by inference. Now the difficulty for DIS is apparent—it gets the
epistemology back to front. I can only know that I feel disgust at the corpse by
following DIS if I can know the corpse is disgusting without knowing that I feel
disgust. And according to the proponent of D, that is exactly what I can’t do:
knowledge of disgust comes first.
For this objection to get off the ground, not only does D need to be true, but the
feeling of disgust needs to be recognizable independently of any capacity to
recognize the disgusting. Reassurance on this point can be supplied if the feeling
of disgust does not itself intimately involve its companion property; specifically, if
it does not involve, in some intuitive sense, the presentation or appearance of
disgustingness.
But this is implausible. When I see the corpse, and feel disgust, I am respond-
ing to the way the corpse appears: the corpse is impressed upon me as disgusting,
and that is why I feel disgust. Contrast nausea, which is easily confused with the

35 Since the biconditional is supposed to express what disgustingness is, its proponents will take it to be necessary, and so let us take the 'Necessarily' prefix (and the universal quantifier) to be tacit. The right-hand side should be read generically, to allow for some exceptions.
36 If the right-hand side is taken to supply a synonym of the left, then there is no asymmetry, 'derived from' can be deleted from the previous sentence in the text, and the present objection goes through all the more clearly. But the strong claim of synonymy is not needed.
emotion of disgust. (The OED gives ‘nausea’ as one definition of ‘disgust.’) But
they are quite different. To feel nausea is not to suffer an emotion, but is simply to
feel queasy or sick. There is no “object” of nausea, although of course an episode
of nausea will have particular causes. If I eat a bad but entirely innocuous-
seeming oyster and feel nausea, the oyster is not impressed upon me as nause-
ating. Because the feeling of nausea does not involve the appearance of its
companion property, there is no barrier to supposing that the feeling of nausea
is recognizable independently of any capacity to recognize the nauseating.
D, then, must be understood non-reductively, somewhat like the familiar
“secondary quality” account of color:
B x is blue iff x is disposed to look blue to normal subjects in normal
conditions.
According to B, the response constitutively diagnostic of blue objects explicitly
adverts to the color blue itself—it is looking blue. For that reason, B leaves it
entirely open whether one can know that x looks blue without the capacity to
know that x is blue. Perhaps it is the other way round: indeed, something along
these lines was defended in Chapter 6 (section 6.2.10). We are in essentially the
same position with D. Although its non-reductive character is less explicit, the
moral is the same: it would be rash to assume that the left-hand side is epistem-
ically dependent on the right.37
The point so far has been not that D is false, but that it does not underwrite the
epistemic asymmetry that the present objection needs. Let us now cast some
doubt on D directly.
What is it for something to be nauseating? As just discussed, the property
cannot be singled out by saying that it is presented in the feeling of nausea. There
is an obvious candidate, though, suggested by the OED definition “that causes
nausea.” That is, the analog of D is surely along the right lines:
N x is nauseating iff x is disposed to produce nausea in normal subjects in
normal conditions.
However, in the case of disgust, the emotion itself introduces us to disgustingness.
There should thus be no temptation to endorse D for fear that otherwise disgust’s
companion property would remain elusive.
As mentioned in section 7.5.2, disgusting objects can render other objects
disgusting by contact. This is a problem for the left-to-right direction of D.
A certain paperclip on my desk does not produce feelings of disgust in normal

37 Note that the Rozin and Fallon account of core disgust, quoted in section 7.5.1, is not reductive: 'offensive' is evidently supposed to be interchangeable with 'disgusting.'
people in normal circumstances. According to D, it is not disgusting. But, really,
who knows where it’s been? The revolting possibilities can only be obliquely
hinted at in a book intended for family reading by the fireside. Alternatively, we
may imagine the paperclip to be infested with tiny grotesque wriggling grubs,
revealed by microscopic scrutiny. We may even imagine that the grubs would
defy detection by any human instruments. Like everything else, the disgusting
may not be entirely within our ken. The attempt to repair D by ramping up the
powers of the relevant subjects and increasing the favorability of the conditions
is—if the history of response-dependent “analyses” is any guide—a degenerating
research program and is in any case poorly motivated.
A realistic model cockroach casts some suspicion on the converse direction.
Suppose that normal people in normal conditions find it disgusting. But, one
might protest, the model cockroach is entirely sterile and made of medical-grade
plastic—it’s not really disgusting, any more than lifelike fake sushi is really
nutritious. Similarly, if normal people in normal conditions are disgusted by a
family cooking and eating their dog after its accidental death on the road,38 one
might protest that this emotional reaction is inappropriate, because eating the
dog is no more disgusting than a farmer eating his chickens. The point is not to
adjudicate these disputes, but simply to note that they are disputes. Our feelings
of disgust are at best imperfectly correlated with what is in fact disgusting. That is
suggested by our practice, which is in no tension with the scientific study of the
emotion and its evolutionary origins. D is far from a tacitly acknowledged
platitude; rather, it is the product of an overly simplistic picture of disgust and
its companion property.
The circularity objection to DIS, then, turns out to be just as impotent as the
similar objection to DES. One might expect some complications due to defeas-
ibility, but the emotion of disgust, at least, poses no obvious problem for the
transparency approach.

7.6 Summary: privileged and peculiar access, economy and detectivism
The three rules for desire, intention, and disgust explain privileged and peculiar
access in the usual style. Privileged access is explained because the rules are
practically strongly self-verifying: minor qualifications aside, if one tries to follow
DES, INT, or DIS, then one will arrive at a true belief about one’s desire, intention,
or feeling of disgust. (The minor qualifications will vary: in the case of INT, for

38 An example from Haidt et al. 1993.
example, they will involve awareness of the defeating condition noted in section
7.4.1; accordingly, the degree of strong self-verification will also vary.) And
peculiar access is explained because the methods only work, or only work in
full generality, in one’s own case. Detectivism can readily be checked, and
considerations similar to those in section 5.4 make economy plausible.
The plausible reach of the transparency account is much larger than is
commonly assumed. By the end of Chapter 8, the next and final chapter, the
case for total hegemony will be complete.
8
Memory, Imagination,
and Thought

[W]hen we wish to remember anything which we have seen, or heard, or
thought in our own minds, we hold the wax to the perceptions and thoughts,
and in that material receive the impression of them as from the seal of a ring;
and that we remember and know what is imprinted as long as the image
lasts; but when the image is effaced, or cannot be taken, then we forget and
do not know.
Plato, Theaetetus

Plato said that in thinking the soul is talking to itself. But silence, though
often convenient, is inessential, as is the restriction of the audience to one
recipient.
Ryle, The Concept of Mind

8.1 Introduction
Knowledge that one remembers might seem to have already been covered. To
remember that p is simply to preserve one’s knowledge that p. Knowing that one
remembers that p is thus a special case of knowing that one knows that p,
discussed in Chapter 5. There the proposal was that one may know that one
knows that p by following the rule:
KNOW If p, believe that you know that p.
One remembers that p just in case one knows that p and has not currently
acquired knowledge that p. So knowledge that one remembers that p can be
obtained by following the rule:
REMEMBER If p, believe that you remember that p,
with the following defeater: you have just now acquired knowledge that p.1

1 To apply the defeater one needs to know that one previously didn't know that p, which will require the resources discussed in section 5.5.2.
But this only covers the case of (so-called) “semantic” or “factual” memory.
There is another important kind of memory which raises more challenging
issues, namely episodic memory. Knowledge that one episodically remembers is
the first topic of this chapter; as will become clear shortly, there is a close
connection between episodic memory and the remaining two topics, imagination
and thought.2

8.2 Memory
Long-term memory is often divided into two basic kinds, “declarative” (or
“explicit”) and “non-declarative” (or “implicit”), with declarative memory given
a gloss along the lines of “conscious recollections of facts and events” (Squire
1992: 232). Declarative memory itself is divided into “semantic” and “episodic”;
non-declarative memory covers a grab-bag of other learning abilities, but the
central category is memory of skills or habits, or “procedural” memory.3
Although in psychology “[t]he recognition that there are multiple forms of
memory developed beginning in the 1980s” (Squire 1992: 232), ordinary lan-
guage was there first.4 Compare:
(1) Fred remembers that eggs have yellow yolks/that he saw an egg.
(2) Fred remembers seeing an egg/eating an egg/being hit by an egg.
(3) Fred remembers how to boil an egg/how to imagine an egg breaking.
Sentences like (1), where ‘remembers’ takes a sentential complement, are
typically (but not exclusively) used as reports of semantic memory. (2), with a
gerundival complement, is used as a report of episodic memory, and (3), with an
infinitival complement, as a report of procedural memory.
The topic is knowledge of one’s episodic memories. The linguistic facts just
mentioned may reassure us that this is a category worth distinguishing, but what
exactly is episodic memory, and how does it differ from semantic memory? The
neuroscientist Endel Tulving (who introduced the ‘episodic’ label in his 1972)
informally puts it this way: episodic memory is “memory for personally experienced
events” or “remembering what happened where and when” (2001: 1506). More
poetically, episodic memory “makes possible mental time travel through subjective
time, from the present to the past, thus allowing one to re-experience . . . one’s own

2 As with Chapter 7, detectivism and economy will not be explicitly defended.
3 See Squire 1992: 233, fig. 1. The suggestion that procedural memory ("memory-how") is not memory for facts is doubtful for the reasons given in Stanley and Williamson 2001.
4 The quotation is actually something of an overstatement; for some history, see Tulving 1983: 17–18.
previous experiences” (2002: 5).5 Semantic memory, on the other hand, is mun-
dane by comparison, being “memory for general facts” (2001: 1506), or “general
knowledge of the world” (Baddeley 1997: 4).6
However, these explanations are more suggestive than accurate. Suppose
I was so drunk at the party that I cannot recall dancing with a lampshade on
my head. The next day I learn of this mortifying episode; later I remember what
I learned, that I was dancing at the party in inappropriate headgear. I remember
“a personally experienced event,” or “what happened where and when,” but this
is semantic memory, not episodic. Contrariwise, suppose I have seen many
skunks, and on that basis can recall what skunks look like. When I recall what
skunks look like, I may visualize a prototypical skunk, a perceptual amalgam of
the various skunks I have encountered. Such a memory is best classified (at least
initially) with paradigmatic episodic memories—recalling seeing a skunk in my
garden this morning, for instance. Yet it is not a memory of a “personally
experienced event.” Rather, to know what skunks look like is to possess a piece
of “general knowledge of the world.”
For a final clarificatory distinction, consider the following two pairs:
(a) Bertie remembers that Pope Pius XI was born in Desio.
(b) Bertie is remembering Pius XI’s birthplace.
(a′) Bertie remembers meeting the Pope.
(b′) Bertie is remembering meeting the Pope.
The a-sentences might be used to truly describe Bertie when he is sound asleep,
with (a) ascribing semantic memory and (a′) episodic. The b-sentences, on the
other hand, apply only when Bertie is undergoing a process of recollection. To
circumscribe our topic further, it is knowledge of the process, rather than the
state, of episodic recollection.
8.2.1 The visual world and the visualized world
Section 6.2.6 introduced the notion of the visual world, the world as revealed by
vision. (And, similarly, the olfactory world, the auditory world, the world of pain,
and so on, but following that chapter let us continue to concentrate on vision.)
When I (veridically) see Donald the duck dabbling among the reeds, a segment of
the visual world is revealed—a distinctive visual fact, or v-fact, concerning basic

5 Alternative terminology includes 'personal memory,' 'direct memory,' 'event memory,' and a host of others.
6 If the episodic/semantic distinction corresponds to two separate memory systems, then double dissociations would be expected. There is some evidence that these occur: see Tulving 2002: 14 and Hodges and Graham 2002.
visible properties like shape, illumination, motion, color, and texture. In the
notation of section 6.2.6, I am aware of the fact that [ . . . x . . . ]V, where
x = Donald.
Later, I might recall what I saw. (To simplify matters, we may suppose that my
recollection is entirely accurate; notoriously, episodic memory has much less
fidelity than we commonly think.7) In recalling Donald dabbling, I have a
“memory image” of Donald dabbling; specifically, a visual memory image.
Phenomenologically, visual memory images are similar to visual images in
general, which is no great surprise since imagery clearly draws on episodic
memory. If asked to imagine (specifically, visualize) a green animal with the
head of a unicorn and the body of a pig, one will draw on one’s past “personal
experiences” of green objects, unicorn-pictures, and pigs. Often the connection
between imagery and memory is even more direct, as when one uses imagery as a
method of real-world discovery—for instance, visualizing how one’s living room
couch might look in the bedroom.
“Visual images” are visual, in some palpably deep but elusive way. Many
independent lines of evidence support the view that visualizing and vision are
intimately related. Here are seven.
First, we ordinarily and naturally speak of visual images. ‘Kopfkino’ (head
cinema), a German expression for visual imagery, is not an example of a funny
foreign idiom—instead, it seems obviously appropriate.
Second, imaging studies show that visualizing and vision overlap substantially
at the neural level. As Ganis et al. put it: “visual imagery and visual perception
draw on most of the same neural machinery” (2004: 226).
Third, there are many interference effects between visualizing and vision,
which are not as pronounced between (say) visualizing and audition (Kosslyn
1994: 54–8).
Fourth, eye movements (when, say, visualizing a swinging pendulum) are
remarkably similar to eye movements in the corresponding case of vision
(Deckert 1964, Laeng et al. 2014).
Fifth, there is the phenomenon of eidetic imagery, in which subjects report
their images to be very much like photographs (Haber 1979).
Sixth, vision can be mistaken for visualizing—the Perky effect (Perky 1910,
Segal 1972). Asked to imagine a tomato while staring at a white screen, subjects
will often fail to notice a faint but clearly visible red round image subsequently

7 See Loftus 1996.
projected onto the screen, and will take its size and shape to be features of the
imagined tomato.8
Seventh, visualizing can (arguably) be mistaken for vision (the reverse-Perky
effect). For instance, a cortically blind subject (H.S.) with spared visual imagery
denied she was blind (Anton’s syndrome), which may have “resulted from
a confusion of mental visual images with real percepts” (Goldenberg et al.
1995: 1373):
H.S.: Was given a comb and recognized it from touch . . .
G.G.: Are you really seeing it, or is it only a mental image?
H.S.: I think I am seeing it a little, very weakly . . .
G.G.: What does "weakly" mean?
H.S.: It is vague and . . . somehow farther away, blurred. (1378)9

What explains the similarity between visualizing (taken to cover both visual
recollection and visual imagination) and vision? In the Humean tradition, it is
explained by the similarity of the images or picture-like items immediately
present to the mind. In Hume’s terminology, the “impression” produced in the
mind by a strawberry, and the “idea” produced in the mind when one visually
recalls the strawberry, or visually imagines a strawberry, “differ only in degree,
not in nature.” Specifically, the idea is a “faint image” of the impression (Hume
1740/1978: I.i.1). So Hume endorses:
SIMILAR IMAGE: the images present in visualizing are of the same kind as those
present in vision—albeit degraded and transformed in various ways.
Which he supports by anticipating the Perky effect:
[I]n sleep, in a fever, in madness, or in any very violent emotions of the soul, our ideas
may approach to our impressions, as, on the other hand, it sometimes happens that our
impressions are so faint and low that we cannot distinguish them from our ideas. (I.i.1)

Despite its undeniable attractions, however, Hume’s view faces a serious objec-
tion. If ‘faintness’ is taken literally (and it is unclear how else to take it), Hume’s
view apparently predicts that visualizing a strawberry is more similar to seeing a
strawberry in dim light than it is to seeing a strawberry in sunshine, but that is
surely incorrect.10

8 The Perky effect does not support the implausible proposition that visualizing a tomato is just like seeing a tomato in sub-optimal conditions (see Thomas 2016: supplement: the Perky experiment, and the discussion of Hume immediately below).
9 For the reverse-Perky effect at the level of memory, see Intraub and Hoffman 1992.
10 For discussion of related objections, see McGinn 2004: ch. 1.
The problem is fundamental to Hume's conception of ideas and impressions as
picture-like. The only available models for understanding these alleged entities
are physical pictures: ideas or impressions of a strawberry are much like physical
pictures of a strawberry (at any rate as the vulgar think of physical pictures),
except that they “arise in the soul.” Hence, for any episode of visualizing or
recalling, it should be in principle possible to create a physical picture of a
strawberry such that viewing the picture in certain conditions exactly reproduces
the felt quality of imagining or recalling. And this is what seems wrong: any way
of degrading the picture, such as blurring, desaturating, dimming, and so on, just
yields another perceptual experience, plainly discernable from imagining or
recalling.
In the framework of this book, Hume’s impressions have been traded for
segments of the visual world, or v-facts; more generally, for ostensible segments
of the visual world, or v-propositions. The corresponding replacement for SIMILAR
IMAGE is therefore:
SIMILAR CONTENT: the content of visualizing is of the same kind as the content of vision—albeit degraded and transformed in various ways.
Recall from section 6.2.7 that degrading is logical weakening, and transforming
corresponds to substituting distinct logical equivalents. (Just how the content of
visualizing is degraded and (perhaps) transformed is a difficult and complex issue
that is beyond the scope of this book.)
SIMILAR CONTENT is of course imprecise, but sufficient to evade the objection to
SIMILAR IMAGE just rehearsed. Consider a picture of a bright-scarlet strawberry,
analogous to an “impression” of a strawberry. The objection was, in effect, that
visualizing a strawberry is evidently different from looking at a degraded copy of
the picture. But that does not impugn SIMILAR CONTENT, because a degradation of
visual information need not be the content of a possible visual perceptual
experience—yet it may still be information of a distinctively visual kind.
(A similar point holds for transformations.) For example, it is not possible to
paint a picture that depicts a strawberry as simply red (that is, of no particular
shade, brightness, or saturation)—‘simply red’ is not found in any paint catalog.
On the other hand, “simply red” information (or misinformation), for instance
the proposition that the strawberry is red, can be bought off the shelf. This is a
toy illustration of how the content of perception could be degraded while
remaining distinctively visual.
Since SIMILAR CONTENT has all of the explanatory virtues of SIMILAR IMAGE, while
evading its most serious drawback, this adds up to a powerful case. The content of
visualizing is distinct from, but similar to, the content of vision. So to the visual
world we may add its ethereal counterpart, the visualized world, the totality of
visualized facts, or v--facts.11
8.2.2 Episodic recollection and transparency
Vision reveals the visual world; episodic memory reveals the visualized world.
There is scarcely daylight between seeing an object in the visual world that one
knows to be a duck, and knowing that one sees a duck. Hence the appeal of the
transparency approach: one can come to know that one sees a duck by a simple
inference from a premise about the visual world. Roughly: the visual world
contains a duck, so I see a duck. More precisely, and as defended in Chapter 6,
one can come to know that one sees a duck by following this rule:
SEE-DUCK If [ . . . x . . . ]V and x is a duck, believe that you see a duck.
Episodic recollection cries out for a parallel treatment. In recollecting Donald the
duck, I have a “memory image” of Donald floating on a pond, dabbling among
the reeds. There is scarcely daylight between activating this knowledge about the
visualized world as it was and knowing that I am recollecting a duck, or
recollecting duck dabbling. Hence the appeal of the transparency approach:
I can come to know that I am recollecting a duck by an inference from the
presence of a duck in the visualized world as it was. How can this be made more
precise?

8.2.3 Knowing that I am recollecting, first pass


A simple idea is to modify SEE-DUCK by swapping out the visual world for the
visualized world. Letting ‘« . . . x . . . »v’ schematically express a visualized fact, or
v--fact, a degradation and transformation of a v-fact, the suggestion is that I can
come to know that I am recollecting a duck by following this rule:
MEM-DUCKi If « . . . x . . . »V and x is a duck, believe that you are recollecting a
duck.
However, this suggestion has a number of instructive defects that show that MEM-
DUCKi is not my route to concluding that I am recollecting a duck. (Whether
MEM-DUCKi could supply knowledge is therefore moot.)
Suppose I am back at the very pond where I first saw Donald. All the ducks
have now left, and frogs are the only visible animals. Staring at the duckless pond,

11 'Visualized fact' does have a misleading connotation. Just as visual facts may lie there quietly and placidly, without being detected by vision, visualized facts need not be recorded in memory or be the content of any visualizing.
I may recall how Donald dabbled, among those very reeds at the far bank. My
“memory image” of Donald is somehow “superimposed” on the scene before my
eyes. The image of Donald is spatially located with respect to the things I see—it
is to the left of that branch, and above the lily-pads. The image is also temporally
located with respect to the things I see—it disappears under the surface of the
pond just after that frog hops off a lily-pad. In other words, the proposition that
« . . . x . . . »V concerns the present time, not the past occasion when I witnessed
Donald dabbling.12 The perennially tempting metaphor of (visual) episodic
recollection as replaying a movie “in the head” or “before the mind’s eye” is
thus doubly apt. As noted earlier, it is apt because there is something genuinely
visual about visual recollection, something significant that it has in common with
visual perception. But it is also apt because one’s “memory images” are (or appear to
be) present, just as the replayed movie is present. And because the movie is being
played now, the events depicted appear to be occurring now—the Hindenburg is
exploding in flames, even though the recorded events happened in 1937. As Tulving
says (quoted in section 8.2), episodic memory allows “one to re-experience . . . one’s
own previous experiences.”
As discussed in section 8.2.2, the visual aspect of visual recollection is
accounted for in terms of information: the degraded and transformed propos-
ition that « . . . x . . . »V still bears distinctive marks of the visual world. It is the
second aspect of recollection that poses a problem for MEM-DUCKi. Intuitively,
I come to know that I am recollecting by interrogating the visualized world as it
was, by activating my knowledge of what happened. But the proposition that
« . . . x . . . »V concerns the present, not the past.
This leads straight to another problem. Note that the proposition that
« . . . x . . . »V is not a candidate for knowledge: it is not a v--fact, merely a false
v--proposition. It is only true (we may suppose) if something is now disappear-
ing under the surface of the pond, but (we may further suppose) nothing is. So
since following MEM-DUCKi requires me to know that « . . . x . . . »V, I don’t follow
it. I must, then, try to follow MEM-DUCKi. (Recall from section 5.2.5 that
one tries to follow the rule ‘If C obtains, believe that p’ just in case one
believes that p because one believes that C obtains.) But trying to follow
MEM-DUCKi requires (as does following it) that I believe that « . . . x . . . »V. And
that apparently amounts to belief in a spectral world of “memory images”—the

12 Often episodic recollection is best accomplished with one's eyes closed, which avoids interference effects with vision. But in this sort of case one's images also seem to be (at least typically) located in ordinary egocentrically specified space—up, to the left, and so on; they also seem to be temporally located with respect to (say) sounds one hears.
Donald-image seems to move beneath the pond’s surface, yet does not interact
with ordinary matter.
Maybe this problem isn’t so serious. After all, even some sophisticated philo-
sophers have succumbed to the temptation to think that visualizing involves
awareness of “images,” so the idea that I have (less clearly) succumbed should be
taken seriously. Suppose, then, that I do believe that « . . . x . . . »V. What is this
transcendental object = x? It seems to be a queer bird indeed—a duck-wraith,
or perhaps a “mental” duck-picture, whatever that may be. For Hume, these natural
descriptions are not so misleading. But if Hume’s ideas and impressions are replaced
by visual (mis-)information (namely, v--propositions and v-propositions), what is
x? There is only one candidate—it is none other than Donald himself! When, staring
at the duckless pond, I visually recall Donald dabbling among the reeds, the object
that appears to be disappearing under the surface of the pond is Donald, not
a shadowy representative of him. That is, when I “recollect” that « . . . x . . . »V,
x = Donald. So far, this is all to the good—it would sink the transparency approach
if recollecting Donald delivered up awareness of anything less than Donald. Now,
to (try to) follow MEM-DUCKi, I have to believe, not just that « . . . x . . . »V, but also
that x is a duck. And x is indeed a duck—no problem there. However, the naive
thought is that x is a duck-wraith, not a duck. Assuming I believe that « . . . x . . . »V,
I believe that x is disappearing under the surface of the pond without producing any
ripples. My naiveté only extends so far—I do not believe that any flesh-and-blood
duck is doing that!
To summarize: there are three problems with MEM-DUCKi. First, its antecedent
exclusively concerns the present, and so the rule is not an appropriate expression
of the transparency idea. Second, trying to follow the rule has the implausible
requirement that one believes that « . . . x . . . »V. And third, trying to follow the
rule has the even more implausible requirement that one believes that x is a duck.
Let us address these problems in turn.
8.2.4 First problem: putting the past into the antecedent
Given that visual recollection and visual imagination both co-opt the visual
system, which delivers information about one’s present environment, it is no
great surprise that visual imagery is also located in the present. But then its utility
in episodic recollection is puzzling, quite independently of any issues to do with
self-knowledge. What does visualizing Donald dabbling have to do with recol-
lecting how Donald dabbled? What work is the imagistic aspect of visual recol-
lection doing if it concerns the present?
The metaphor of “replaying a movie” suggests an answer. Suppose I am trying
to recollect what happened during my previous visit to the pond. I remember that
Donald was there, and that I took a movie of him. Searching my laptop brings up
a video file labeled ‘Donald.mov.’ The movie is of very poor quality, but there are
enough cues to activate a much more detailed recollection of Donald’s dabblings.
Although the movie does not depict Donald as having dabbled, it can help me
recall how he did. A useful aide-mémoire need not contain any explicit hint of
the past.
Although one cannot literally play a movie in the head, visualizing Donald
dabbling will achieve a similar effect. Of course I am only able to visualize Donald
dabbling because I remember Donald dabbling, but that does not mean that the
process is no help at all, like (to borrow Wittgenstein’s example) buying a second
copy of the newspaper to check that a report in the first one is true.13 In the case
of playing Donald.mov, I am only able to find the file because I remember that
Donald was at the pond, but that is clearly consistent with the movie activating
more memory-knowledge of Donald’s doings. To make the analogy with episodic
recollection closer, we can imagine that playing Donald.mov activates knowledge
that a second camera was recording at the scene from a different position; a
search uncovers a second file, Donald-2.mov. Playing that activates yet more
knowledge of the past scene, including the presence of a third camera, and so on.
The process of episodic memory is often a similar virtuous feedback loop, leading
to new and improved imagery, and thence to better retrieval, as in the famous
“madeleine” passage from Remembrance of Things Past.14
We are now in a position to solve the first problem. What MEM-DUCKi leaves
out are recollected facts about the past that are prompted by one’s present
imagery of Donald’s dabblings. Although it is not true that « . . . x . . . »V, it is
true that it was the case that « . . . x . . . »V. Hence it is true that it was the case that
∃α « . . . α . . . »V—put more intuitively, something was that way (demonstrating
the ostensible visualized scene). (For simplicity, assume that x is the only object in
the visualized scene.) So if we waive the second problem, and ignore the third by
scaling back our ambitions, instead merely trying to explain how one knows that
one is recollecting something or other, we have reached this revised rule:
MEM-___ii If « . . . x . . . »V and something was thatx way, believe that you are
recollecting.
Here ‘thatx way’ picks out the way that the proposition that « . . . x . . . »V charac-
terizes x—dabbling among the reeds, say. MEM-___ii puts the past in the

13 Wittgenstein 1958: §265.
14 For the connections between the passage and experimental work, see Jellinek 2004.
antecedent, with the desired result that in trying to follow the rule, one is
interrogating the visualized world as it was.
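In bare schematic terms, writing 'Φ(x)' for the proposition that « . . . x . . . »V and 'P' for 'it was the case that' (abbreviations adopted only for this gloss), the antecedent of MEM-___ii is, roughly:
Φ(x) ∧ P ∃α Φ(α).
The first conjunct concerns the present ostensible visualized scene; the second concerns the past, which is just what putting the past into the antecedent required.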
8.2.5 Second and third problems: belief in images, but not ducks
Section 6.2.10 made the case for belief-dependence, that vision constitutively
involves belief in the relevant v-proposition. To briefly recap: (1) belief-
dependence explains the sense in which perception compels belief, and why
belief and perception have the same direction of fit; (2) it fits nicely with the
plausible idea that some animals with visual systems very similar to ours cannot
cognitively override visual illusions; (3) it explains the appeal of the sense datum
theory; and (4) it involves a psychological mechanism that is arguably present
anyway, in cases of delusion.
If perception constitutively involves belief, then given the close kinship
between perception and imagery, it is only to be expected that imagery does
too. And if that’s right, what would we take the objects of the visualized world to
be? They are somehow like the familiar denizens of the visual world, ducks, frogs,
lily-pads, and so forth; yet they are somehow unlike them. They are shadowy,
insubstantial, ghostly—in that respect they are like faint pictures of the real thing.
Unlike ducks and the rest, they appear (at least sometimes) to be creatures of the
will—another point of similarity with pictures. In fact, they are perfectly ordinary
ducks and frogs, enveloped in the fog of memory. It would thus be understand-
able to mistake them for (something like) images of ducks and frogs, and that is
exactly what we find. And not just among the naive and the occasional sophis-
ticated philosopher—some psychologists who work on mental imagery clearly
think that in visualizing one is aware of picture-like entities.15
The thesis that perception constitutively involves belief stands or falls with the
thesis that imagery constitutively involves belief. The direction of argument has
been from the first thesis to the second, but since it is independently quite
plausible that people generally harbor harmless delusions about a shadowy
world of images, one could also argue in the reverse direction.
Finally, we need to restore the duck. And MEM-___ii suggests how to do it
without committing me to believing that x is a duck. I may not believe that, but if
my memory is in good order, I do know something stronger than that something

15 See, for example, Kosslyn et al. 2006: 48–9. Kosslyn is on the "pictorial" side of the "imagery debate" (see Block 1983, Tye 1991), but that is about whether the representations underlying imagery are (somehow) more like pictures than sentences. It is not a debate about whether we are aware of pictures when imagining.
or other dabbled in such-and-such manner. Namely, I know that a duck was—
demonstrating the ostensible visualized scene—that way.
8.2.6 Second pass: MEM-DUCK
Putting the pieces of the last two sections together, we have arrived at this rule:
MEM-DUCK If « . . . x . . . »V and a duck was thatx way, believe that you are
recollecting a duck.
Even converted to a schematic rule by replacing ‘duck’ with a schematic letter,
this of course does not cover every case, for instance my knowledge that I am
recollecting an object doing such-and-such, for instance recollecting a duck
dabbling. But an extension is straightforward:
MEM-DUCK-DAB If « . . . x . . . »V and a duck dabbling was thatx way, believe
that you are recollecting a duck dabbling.
However, for simplicity let us concentrate on MEM-DUCK. As with SEE-DUCK, this
rule offers an explanation of peculiar access. I cannot find out that you see a duck
by following a third-person version of SEE-DUCK; likewise, I cannot find out that
you are recollecting a duck by trying to follow a third-person version of MEM-
DUCK.
What about privileged access? As with access to our states of seeing, access to
our rememberings is not especially noteworthy. One can easily be mistaken about
what one sees (or even whether one sees anything at all); likewise, one can easily
be mistaken about what one is recollecting (or even whether one is recollecting
anything at all). Still, will trying to follow MEM-DUCK tend to yield knowledge of
the consequent, at least in favorable circumstances? If I believe that
« . . . x . . . »V and a duck was thatx way, and this belief is inferentially active,
then—at least in a central class of cases where my memory is functioning
well—this will be because an episodic memory of an encounter with a duck has
been brought to mind. Hence, if I conclude from this premise that I am recol-
lecting a duck, my conclusion will be true. Like SEE, MEM-DUCK is practically self-
verifying. My belief in the conclusion is reliably based, and arguably safe, so there
is no evident barrier to taking it to be knowledge.
Interestingly, for MEM-DUCK to yield knowledge, it is not necessary that I know
that a duck was thatx way. Suppose I did see Donald dabbling, and remembered
the encounter. However, over time my memory has become distorted—when
visualizing Donald’s doings he appears on the left bank and sporting some blue
feathers, but in fact he was on the right and a dull brown. It is false that a duck was
thatx way, although true that I am recollecting a duck. My error notwithstanding,
my belief that I am recollecting a duck may be as reliably based as it is in a wholly
veridical case. (For similar reasons, I can know that I see a duck even though the
duck illusorily looks to be on the left and to have blue feathers.)

8.3 Imagination and IMAG-DUCK


As just argued, I can know that I am recollecting a duck by trying to follow MEM-
DUCK; amending the consequent to ‘believe that you are imagining a duck’ also
provides a way of knowing that I am imagining a duck. (In recalling Donald I am
also imagining him.) But sometimes I simply know that I am imagining a
duck while remaining neutral on whether I am also recollecting a duck. How
do I know that?
Intuitively, the image provides the crucial clue. If I am ostensibly aware of a
purple-kangaroo image, then I am not imagining a duck.16 On the other hand, if
I am ostensibly aware of a duck-image then presumably I am imagining a duck.
Informally: if there’s a duck-image in the visualized world, then I am imagining a
duck. More precisely:
IMAG-DUCKi If « . . . x . . . »V and x is a duck-image, believe that you are imagin-
ing a duck.
What is a duck-image? One natural answer is that it is something with (a suitable
selection of) the visible features of (paradigmatic) ducks, something with a
distinctive duck-look, or duck-gestalt, as filtered through the degrading and
transforming lens of memory. Some mental images appear to have that duck-
gestalt; those are the duck-images. On that understanding of ‘duck-image,’ duck-
images are the same as realistic-decoy-duck-images, or realistic-toy-duck-images,
and so on. IMAG-DUCKi then runs into the problem that if I believe that
« . . . x . . . »V and that x is a duck-image, it is entirely open whether I am imagining
a duck, a decoy duck, a toy duck, or something else with the visible properties of
ducks. (Adding that I know that x is a duck-image clearly doesn’t help.) And that
is to say IMAG-DUCKi is not knowledge-conducive—the rule is not a good one.
Could an appeal to the will help? If I know I intend to imagine a duck now,
then the presence of a duck-image in the present visualized world is presumably a
good indication that I have succeeded. Of course this would require a transparent
account of how I know my intentions, but that was given in Chapter 7. However,

16 Well, not quite. If 'imagining a duck' is understood de re, as 'imagining an object x, which is in fact a duck,' and if I am improbably acquainted with a peculiar duck that looks like a purple kangaroo, then I may well have succeeded in imagining a duck.
this can only be a partial solution, since not all imagery is voluntary, and one can
know what one imagines even in cases where images come unbidden. (At any
rate, one certainly has an opinion—even if that doesn’t amount to knowledge, we
still need an account of how it arises.)
We need another interpretation of ‘duck-image,’ one that makes duck-images
distinct from decoy-duck-images. A picture of a duck is not thereby a picture of a
decoy duck, which suggests that we should read ‘duck-image’ as ‘image that
depicts a duck,’ or something along those lines. But now one might worry that
this is getting a little too sophisticated. The whole project of this book is to
explain how we have self-knowledge, not how we could get it if we possessed
fancy concepts from theories of pictorial representation.
Fortunately the resources needed are conspicuously ordinary. If someone is
shown a picture of a duck and asked ‘What is it?’, the typical reply will be ‘That’s a
duck.’ Very young children will give this answer. Clearly there is no confusion
between the duck-picture and any duck, even on the part of children. As Kripke
observed, the demonstrative refers to the depicted duck, not to the picture itself—
one wouldn’t say ‘That’s a duck and made of paper and ink.’17 We can use this
common and familiar reading of ‘That’s a duck,’ and put an improved version of
IMAG-DUCKi thus:
IMAG-DUCK If « . . . x . . . »V and thatx is a duck, believe that you are imagining
a duck.
Here ‘thatx’ picks out whatever is depicted by x, just as the demonstrative in ‘That
is a duck’ (pointing to a picture) picks out whatever is depicted by the picture.
Imagining a duck, whether voluntarily or involuntarily, involves the activation
of duck-memories of one sort or another (perhaps just memories gained by
reading about ducks). Similarly, imagining decoy ducks involves the activation
of decoy-duck memories (perhaps just memories gained by reading about ducks
and reading about animal decoys). If I spontaneously believe that thatx is a duck
when ostensibly confronted by the visualized fact that « . . . x . . . » V, presumably
duck-memories were more likely to have underwritten my visualizing than
decoy-duck memories, with mechanisms of association producing the belief
that thatx is a duck. Plausibly, following IMAG-DUCK (or trying to) is
knowledge-conducive.
What about knowing that I am merely imagining a duck, with no accompany-
ing recollection?

17 See Kripke 2011: 346.
According to Hume:
A man may indulge his fancy in feigning any past scene of adventures; nor wou’d there be
any possibility of distinguishing this from a remembrance of a like kind, were not the ideas
of the imagination fainter and more obscure. (1740/1978: I.iii.5; see also Appendix, 627–8)

Thus, if I am aware of an image of a duck dabbling that is "somewhat
intermediate betwixt an impression and an idea” (I.i.3), then probably I am
recollecting a duck, or at least seeming to. On the other hand, if I am aware of
an image of a duck dabbling that is more toward the faint and obscure end of
the spectrum, probably I am merely imagining a duck. Even granted Hume’s
apparatus of ideas and impressions, this is not very plausible (Holland 1954:
465–6). In the present information-based framework, the counterpart of
Hume’s suggestion is that memory and mere imagination may be distinguished
(that is, told apart) by their content. But, as section 8.2.2 in effect observed, this
is equally implausible.
Sometimes mere imagination is easy to detect. If I know that I am imagining a
griffin, or a green animal with the head of a unicorn and the body of a pig, then it
is a short step from this to knowing that I am merely imagining such things, since
I know that there are no such fabulous animals to remember. (No doubt
imagining a griffin involves memories of pictures of griffins, but remembering
a griffin-picture is not to remember a griffin.) But in many cases it is quite
difficult. Suppose I know I am imagining a duck. Memories concerning ducks are
of course involved here, but I might reasonably conjecture that there is no
particular duck of my acquaintance that I am imagining. Who knows, though?
Perhaps I am actually imagining Donald (and so also recollecting him). The fact
that the imagined duck is wearing shorts and a hat—which Donald, being a
perfectly ordinary duck, wouldn’t be caught dead in—is not probative, since one
can readily imagine an object to be a way it isn’t.
There is a (typically) more significant class of cases, where the issue is whether
I am merely imagining an object doing such-and-such, or being some way—for
example, merely imagining Donald dabbling, or being brown, on that occasion
back at the pond. Let’s grant that it’s Donald I am recollecting—am I recollecting
Donald dabbling or merely imagining it? Sometimes this sort of question can
have life-changing consequences, as when I wonder whether I am really recol-
lecting my kindergarten teacher conducting a satanic ritual.
To get to a plausible candidate for the relevant rule, let us first amend IMAG-
DUCK so it covers imagining a duck dabbling:
IMAG-DUCK-DAB If « . . . x . . . »V and thatx is a duck dabbling, believe that you
are imagining a duck dabbling.
Then we can amend this rule to cover merely imagining a duck dabbling by
adding to the antecedent that the visualized occurrences didn’t happen to a duck
dabbling, redeploying some material from MEM-DUCK-DAB (section 8.2.6):
MERE-IMAG-DUCK-DAB If « . . . x . . . »V and thatx is a duck dabbling and a duck
dabbling wasn’t thatx way, believe that you are merely imagining a duck dabbling.

8.4 Thought
We often know that we are thinking, and what we are thinking about. Here
‘thinking’ is not supposed to be an umbrella term for cognition in general, but
should be taken in roughly the sense of ‘a penny for your thoughts’: mental
activities like pondering, ruminating, wondering, musing, and daydreaming all
count as thinking. In the intended sense of ‘thinking,’ thinking is not just
propositional: in addition to thinking that p, there is thinking of (or about) o.
Belief is necessary but not sufficient for thinking that p: thinking that p entails
believing that p, but not conversely.18
A particular example of thinking will be useful:
On summer afternoons in Canberra, the baking sun reflects off Lake Burley Griffin, and
the water shimmers. Up behind the university, in the botanical gardens, a cascading
stream of water helps to maintain the humidity of the rainforest gully. These are just a
couple of Kylie’s thoughts on the subject of water, her water thoughts. Amongst Kylie’s
many other thoughts that involve the concept of water are these: that there is water in the
lake, that trees die without water, that water is a liquid and, of course, that water is wet.
When Kylie thinks consciously, in a way that occupies her attention, she is able to know
what it is that she is thinking. This is true for thoughts about water, as for any other
thoughts. (Davies 2000a: 384–5)

How does Kylie know that she is thinking about water? Much of our thinking
“occurs in inner speech,” or in what Ryle calls an “internal monologue or silent
soliloquy” (1949: 28). In some sense—yet to be explained—one sometimes hears
oneself thinking, with “the mind’s ear.” A natural idea, then, is that Kylie knows
that she is thinking about water, and that she is thinking that trees die without
water, because she eavesdrops on herself uttering, in the silent Cartesian theater,
‘Trees die without water.’ That might not seem a promising starting point for a
transparency account, but let us suspend disbelief for the moment.19 (Thinking
about o is the subject of the next two sections; thinking that p will be treated in
section 8.4.3.)

18 In the simple present and past tenses (e.g.), 'think that p' is near-enough synonymous with 'believe that p,' as in 'I think/thought that the pub is/was open.'
19 Indeed, one might not even think this is a promising starting point at all (e.g., Pitt 2004).
8.4.1 Outer and inner speech, and THINK


It is natural to take inner speech to be a kind of speech. But then the notion seems
positively paradoxical. When one engages in inner speech, there are no sounds in
one’s head—hence Ryle’s “silent soliloquy.” But, if there really is inner speech,
there are sequences of phonemes, and so sounds. Hence there is no such thing as
inner speech. For similar reasons, there is no “inner picture” of a purple kangaroo
that one is aware of when visualizing a purple kangaroo (see section 8.2.5).
Still, although there is literally no inner speech, speech is (phonologically)
represented. When Kylie “hears” her “internal monologue,” she is in a quasi-
perceptual state that represents an utterance of the sentence ‘Trees die without
water.’ Similarly, when Kylie “sees” an “inner picture” of a purple kangaroo, she is
in a quasi-perceptual state that represents the presence of something purple with
the characteristic kangaroo-look, or kangaroo-gestalt (ignoring a refinement
suggested in section 8.3). A lot of the experimental evidence concerns the
corresponding thesis for visual imagery: visual imagery involves representing
an arrangement of objects, akin to the way in which an arrangement of objects is
represented when one sees the scene before one’s eyes. Since the visual thesis
lends plausibility to the inner-speech thesis, the latter can be indirectly supported
by citing the evidence for the former. It would be unwise to rest the case entirely
on this parallel; fortunately, direct evidence is available too.20
If the content of visualizing is a degraded and transformed version of the
content of vision, then we may fairly conjecture that the same goes for the
content of auditory (more specifically, phonological) imagery. When Kylie
hears her internal monologue, the content of her phonological imagery is a
degraded and transformed version of her auditory experience of “outer” speech.
So to our collection of worlds we may add the world of inner speech, the totality of
s--facts, degraded and transformed versions of speech facts (s-facts)—those audi-
tory facts that concern speech. Corresponding to an s--fact there is an s--event, the
episode of inner speech characterized by the s--fact: think of the two as related
like the fact that Edmund Hillary climbed Everest and the event of Edmund
Hillary’s climb of Everest. Since we will only be interested in the episodes of inner
speech characterized by s--facts, there is no need for the previous schematic
singular term ‘x,’ but an explicit schematic marker (‘e’) for the corresponding
episode will make for easier reading. Thus we shall schematically write an s--fact
as ‘« . . . . . . »eS’; more generally, ‘« . . . . . . »eS’ expresses an s--proposition. When the

20 For a summary, see MacKay 1992. For neural overlap between audition and musical imagery, see Zatorre and Halpern 2005. For the auditory Perky effect, see Okada and Matsuoka 1992.
proposition « . . . . . . »eS is true, it is an s-fact, and there is such a thing as the
corresponding episode.
However, this true case has no application. When Kylie visualizes a purple
kangaroo, the content of her visualizing, the proposition that « . . . x . . . »V, is not
true. Likewise when she hears 'Trees die without water' in her silent soliloquy.
All is silent, so the content of her phonological imagery, the proposition that
« . . . . . . »eS, is not true either. There is no such corresponding episode, any more
than there is such an event of Hillary Clinton’s climb of Everest.
When Kylie “hears” herself say ‘Trees die without water,’ she has not produced
any sounds for her to hear. She seems to hear her inner voice say ‘Trees die
without water,’ but this is a hallucination. Still, it seems plausible that this
appearance of an inner monologue enables Kylie to know that she is thinking
about water. Why should this be so?
Ryle noted that an important source of information about others is provided
by their “unstudied talk,” utterances that are “spontaneous, frank, and unpre-
pared” (1949: 173). Chatting with Kylie over a few beers is the best way of
discovering what she believes, wants, and intends. “Studied” talk, on the other
hand, is not so revealing. If Kylie is a politician defending the federal govern-
ment’s policies on water, she might assert that the water shortage will soon be
over without believing it will be.
However, in the umbrella sense of ‘thinking about o’ with which we are
concerned, both unstudied and studied talk provide excellent evidence about
the utterer’s thoughts. Even if the Hon. Kylie, MP, doesn’t believe that the water
shortage will soon be over, she was presumably thinking about water. Outer
speech on such-and-such topic is almost invariably produced by mental activity
about that same topic.21
If someone outwardly utters ‘The water shortage will soon be over’ then
(usually) she says something, namely that the water shortage will soon be over.
She says—and so thinks—something about water. Does the same point hold for
inner speech? An affirmative answer is not trivial, because the production of
inner speech, unlike the production of outer speech, might have nothing to do
with the semantics of the words. Perhaps an inward utterance of ‘The water
shortage will soon be over’ is produced in a similar semantics-insensitive manner to
an inward utterance of ‘Dum diddley,’ or some other meaningless string.
But this possibility can be dismissed by noting that outer speech and inner
speech often perform the same function, moreover one for which the semantics

21 Not always: an actor might say ‘To be, or not to be: that is the question,’ without thinking (in any robust sense) about suicide. The same point holds for inner speech, which the actor might use to rehearse his lines.
of the words is crucial. One may cajole or encourage oneself out loud; one may
also do so silently. Inner speech and outer speech may be seamlessly interleaved
in a conversation. One may recite a shopping list out loud to preserve it in
working memory; silent recitation will do just as well.22
By “hearing” her inner voice say ‘Trees die without water,’ then, Kylie can
know that she is thinking about water. As Ryle puts it:
We eavesdrop on our own voiced utterances and our own silent monologues. In noticing
these we are preparing ourselves to do something new, namely to describe the frames of
mind which these utterances disclose. (1949: 176)

When Kylie utters (out loud) ‘Trees die without water,’ she is aware of a current
acoustic episode, the utterance of that sentence. She is not aware of a mental
episode of thinking about water, although she can thereby become aware that she
is thinking about water.
When Kylie utters in silent soliloquy ‘Trees die without water,’ the story is
exactly the same, except that she is not aware of a current acoustic episode either.
And even if there were an “inner utterance,” occurring in some ethereal medium,
it would still be wrong to say that Kylie is aware of any mental episode. The outer
utterance is not itself an episode of thinking, but something produced by such an
episode; likewise, if there were (per impossibile) an inner utterance, it wouldn’t be
an episode of thinking either.
A rule now needs to be extracted from these points. Let us start with outer
speech. Using her ears, Kylie comes to know a certain speech fact: following our
notational convention, the fact that [ . . . . . . ]eS. Letting ‘thate’ refer to the speech
episode characterized by this fact, she may also know that she uttered thate, and
(using her ability to understand speech) that thate is about o. (‘About o’ should be
read in the ordinary informal sense, as in ‘Kylie is talking about the water
shortage.’) That is, one route to what one is thinking is the rule:
THINK-ALOUD If [ . . . . . . ]eS and you uttered thate, and thate is about o, believe
that you are thinking about o.
Since the case of silent soliloquy is parallel, the “inner” route to what one is
thinking should apparently be this:
THINKi If « . . . . . . »eS and you uttered thate, and thate is about o, believe that
you are thinking about o.

22 On one standard model of working memory, this involves the so-called “inner ear,” a short-term memory store for phonological information (Baddeley 1986: ch. 5).
However, THINKi is not quite in the spirit of previous rules (for instance,
IMAG-DUCK in section 8.3), and this poses a problem. The subject herself
figures in the antecedent: to try to follow THINKi, Kylie has to believe that she
uttered thate. And where does that belief come from?
To address this problem, return to outer speech. If Bruce says ‘Trees die
without water’ to Kylie, she may well conclude that Bruce is thinking about water,
but not that she is. Hence—one might think—the need for ‘you uttered thate’ in
the antecedent of THINK-ALOUD. With that deleted, the rule would be no good at all.
How does Kylie know that she is saying out loud ‘Trees die without water’?
Perhaps her jaw and laryngeal movements tip her off. But this just makes the
problem for inner speech more acute: inner speech does not require moving one’s
jaw or larynx. In fact, though, the audible properties of her outer speech will do it.
This is not because Kylie has a unique Australian-Albanian accent, shared with
no one else, but because one’s own speech (when one is currently producing it)
sounds quite different from others’ speech. One’s current speech is heard partly
through bone and partly through air, which gives it a distinctive audible quality.
This is why people are surprised (and sometimes horrified) when they first hear a
recording of their own voice. Without the bone conduction, one’s voice sounds
like the voice of another. Kylie could rely on this audible distinguishing mark of
her current speech, and dispense with ‘you uttered thate’ entirely.
Inner speech is even better off in this regard: one cannot “hear” the inner
speech of another at all. As Ryle says, “I cannot overhear your silent colloquies
with yourself” (1949: 176). Put another way, one only has (apparent) access to
one’s own world of inner speech. ‘You uttered thate’ is accordingly redundant,
and we can replace THINKi with:
THINK If « . . . . . . »eS and thate is about o, believe that you are thinking about o.
If one follows THINK, one recognizes, hence knows, that the inner voice speaks
about o. Since there is no inner voice, there is no such knowledge to be had, and
one cannot follow THINK. In other words, the antecedent of this rule is always
false. However, trying to follow THINK will do.
8.4.2 Privileged and peculiar access
The supposition that we try to follow THINK can explain privileged and peculiar
access. We have already seen the explanation of the latter: to repeat Ryle, I cannot
overhear your silent colloquies with yourself.
What about privileged access? Consider the rule:
THINKK If [ . . . . . . ]eS and thate is uttered by Kylie, and is about o, believe that
Kylie is thinking about o.
This is a good rule, as Ryle in effect observed. Suppose one follows THINKK on a
particular occasion: Kylie utters ‘Trees die without water’; one recognizes (hence
knows) that Kylie’s utterance is about water, and thereby concludes that Kylie is
thinking about water. At least in a typical case, one’s belief that Kylie is thinking
about water will then be true and amount to knowledge.
If one adopts the policy of following THINKK, one will not always succeed. In
particular, sometimes one will merely try to follow it. For instance, perhaps one
misidentifies the speaker as Kylie, or mishears her; the resulting belief about
Kylie’s thoughts will not then be knowledge. The policy of following THINKK
might well not produce, in perfectly ordinary situations, knowledge of what Kylie
thinks.
For comparison, suppose that Kylie adopts the policy of following THINK. If (as
Kylie would put it) she silently utters ‘Trees die without water,’ Kylie thereby
concludes that she is thinking about water. Kylie’s policy can never succeed: her
beliefs about her inner voice’s pronouncements are always false.
Although Kylie can never follow THINK, but only try to follow it, for the
purposes of attaining self-knowledge that doesn’t matter. If she tries to follow
THINK, and infers that she is thinking about o from the premises that « . . . . . . »eS
and that thate is about o, we may reasonably conjecture that she inwardly uttered
a sentence that was about o—after all, what else could the explanation be? In
which case she will be (almost invariably) thinking about o, and her conclusion
will be true.
THINK, like all the rules in this book apart from our old friend BEL, is not self-
verifying, still less strongly self-verifying. But like (for example) DES, it is strongly
practically self-verifying. The beliefs produced by trying to follow THINK are not absolutely
guaranteed to be knowledge. But they are very likely to be; much more so than the
beliefs produced by following third-person rules like THINKK. That kind of
epistemic access to our thoughts may be privileged enough.23

23 A notoriously puzzling symptom of schizophrenia is “thought insertion.” Patients claim that certain thoughts are not their own, despite being (as they sometimes say) “in their minds.” The present view of the epistemology of thought suggests that such patients “hear” their inner voice speak about o, but do not (try to) follow THINK and conclude that they are thinking about o. Instead, they conclude (paradoxically) that although there is a thought about o in their minds, they are not the thinker of that thought. If that is right, then the central question is why they do not (try to) follow THINK.
Thought insertion is too complicated to properly discuss here, but the transparency account does fit nicely with theories of thought insertion in which patients attribute their own inner speech to an external agent. See, for example, Jones and Fernyhough 2007, and (for an even better fit) Langland-Hassan 2008.
8.4.3 Extensions: pictorial and propositional thinking


Sometimes our thinking “occurs in pictures” (as we say), not in words. One
external sign of thought is speech; another external sign is drawing-cum-
doodling. If Kylie is idly cartooning a duck on her legal pad, that’s a good
indication that she is thinking about a duck (or ducks). The inner analog of
drawing is (visual) imaging. Although this analogy is clearly not as close as that
between inner and outer speech, it is good enough to make the point that imagery
should be added to inner speech as another sign of thought. Imagery on the topic
of ducks is produced by mental activity about that same topic, which will often
fall under the (vague) rubric of ‘thinking.’ So the natural basic rule to cover the
case of “pictorial thinking” has the same antecedent as IMAG-DUCK (section 8.3):

THINK-IMAG-DUCK If « . . . x . . . »V and thatx is a duck, believe you are thinking about a duck.

Sometimes one will not try to follow THINK-IMAG-DUCK and try to follow IMAG-
DUCK instead. Exactly when one will conclude that one is having duck-imagery as
opposed to concluding that one is having duck-thoughts is no doubt a messy
issue that varies from person to person. Although an investigation of this would
be too speculative to be worthwhile, we can at least say that if “thinking about a
duck” requires some reasonably sustained mental activity on the topic of ducks,
THINK-IMAG-DUCK is not as good as IMAG-DUCK. Like a duck-caricature dashed
off in a few seconds, a flicker of a duck-image doesn’t offer much insight into
one’s thoughts. Other cues are doubtless relevant, and here it is worth re-
emphasizing that the transparency approach can and should appeal to Ryleanism
when necessary.
Knowledge that one is thinking that p—that trees die without water, say—can
be accommodated by combining THINK with BEL:

THINK-THAT If « . . . . . . »eS and thate means that p, and p, believe that you are
thinking that p.

There is a puzzle about thinking of o that the present account resolves. We often
say we are thinking of o without being able to come up with any property that the
thought predicates of o. That is arguably a unique feature that distinguishes
thought from belief, desire, and intention.
Consider an example. One is thinking of Barry Humphries, say. Suppose one
knows this because one ostensibly discerns a visual image of Humphries.
Although one might not be thinking that Humphries is F, there is a reportable
predicational component to the thought—Humphries is imagined dressed as
Dame Edna, say, or as having black hair. But sometimes one can think of
Humphries with no apparent predicational component at all—one is simply
“thinking of Humphries” (cf. ‘Think of a number’). The explanation is that one
may simply inwardly utter the name ‘Barry Humphries’ (or—in the case of
“thinking of a number”—the numeral ‘7’).
8.4.4 Inner speech and imagined speech
So far we have loosely equated inner speech with auditory verbal imagery.
However, this elides a genuine distinction (cf. Alderson-Day and Fernyhough
2015: 954). One may recall someone speaking, for instance Winston Churchill
saying ‘We will fight on the beaches.’ Whimsically, one may auditorily imagine
Churchill saying (in Churchillian tones) ‘We’re the best kind of Sneetch on the
beaches’; one may also auditorily imagine oneself saying that line (in one’s own
voice). These are cases of auditory verbal imagery, but are intuitively not inner
speech. More to the present point, recalling Churchill saying ‘We will fight on the
beaches,’ or imagining Churchill saying ‘We’re the best kind of Sneetch on the
beaches’ will not thereby incline one to conclude that one is thinking about
fighting on beaches, or about Sneetches. (If anything, Churchill is a more
plausible topic.24) But if the (ostensible) world of inner speech contains these
utterances, why doesn’t one follow THINK?
Fortunately there are reasons to deny the antecedent. In other words, the
(ostensible) world of inner speech and—as we can put it—the (ostensible)
world of imagined speech are largely disjoint, at least as we actually encounter
them. (A small region of overlap can be tolerated.) Inner speech sounds different
from imagined speech, in a number of respects. As Hurlburt et al. put it (using
‘inner hearing’ for imagined speech25):
Most subjects, when aided by an iterative procedure that brackets presuppositions about
whether a particular experience is hearing or speaking, come to find that the distinction
between inner speaking and inner hearing is approximately as unambiguously clear as
that between speaking into a tape recorder and hearing your voice being played back.
(Hurlburt et al. 2013: 1485; references omitted)

Subjects’ reports should be treated with caution, of course, but there is evidence
that inner speech, in contrast to imagined speech, typically (a) is in one’s own

24 Further, there may be no reported topic at all. If I ask you to imagine someone saying ‘We’re the best kind of Sneetch on the beaches’ and follow that up by asking what you’re thinking about, the answer may well be ‘nothing.’
25 Hurlburt et al. do not explicitly equate inner hearing with imagined speech, but they appear to be near-enough equivalent, as Gregory 2016: 660, fn. 6 notes. (However, they are distinguished in Alderson-Day and Fernyhough 2015: 953–4.)
voice (or, more cautiously, not in the voice of another); (b) lacks auditory
characteristics like pitch and loudness; (c) lacks accent and sex. It can also (d) be
“condensed,” consisting of sentence fragments, and, more commonly, (e) take
“the form of a conversation between different points of view” (Fernyhough 2016:
64), which Fernyhough calls dialogicality.26 Of course at least some of these
features may also characterize imagined speech—certainly (d) and (e). Could
some imagined speech have all of them? Alternatively, could some inner speech
lack all of them? Perhaps, but this is only problematic if in such cases subjects could
still distinguish imagined speech from inner speech. And that is far from clear.
Neuroimaging studies confirm that this is a significant distinction. For
example, Shergill et al. (2001) scanned subjects using fMRI in either an “inner-
speech” condition or three kinds of “imagery conditions.” In the inner-speech
condition, subjects were asked to “silently articulate a sentence of the form
‘I like . . . ’, or ‘I like being . . . ’,” ending in a word that they had just heard through
headphones. In the first of the three imagery conditions the instructions were the
same, “except that subjects had to imagine the sentence being spoken in their
own voice.” The other two imagery conditions changed the imagined voice and
the sentences from the first person to the second and third.
Even in the first imagery condition there was a detectable difference from the
inner-speech condition, for instance greater activation in the lateral temporal
lobe (a region associated with verbal monitoring). As one might have expected,
the difference was more pronounced for the second and third imagery condi-
tions. According to Shergill et al., the contrast between these two and the first
imagery condition is “consistent with the notion that imagining another’s voice
places more demands on covert articulation, engagement of auditory attention
and on verbal monitoring” (2001: 251). Imaging studies should be treated
cautiously in general; in the particular case of inner speech, as Fernyhough
(2016: 165) points out, the tasks do not reproduce the phenomenon as it occurs
in the wild. Still, subjects’ reports and neuroimaging studies fit together quite
nicely, and we may reasonably conjecture that inner speech and imagined speech
can be distinguished in a transparent manner, simply by attending to the
(ostensible) speech.

26 On (a), see Hurlburt et al. 2013: 1482. McCarthy-Jones and Fernyhough (2011) administered a “Varieties of Inner Speech Questionnaire” (VISQ) to university students, and found that about a quarter reported “other people in inner speech.” However, Cho and Wu (2014) and Gregory (2016) plausibly argue that this result doesn’t impugn (a). On (b) and (c), see MacKay 1992: 128–30. McCarthy-Jones and Fernyhough’s VISQ addressed (d) and (e), with about three-quarters of subjects reporting dialogic and a third reporting condensed inner speech (broadly in line with Langdon et al. 2009).
8.4.5 Unsymbolized thinking and imageless thought


In “descriptive experience sampling” (DES), subjects are equipped with a beeper
that sounds at random intervals. They are instructed to “‘freeze’ their ongoing
experience and write a description of it in a notebook” (Hurlburt 1993: 10); later
(usually within a day) they will be extensively interviewed about their “inner
experience” just before the beep.27 Some subjects report what Hurlburt calls
“unsymbolized thinking,”
the experience of an inner process which is clearly a thought and which has a clear
meaning, but which seems to take place without symbols of any kind, that is, without
words, images, bodily sensations, etc. (1993: 5; see also Hurlburt and Akhter 2008)

Unsymbolized thinking seems to take place without words and images. If it
actually does, it is “imageless thought,” in terminology familiar from the eponym-
ous controversy in early twentieth-century psychology. And if subjects can know
their imageless thoughts, then this fact has not yet been explained.28 The obvious
danger is that the explanation will involve a faculty of inner sense, thus destroy-
ing the project of this book in the last few pages.
One explanation might be that subjects are doing some quick self-
interpretation: given my behavior, circumstances, and/or other topics of thought,
it is to be expected that I am thinking about o. (This hypothesis is argued for in
some detail in Carruthers 1996: 239–44; for a response, see Hurlburt and Akhter
2008: 1370–2.) There is another possibility: perhaps inner speech and visual
imagery did provide the basis for a THINK-style inference, but subjects are
reporting the content without reporting the speech or imagery itself.
Why would subjects fail to report inner speech? Perhaps they forgot, even in
the short time after the beep and before their note-taking. (There is plenty of
extra time for forgetting the details that are not explicitly noted, and which the
later DES interview is supposed to recover.) Remembering the message but not
the medium—a failure of “source monitoring”—is a familiar phenomenon. You
may remember that the water shortage will soon be over because you overheard
Kylie saying so, but because you were simultaneously engaged in arguing with
Bruce the memory of the source may soon vanish. The evanescent quality of
dreams is another illustration: on waking, sometimes dream imagery seems
rapidly to drain from memory, leaving only a residue of the dream topic behind.

27 For an extensive discussion of the DES methodology, see Hurlburt and Schwitzgebel 2007.
28 A qualification: “imageless thought” was sometimes taken to include imageless semantic recollection. For example, remembering that diamonds are more costly than gold, with no accompanying relevant imagery, is taken by R. S. Woodworth (a distinguished American psychologist who was taught by William James at Harvard) to be “imageless thought” (Woodworth 1906: 704). Knowledge of that kind of “imageless thought” has been explained.
Alternatively, perhaps the subject’s inner speech in these “unsymbolized
thinking” situations is significantly more degraded and transformed than it
usually is, thus inclining the subject to deny that there was any silent soliloquy.
(This may also affect how well the subjects can recall their inner speech.) There is
a useful comparison here with blindsight: subjects’ reports that they do not see
anything cannot be taken at face value, because a denial of sight may be a
reasonable—albeit mistaken—response to what is in fact “degraded . . . abnormal
vision” (Overgaard 2011: 478; cf. Philips and Block 2016: 167–8). Not all cases of
unsymbolized thinking may yield to the same explanation, of course.
Unsymbolized thinking may eventually prove to be transparency’s downfall.
But not yet.29

8.5 Finis
This book ends, appropriately enough, where modern philosophy began, with
knowledge of one’s thoughts. Perhaps partly because of the dialectical nature of
philosophy, which encourages dueling philosophers to find common ground
in first-person psychological claims of “intuitions” or “appearances,”30 Cartesian
sympathies are still widespread, as witnessed by contemporary epistemology’s
obsession with external-world skepticism. Instead of putting self-knowledge first,
the argument of this book is that it should be put last. Self-knowledge does not
require a special epistemic faculty; rather, it is underwritten by our independent
capacity to acquire knowledge of our (internal and external) environment.
However, it would be premature to announce that the problem of self-
knowledge has been solved. Although the book has ranged widely over the mental landscape,
many parts of it have been left unexplored, or explored only cursorily. Perhaps more
importantly, the theory defended in this book depends on controversial claims,
most notably the idea that knowledge can be obtained by reasoning from
inadequate evidence, or from no evidence at all, and that perception and imagery
constitutively involve belief. Those claims were backed by independent argu-
ment, but are hardly beyond dispute.
The ambition has simply been to establish the transparency account as a
leading hypothesis, deserving of further examination. On the other hand, if this
book has inadvertently demonstrated that the transparency account collapses
under sustained scrutiny, then at least something has been achieved.

29 There are occasional reports of people lacking inner speech (e.g., Morin 2009); if this deficit could be combined with aphantasia (loss of imagery; see Zeman et al. 2015) while sparing the usual capacity to know one’s thoughts, then the transparency account would be falsified.
30 See Williamson 2007: 211–15.

Bibliography

Alderson-Day, B., and C. Fernyhough. 2015. Inner speech: development, cognitive functions, phenomenology, and neurobiology. Psychological Bulletin 141: 931–65.
Allison, H. E. 2004. Kant’s Transcendental Idealism: An Interpretation and Defense.
Revised Edition. New Haven, CT: Yale University Press.
Allport, G. W. 1955. Becoming: Basic Considerations for a Psychology of Personality. New
Haven, CT: Yale University Press.
Alston, W. P. 1971. Varieties of privileged access. American Philosophical Quarterly 8:
223–41. Page reference to the reprinting in Alston 1989.
Alston, W. P. 1989. Epistemic Justification: Essays in the Theory of Knowledge. Ithaca, NY:
Cornell University Press.
Anscombe, G. E. M. 1957. Intention. Ithaca, NY: Cornell University Press.
Ardila, A. 2016. Some unusual neuropsychological syndromes: somatoparaphrenia,
akinetopsia, reduplicative paramnesia, autotopagnosia. Archives of Clinical Neuro-
psychology 31: 456–64.
Armstrong, D. M. 1961. Perception and the Physical World. London: Routledge & Kegan
Paul.
Armstrong, D. M. 1962. Bodily Sensations. London: Routledge & Kegan Paul.
Armstrong, D. M. 1963. Is introspective knowledge incorrigible? Philosophical Review 72:
417–32.
Armstrong, D. M. 1968. A Materialist Theory of the Mind. London: Routledge & Kegan
Paul.
Armstrong, D. M. 1981a. The Nature of Mind and Other Essays. Ithaca, NY: Cornell
University Press.
Armstrong, D. M. 1981b. What is consciousness? The Nature of Mind and Other Essays.
Ithaca, NY: Cornell University Press.
Ashwell, L. 2013. Deep, dark . . . or transparent? Knowing our desires. Philosophical
Studies 165: 245–56.
Aydede, M. 2009. Is feeling pain the perception of something? Journal of Philosophy 106:
531–67.
Ayer, A. J. 1959. Privacy. Proceedings of the British Academy 45: 43–65. Page reference to
the reprint in Ayer 1963.
Ayer, A. J. 1963. The Concept of a Person and Other Essays. London: Macmillan.
Ayer, A. J. 1973. The Central Questions of Philosophy. London: Weidenfeld & Nicholson.
Page references to the 1976 Pelican reprint.
Baddeley, A. 1986. Working Memory. Oxford: Oxford University Press.
Baddeley, A. 1997. Human Memory. Hove, UK: Psychology Press.
Bain, D. 2007. The location of pains. Philosophical Papers 36: 171–205.
Barnett, D. J. 2016. Inferential justification and the transparency of belief. Noûs 50:
184–212.
Bar-On, D. 2000. Speaking my mind. Philosophical Topics 28: 1–34.


Bar-On, D. 2004. Speaking My Mind: Expression and Self-Knowledge. Oxford: Oxford
University Press.
Bar-On, D. 2011. Externalism and skepticism: recognition, expression, and self-
knowledge. Self-Knowledge and the Self, ed. A. Coliva. Oxford: Oxford University Press.
Bayne, T., and E. Pacherie. 2005. In defence of the doxastic conception of delusions. Mind
and Language 20: 163–88.
Bem, D. J. 1972. Self-perception theory. Advances in Experimental Social Psychology 6, ed.
L. Berkowitz. New York: Academic Press.
Bennett, J. 1981. Morality and consequences. The Tanner Lectures on Human Values 2:
45–116.
Bermudez, J. L. 2013. Review of Peter Carruthers, The Opacity of Mind: An Integrative
Theory of Self-Knowledge. Mind 122: 263–6.
Berridge, K., and T. Robinson. 2003. Parsing reward. Trends in Neurosciences 26: 507–13.
Bilgrami, A. 2006. Self-Knowledge and Resentment. Cambridge, MA: Harvard University
Press.
Blackburn, S. 1998. Ruling Passions. Oxford: Oxford University Press.
Block, N. 1983. Mental pictures and cognitive science. Philosophical Review 93: 499–542.
Block, N. 1997a. Biology versus computation in the study of consciousness. Behavioral
and Brain Sciences 20: 159–65.
Block, N. 1997b. On a confusion about a function of consciousness. The Nature of
Consciousness, ed. N. Block, O. Flanagan, and G. Güzeldere. Cambridge, MA: MIT Press.
Block, N. 2003. Mental paint. Reflections and Replies: Essays on the Philosophy of Tyler
Burge, ed. M. Hahn and B. Ramberg. Cambridge, MA: MIT Press.
Block, N. 2013. The grain of vision and the grain of attention. Thought 1: 170–84.
Boghossian, P. 1989. Content and self-knowledge. Philosophical Topics 17: 5–26.
Boghossian, P. 1997. What the externalist can know a priori. Proceedings of the Aristotel-
ian Society 97: 161–75.
Bortolotti, L. 2010. Delusions and Other Irrational Beliefs. Oxford: Oxford University
Press.
Boylan, L. S., R. Staudinger, J. C. M. Brust, and O. Sacks. 2006. Correspondence re: Sudden
deafness from stroke. Neurology 67: 919.
Boyle, M. 2009. Two kinds of self-knowledge. Philosophy and Phenomenological Research
78: 133–63.
Boyle, M. 2011. Transparent self-knowledge. Aristotelian Society Supplementary Volume
85: 233–41.
Boyle, M. 2015. Critical Study: Cassam on self-knowledge for humans. European Journal
of Philosophy 23: 337–48.
Brady, T. F., T. Konkle, J. Gill, A. Oliva, and G. A. Alvarez. 2013. Visual long-term
memory has the same limit on fidelity as visual working memory. Psychological Science
24: 981–90.
Bratman, M. 1984. Two faces of intention. Philosophical Review 93: 375–405.
Bratman, M. 1985. Davidson’s theory of intention. Essays on Davidson, Actions and
Events, ed. B. Vermazen and M. Hintikka. Cambridge, MA: MIT Press. Page reference
to the reprint in Bratman 1987.
Bratman, M. 1987. Intention, Plans, and Practical Reason. Cambridge, MA: Harvard
University Press.
Braun, D. 2015. Desiring, desires, and desire ascriptions. Philosophical Studies 172:
141–62.
Brogaard, B. 2012. What do we say when we say how or what we feel? Philosophers’
Imprint 12: 1–22.
Brogaard, B. (ed.) 2014. Does Perception Have Content? Oxford: Oxford University Press.
Brueckner, A. 1999. Two recent approaches to self-knowledge. Philosophical Perspectives
13: 251–71.
Burge, T. 1979. Individualism and the mental. Midwest Studies in Philosophy 4: 73–122.
Page reference to the reprint in Burge 2007.
Burge, T. 1991. Vision and intentional content. John Searle and His Critics, ed. E. LePore
and R. Van Gulick. Oxford: Blackwell.
Burge, T. 1996. Our entitlement to self-knowledge. Proceedings of the Aristotelian Society
96: 91–116. Page reference to the reprint in Burge 2013.
Burge, T. 2007. Foundations of Mind. Oxford: Oxford University Press.
Burge, T. 2013. Cognition Through Understanding. Oxford: Oxford University Press.
Burnyeat, M. F. 1982. Idealism and Greek philosophy: what Descartes saw and Berkeley
missed. Philosophical Review 91: 3–40.
Byrne, A. 2001. Intentionalism defended. Philosophical Review 110: 199–240.
Byrne, A. 2005. Introspection. Philosophical Topics 33: 79–104.
Byrne, A. 2009. Experience and content. Philosophical Quarterly 59: 429–51.
Byrne, A. 2011. Knowing what I want. Consciousness and the Self: New Essays, ed. J. Liu
and J. Perry. Cambridge: Cambridge University Press.
Byrne, A. 2012. Review of Peter Carruthers, The Opacity of Mind. Notre Dame Philosoph-
ical Reviews 2012.05.11.
Byrne, A. 2016. The epistemic significance of experience. Philosophical Studies 173:
947–67.
Byrne, A., and D. R. Hilbert. 2003. Color realism and color science. Behavioral and Brain
Sciences 26: 3–21.
Carroll, N. 1990. The Philosophy of Horror: Or, Paradoxes of the Heart. London:
Routledge.
Carruthers, P. 1996. Language, Thought and Consciousness: An Essay in Philosophical
Psychology. Cambridge: Cambridge University Press.
Carruthers, P. 2011. The Opacity of Mind: An Integrative Theory of Self-Knowledge.
Oxford: Oxford University Press.
Carruthers, P., and J. B. Ritchie. 2012. The emergence of metacognition: affect and
uncertainty in animals. The Foundations of Metacognition, ed. M. Beran, J. Brandl,
J. Perner, and J. Proust. Oxford: Oxford University Press.
Cassam, Q. 2014. Self-Knowledge for Humans. Oxford: Oxford University Press.
Chalmers, D. J. 1996. The Conscious Mind: In Search of a Fundamental Theory. Oxford:
Oxford University Press.
Chalmers, D. J. 2003. The content and epistemology of phenomenal belief. Consciousness:
New Philosophical Perspectives, ed. Q. Smith and A. Jokic. Oxford: Oxford University
Press.
Child, W. 2007. Davidson on first person authority and knowledge of meaning. Noûs 41:
157–77.
Cho, R., and W. Wu. 2014. Is inner speech the basis of auditory verbal hallucination in
schizophrenia? Frontiers in Psychiatry 5: 75.
Churchland, P. M. 2013. Matter and Consciousness. 3rd ed. Cambridge, MA: MIT Press.
Clark, M. 1963. Knowledge and grounds: a comment on Mr. Gettier’s paper. Analysis 24:
46–8.
Craig, E. 1976. Sensory experience and the foundations of knowledge. Synthese 33: 1–24.
Crimmins, M. 1992. I falsely believe that p. Analysis 52: 191.
Dancy, J. 2000. Practical Reality. Oxford: Oxford University Press.
Das, N., and B. Salow. 2016. Transparency and the KK principle. Noûs doi:10.1111/
nous.12158.
Davidson, D. 1971. Agency. Agent, Action, and Reason, ed. R. Binkley, R. Bronaugh, and
A. Marras. Toronto: University of Toronto Press. Page reference to the reprint in
Davidson 1980.
Davidson, D. 1973. Radical interpretation. Dialectica 27: 313–28. Page reference to the
reprint in Davidson 1984b.
Davidson, D. 1974. Belief and the basis of meaning. Synthese 27: 309–23. Page reference to
the reprint in Davidson 1984b.
Davidson, D. 1975. Thought and talk. Mind and Language, ed. S. Guttenplan. Oxford:
Oxford University Press. Page reference to the reprint in Davidson 1984b.
Davidson, D. 1978. Intending. Philosophy of History and Action, ed. Y. Yovel. Dordrecht:
D. Reidel. Reprinted in Davidson 1980.
Davidson, D. 1980. Essays on Actions and Events. Oxford: Oxford University Press.
Davidson, D. 1984a. First person authority. Dialectica 38: 101–11. Page reference to the
reprint in Davidson 2001.
Davidson, D. 1984b. Inquiries into Truth and Interpretation. Oxford: Oxford University
Press.
Davidson, D. 1987. Knowing one’s own mind. Proceedings and Addresses of the American
Philosophical Association 60: 441–58. Page reference to the reprint in Davidson 2001.
Davidson, D. 1991a. Epistemology externalized. Dialectica 45: 191–202. Page reference to
the reprint in Davidson 2001.
Davidson, D. 1991b. Three varieties of knowledge. A. J. Ayer Memorial Essays, ed.
A. P. Griffiths. Cambridge: Cambridge University Press. Page reference to the reprint
in Davidson 2001.
Davidson, D. 1993. Reply to Bernhard Thöle. Reflecting Davidson: Donald Davidson
Responding to an International Forum of Philosophers, ed. R. Stoecker. Berlin: Walter
de Gruyter.
Davidson, D. 2001. Subjective, Intersubjective, Objective. Oxford: Oxford University Press.
Davies, M. 2000a. Externalism and armchair knowledge. New Essays on the A Priori, ed.
P. Boghossian and C. Peacocke. Oxford: Oxford University Press.
Davies, M. 2000b. Externalism, architecturalism, and epistemic warrant. Knowing Our Own
Minds, ed. C. Wright, B. Smith, and C. Macdonald. Oxford: Oxford University Press.
Debruyne, H., M. Portzky, F. Van den Eynde, and K. Audenaert. 2009. Cotard’s syn-
drome: a review. Current Psychiatry Reports 11: 197–202.
Deckert, G. H. 1964. Pursuit eye movements in the absence of a moving visual stimulus.
Science 143: 1192–3.
Dennett, D. 1991. Consciousness Explained. New York: Little, Brown & Co.
Descartes, R. 1642/1984a. Author’s replies to the fifth set of objections. The Philosophical
Writings of Descartes, ed. J. Cottingham, R. Stoorthoff, and D. Murdoch. Cambridge:
Cambridge University Press.
Descartes, R. 1642/1984b. Meditations on first philosophy. The Philosophical Writings of
Descartes, ed. J. Cottingham, R. Stoorthoff, and D. Murdoch. Cambridge: Cambridge
University Press.
Dienes, Z., and J. Perner. 2007. Executive control without conscious awareness: the cold
control theory of hypnosis. Hypnosis and Conscious States: The Cognitive Neuroscience
Perspective, ed. G. A. Jamieson. Oxford: Oxford University Press.
Dogramaci, S. 2016. Knowing our degrees of belief. Episteme 13: 269–87.
Doi, T. 1981. The Anatomy of Dependence. New York: Kodansha America.
Dretske, F. 1994. Introspection. Proceedings of the Aristotelian Society 94: 263–78.
Dretske, F. 1995. Naturalizing the Mind. Cambridge, MA: MIT Press.
Dretske, F. 2003a. Externalism and self-knowledge. New Essays on Semantic Externalism
and Self-Knowledge, ed. S. Nuccetelli. Cambridge, MA: MIT Press.
Dretske, F. 2003b. How do you know you are not a zombie? Privileged Access: Philosoph-
ical Accounts of Self-Knowledge, ed. B. Gertler. Aldershot, UK: Ashgate.
Dunlap, K. 1912. The case against introspection. Psychological Review 19: 404–13.
Dunlosky, J., and J. Metcalfe. 2009. Metacognition. Thousand Oaks, CA: Sage Publications.
Dworkin, R. 2011. Justice for Hedgehogs. Cambridge, MA: Harvard University Press.
Edgley, R. 1969. Reason in Theory and Practice. London: Hutchinson.
Ekman, P. 1999. Basic emotions. The Handbook of Cognition and Emotion, ed.
T. Dalgleish and T. Power. Brighton, UK: John Wiley & Sons.
Ekman, P., and W. V. Friesen. 1971. Constants across cultures in the face and emotion.
Journal of Personality and Social Psychology 17: 124–9.
Evans, G. 1982. The Varieties of Reference. Oxford: Oxford University Press.
Everson, S. 1997. Aristotle on Perception. Oxford: Oxford University Press.
Fallon, A. E., P. Rozin, and P. Pliner. 1984. The child’s conception of food: the develop-
ment of food rejections with special reference to disgust and contamination sensitivity.
Child Development 55: 566–75.
Falvey, K. 2000. The basis of first-person authority. Philosophical Topics 28: 69–99.
Fara, D. G. 2013. Specifying desires. Noûs 47: 250–72.
Farkas, K. 2008. The Subject’s Point of View. Oxford: Oxford University Press.
Fernández, J. 2007. Desire and self-knowledge. Australasian Journal of Philosophy 85:
517–36.
Fernández, J. 2013. Transparent Minds: A Study of Self-Knowledge. Oxford: Oxford
University Press.
Fernyhough, C. 2016. The Voices Within. London: Profile Books.
Fine, G. 2000. Descartes and ancient skepticism: reheated cabbage? Philosophical Review
109: 195–234.
Finkelstein, D. 2003. Expression and the Inner. Cambridge, MA: Harvard University
Press.
Fodor, J. A. 1975. The Language of Thought. Cambridge, MA: Harvard University Press.
Freud, S. 1938. The Basic Writings of Sigmund Freud. New York: Random House.
Gallois, A. 1996. The World Without, the Mind Within: An Essay on First-Person
Authority. Cambridge: Cambridge University Press.
Ganis, G., W. L. Thompson, and S. M. Kosslyn. 2004. Brain areas underlying visual
mental imagery and visual perception: an fMRI study. Cognitive Brain Research 20:
226–41.
Geach, P. 1957. Mental Acts. London: Routledge & Kegan Paul.
Gertler, B. 2010. Self-Knowledge. New York: Routledge.
Gertler, B. 2011. Self-knowledge and the transparency of belief. Self-Knowledge, ed.
A. Hatzimoysis. Oxford: Oxford University Press.
Gertler, B. 2012. Renewed acquaintance. Introspection and Consciousness, ed. D. Smithies
and D. Stoljar. Oxford: Oxford University Press.
Gettier, E. 1963. Is justified true belief knowledge? Analysis 23: 121–3.
Glüer, K. 2009. In defence of a doxastic account of experience. Mind and Language 24:
297–327.
Goldenberg, G., W. Müllbacher, and A. Nowak. 1995. Imagery without perception: a case
study of anosognosia for cortical blindness. Neuropsychologia 33: 1373–82.
Goldman, A. 1993. The psychology of folk psychology. Behavioral and Brain Sciences 16:
15–28.
Goldman, A. 2006. Simulating Minds: The Philosophy, Psychology, and Neuroscience of
Mindreading. Oxford: Oxford University Press.
Gordon, R. M. 1996. “Radical” simulationism. Theories of Theories of Mind, ed.
P. Carruthers and P. Smith. Cambridge: Cambridge University Press.
Gregory, D. 2016. Inner speech, imagined speech, and auditory verbal hallucinations.
Review of Philosophy and Psychology 7: 653–73.
Grice, H. P. 1971. Intention and uncertainty. Proceedings of the British Academy 57: 263–79.
Haber, R. N. 1979. Twenty years of haunting eidetic imagery: where’s the ghost? Behav-
ioral and Brain Sciences 2: 583–629.
Haidt, J., S. H. Koller, and M. G. Dias. 1993. Affect, culture, and morality, or is it wrong to
eat your dog? Journal of Personality and Social Psychology 65: 613–28.
Hamlyn, D. W. 1968. Aristotle’s De Anima Books II and III. Oxford: Oxford University
Press.
Hamlyn, D. W. 1978. The phenomena of love and hate. Philosophy 53: 5–20.
Hansen, T., L. Pracejus, and K. R. Gegenfurtner. 2009. Color perception in the intermedi-
ate periphery of the visual field. Journal of Vision 9: 1–12.
Harman, G. 1986. Change in View. Cambridge, MA: MIT Press.
Hartmann, J., W. Wolz, D. Roeltgen, and F. Loverso. 1991. Denial of visual perception.
Brain and Cognition 16: 29–40.
Hellie, B. 2007a. No transparent self-knowledge of false beliefs. (Unpublished.)
Hellie, B. 2007b. That which makes the sensation of blue a mental fact: Moore on
phenomenal relationalism. European Journal of Philosophy 15: 334–66.
Hilbert, D. 1994. Is seeing believing? PSA: Proceedings of the Biennial Meeting of the
Philosophy of Science Association 1: 446–53.
Hill, C. 2005. Ow! The paradox of pain. Pain, ed. M. Aydede. Cambridge, MA: MIT Press.
Hodges, J. R., and K. S. Graham. 2002. Episodic memory: insights from semantic dementia.
Episodic Memory: New Directions in Research, ed. A. Baddeley, M. Conway, and
J. Aggelton. Oxford: Oxford University Press.
Holland, R. F. 1954. The empiricist theory of memory. Mind 63: 464–86.
Holton, R. 2008. Partial belief, partial intention. Mind 117: 27–58.
Holton, R. 2014. Intention as a model for belief. Rational and Social Agency: Essays on the
Philosophy of Michael Bratman, ed. M. Vargas and G. Yaffe. Oxford: Oxford University
Press.
Humberstone, L. 2009. Smiley’s distinction between rules of inference and rules of proof.
The Force of Argument: Essays in Honor of Timothy Smiley, ed. J. Lear and A. Oliver.
London: Routledge.
Hume, D. 1740/1978. A Treatise of Human Nature, ed. L. A. Selby-Bigge. Oxford: Oxford
University Press.
Hurlburt, R. T. 1993. Sampling Inner Experience in Disturbed Affect. New York: Plenum
Press.
Hurlburt, R., and E. Schwitzgebel. 2007. Describing Inner Experience? Proponent Meets
Skeptic. Cambridge, MA: MIT Press.
Hurlburt, R. T., and S. A. Akhter. 2008. Unsymbolized thinking. Consciousness and
Cognition 17: 1364–74.
Hurlburt, R. T., C. L. Heavey, and J. M. Kelsey. 2013. Toward a phenomenology of inner
speaking. Consciousness and Cognition 22: 1477–94.
Hyman, J. 2003. Pains and places. Philosophy 78: 5–24.
Intraub, H., and J. E. Hoffman. 1992. Reading and visual memory: remembering scenes
that were never seen. American Journal of Psychology 105: 101–14.
Jack, R. E., C. Blais, C. Scheepers, P. G. Schyns, and R. Caldara. 2009. Cultural confusions
show that facial expressions are not universal. Current Biology 19: 1543–8.
Jackson, F. 1977. Perception: A Representative Theory. Cambridge: Cambridge University
Press.
Jellinek, J. S. 2004. Proust remembered: has Proust’s account of odor-cued autobiograph-
ical memory recall really been investigated? Chemical Senses 29: 455–8.
Johnston, M. 2004. The obscure object of hallucination. Philosophical Studies 120: 113–87.
Jones, S. R., and C. Fernyhough. 2007. Thought as action: inner speech, self-monitoring,
and auditory verbal hallucinations. Consciousness and Cognition 16: 391–9.
Kant, I. 1787/1933. Critique of Pure Reason. Translated by N. K. Smith. London:
Macmillan.
Kaplan, D. 1989. Demonstratives. Themes from Kaplan, ed. J. Almog, J. Perry, and
H. Wettstein. Oxford: Oxford University Press.
Katsafanas, P. 2015. Kant and Nietzsche on self-knowledge. Nietzsche and the Problem of
Subjectivity, ed. J. Constâncio, M. J. M. Branco, and R. Bartholomew. Berlin: de
Gruyter.
Kind, A. 2003. Shoemaker, self-blindness and Moore’s paradox. Philosophical Quarterly
53: 39–48.
Kosman, L. A. 1975. Perceiving that we perceive: On the Soul III, 2. Philosophical Review
84: 499–519.
Kosslyn, S. M. 1994. Image and Brain: The Resolution of the Imagery Debate. Cambridge,
MA: MIT Press.
Kosslyn, S. M., W. L. Thompson, and G. Ganis. 2006. The Case for Mental Imagery.
Oxford: Oxford University Press.
Kripke, S. A. 1982. Wittgenstein on Rules and Private Language. Cambridge, MA: Harvard
University Press.
Kripke, S. A. 2011. Unrestricted exportation and some morals for the philosophy of
language. Philosophical Troubles: Collected Papers Vol. I. Oxford: Oxford University
Press.
Kunda, Z. 2002. Social Cognition: Making Sense of People. Cambridge, MA: MIT Press.
Laeng, B., I. M. Bloem, S. D’Ascenzo, and L. Tommasi. 2014. Scrutinizing visual images:
the role of gaze in mental imagery and memory. Cognition 131: 263–83.
Langdon, R., S. Jones, E. Connaughton, and C. Fernyhough. 2009. The phenomenology of
inner speech: comparison of schizophrenia patients with auditory verbal hallucinations
and healthy controls. Psychological Medicine 39: 655–63.
Langland-Hassan, P. 2008. Fractured phenomenologies: thought insertion, inner speech,
and the puzzle of extraneity. Mind and Language 23: 369–401.
Langton, R. 2001. Kantian Humility: Our Ignorance of Things in Themselves. Oxford:
Oxford University Press.
Lawlor, K. 2009. Knowing what one wants. Philosophy and Phenomenological Research 79:
47–75.
Lepore, E., and K. Ludwig. 2005. Donald Davidson: Meaning, Truth, Language, and
Reality. Oxford: Oxford University Press.
Locke, J. 1689/1975. An Essay Concerning Human Understanding. Oxford: Oxford Uni-
versity Press.
Loftus, E. F. 1996. Eyewitness Testimony. Cambridge, MA: Harvard University Press.
Lycan, W. G. 1987. Consciousness. Cambridge, MA: MIT Press.
Lycan, W. G. 1996. Consciousness and Experience. Cambridge, MA: MIT Press.
Lycan, W. G. 2003. Dretske’s ways of introspecting. Privileged Access: Philosophical
Accounts of Self-Knowledge, ed. B. Gertler. Aldershot, UK: Ashgate.
Lyons, J. 2009. Perception and Basic Beliefs: Zombies, Modules, and the Problem of the
External World. Oxford: Oxford University Press.
Lyons, W. E. 1986. The Disappearance of Introspection. Cambridge, MA: MIT Press.
MacKay, D. G. 1992. Constraints on theories of inner speech. Auditory Imagery, ed.
D. Reisberg. Hillsdale, NJ: Lawrence Erlbaum.
Mackie, J. L. 1977. Ethics: Inventing Right and Wrong. London: Penguin.
Manley, D. 2007. Safety, content, apriority, self-knowledge. Journal of Philosophy 104:
403–23.
Martin, M. G. F. 2006. On being alienated. Perceptual Experience, ed. T. Gendler and
J. Hawthorne. Oxford: Oxford University Press.
Matthen, M. 2016. Is perceptual experience normally multimodal? Current Controversies
in Philosophy of Perception, ed. B. Nanay. New York: Routledge.
McCarthy-Jones, S., and C. Fernyhough. 2011. The varieties of inner speech: links
between quality of inner speech and psychopathological variables in a sample of
young adults. Consciousness and Cognition 20: 1586–93.
McDowell, J. 1982. Criteria, defeasibility, and knowledge. Proceedings of the British Academy 68: 455–79.
McDowell, J. 1988. Projection and truth in ethics. The Lindley Lecture, University of
Kansas. Kansas: University of Kansas.
McDowell, J. 2000. Response to Crispin Wright. Knowing Our Own Minds, ed. C. Wright,
B. C. Smith, and C. Macdonald. Oxford: Oxford University Press.
McDowell, J. 2011. David Finkelstein on the inner. Teorema 30: 15–24.
McFetridge, I. 1990. Explicating “x knows a priori that p.” Logical Necessity and Other
Essays. London: Aristotelian Society.
McGinn, C. 1975/6. A posteriori and a priori knowledge. Proceedings of the Aristotelian
Society 76: 195–208.
McGinn, C. 2004. Mindsight: Image, Dream, Meaning. Cambridge, MA: Harvard
University Press.
McHugh, C. 2010. Self-knowledge and the KK principle. Synthese 173: 231–57.
McKinsey, M. 1991. Anti-individualism and privileged access. Analysis 51: 9–16.
Mercier, H., and D. Sperber. 2011. Why do humans reason? Arguments for an argumen-
tative theory. Behavioral and Brain Sciences 34: 57–111.
Miller, W. I. 1998. The Anatomy of Disgust. Cambridge, MA: Harvard University Press.
Moltmann, F. 2013. Abstract Objects and the Semantics of Natural Language. Oxford:
Oxford University Press.
Moore, G. E. 1903. The refutation of idealism. Mind 12: 433–53.
Moran, R. 2001. Authority and Estrangement. Princeton, NJ: Princeton University Press.
Moran, R. 2003. Responses to O’Brien and Shoemaker. European Journal of Philosophy
11: 402–19.
Moran, R. 2004. Replies to Heal, Reginster, Wilson, and Lear. Philosophy and Phenom-
enological Research 69: 455–73.
Moran, R. 2011. Self-knowledge, “transparency,” and the forms of activity. Introspection
and Consciousness, ed. D. Smithies and D. Stoljar. New York: Oxford University Press.
Morin, A. 2009. Self-awareness deficits following loss of inner speech: Dr. Jill Bolte
Taylor’s case study. Consciousness and Cognition 18: 524–9.
Nagel, T. 1970. The Possibility of Altruism. Oxford: Oxford University Press.
Newell, B. R., and D. R. Shanks. 2014. Unconscious influences on decision making: a
critical review. Behavioral and Brain Sciences 37: 1–19.
Nichols, S., and S. Stich. 2003. Mindreading: An Integrated Account of Pretence, Self-
Awareness, and Understanding Other Minds. Oxford: Oxford University Press.
Nisbett, R. E., and T. D. Wilson. 1977a. The halo effect: evidence for unconscious
alteration of judgments. Journal of Personality and Social Psychology 35: 250–6.
Nisbett, R. E., and T. D. Wilson. 1977b. Telling more than we can know: verbal reports on
mental processes. Psychological Review 84: 231–59.
Noordhof, P. 2001. In pain. Analysis 61: 95–7.
Noordhof, P. 2005. In a state of pain. Pain: New Essays on Its Nature and the Methodology
of Its Study, ed. M. Aydede. Cambridge, MA: MIT Press.
O’Callaghan, C. 2007. Sounds: A Philosophical Theory. Oxford: Oxford University Press.
O’Callaghan, C. 2016. Enhancement through coordination. Current Controversies in
Philosophy of Perception, ed. B. Nanay. New York: Routledge.
Okada, H., and K. Matsuoka. 1992. Effects of auditory imagery on the detection of a pure
tone in white noise: experimental evidence of the auditory Perky effect. Perceptual and
Motor Skills 74: 443–8.
Overgaard, M. 2011. Visual experience and blindsight: a methodological review. Experi-
mental Brain Research 209: 473–9.
Overgaard, M., and J. Mogensen. 2015. Reconciling current approaches to blindsight.
Consciousness and Cognition 32: 33–40.
Pasnau, R. 2002. Thomas Aquinas on Human Nature: A Philosophical Study of Summa
Theologiae 1a 75–89. Cambridge: Cambridge University Press.
Paul, S. K. 2009. How we know what we’re doing. Philosophers’ Imprint 9: 1–24.
Paul, S. K. 2012. How we know what we intend. Philosophical Studies 161: 327–46.
Paul, S. K. 2015. The transparency of intention. Philosophical Studies 172: 1529–48.
Pautz, A. 2010. Why explain visual experience in terms of content? Perceiving the World,
ed. B. Nanay. Oxford: Oxford University Press.
Peacocke, C. 1979. Holistic Explanation. Oxford: Clarendon Press.
Peacocke, C. 1983. Sense and Content. Oxford: Oxford University Press.
Peacocke, C. 1992. A Study of Concepts. Cambridge, MA: MIT Press.
Peacocke, C. 1998. Conscious attitudes, attention, and self-knowledge. Knowing
Our Own Minds, ed. C. Wright, B. C. Smith, and C. Macdonald. Oxford: Oxford
University Press.
Peels, R. 2016. The empirical case against introspection. Philosophical Studies 173: 2461–85.
Perky, C. W. 1910. An experimental study of imagination. American Journal of Psychology
21: 422–52.
Perl, E. R. 1996. Pain and the discovery of nociceptors. Neurobiology of Nociceptors, ed.
C. Belmonte and F. Cervero. Oxford: Oxford University Press.
Philips, I., and N. Block. 2016. Debate on unconscious perception. Current Controversies
in Philosophy of Perception, ed. B. Nanay. London: Routledge.
Pitcher, G. 1970. Pain perception. Philosophical Review 79: 368–93.
Pitt, D. 2004. The phenomenology of cognition, or, what is it like to think that p?
Philosophy and Phenomenological Research 1: 1–36.
Price, H. H. 1932. Perception. London: Methuen.
Putnam, H. 1975. The meaning of “meaning.” Minnesota Studies in the Philosophy of
Science 7: 131–93.
Recanati, F. 2007. Perspectival Thought. Oxford: Oxford University Press.
Reid, T. 1785/1941. Essays on the Intellectual Powers of Man, ed. A. D. Woozley. London:
Macmillan.
Roche, M. 2013. A difficulty for testing the inner sense theory of introspection. Philosophy
of Science 80: 1019–30.
Rosenthal, D. 2005. Consciousness and Mind. Oxford: Oxford University Press.
Rozin, P., and A. E. Fallon. 1987. A perspective on disgust. Psychological Review 94: 23–41.
Rozin, P., J. Haidt, and C. R. McCauley. 2000. Disgust. Handbook of Emotions, ed.
M. Lewis and J. Haviland-Jones. New York: Guilford Press.
Rozin, P., L. Millman, and C. Nemeroff. 1986. Operation of the laws of sympathetic magic
in disgust and other domains. Journal of Personality and Social Psychology 50: 703.
Russell, B. 1912/98. The Problems of Philosophy. Oxford: Oxford University Press.
Russell, B. 1921/95. The Analysis of Mind. London: Routledge.


Ryle, G. 1949. The Concept of Mind. London: Hutchinson.
Samoilova, K. 2016. Transparency and introspective unification. Synthese 193: 3363–81.
Sartre, J. P. 1966. Being and Nothingness. Translated by H. Barnes. New York: Simon and
Schuster.
Scanlon, T. 1998. What We Owe to Each Other. Cambridge, MA: Harvard University
Press.
Scarry, E. 1985. The Body in Pain. Oxford: Oxford University Press.
Schellenberg, S. 2010. The particularity and phenomenology of perceptual experience.
Philosophical Studies 149: 19–48.
Schiffer, S. 1981. Truth and the theory of content. Meaning and Understanding, ed.
H. Parret and J. Bouveresse. Berlin: Walter de Gruyter.
Schueler, G. F. 1995. Desire: Its Role in Practical Reason and the Explanation of Action.
Cambridge, MA: MIT Press.
Schwitzgebel, E. 2004. Introspective training apprehensively defended: reflections on
Titchener’s lab manual. Journal of Consciousness Studies 11: 58–76.
Schwitzgebel, E. 2008. The unreliability of naive introspection. Philosophical Review 117:
245–73.
Schwitzgebel, E. 2011. Perplexities of Consciousness. Cambridge, MA: MIT Press.
Schwitzgebel, E. 2012. Introspection, what? Introspection and Consciousness, ed.
D. Smithies and D. Stoljar. Oxford: Oxford University Press.
Searle, J. R. 1983. Intentionality. Cambridge: Cambridge University Press.
Searle, J. 2015. Seeing Things as They Are: A Theory of Perception. Oxford: Oxford
University Press.
Segal, S. J. 1972. Assimilation of a stimulus in the construction of an image: the Perky
effect revisited. The Function and Nature of Imagery, ed. P. W. Sheehan. New York:
Academic Press.
Sellars, W. 1969. Language as thought and as communication. Philosophy and Phenom-
enological Research 29: 506–27.
Setiya, K. 2007. Reasons without Rationalism. Princeton, NJ: Princeton University Press.
Setiya, K. 2011. Knowledge of intention. Essays on Anscombe’s Intention, ed. A. Ford,
J. Hornsby, and F. Stoutland. Cambridge, MA: Harvard University Press.
Setiya, K. 2012. Transparency and inference. Proceedings of the Aristotelian Society 112:
263–8.
Shah, N., and J. Velleman. 2005. Doxastic deliberation. Philosophical Review 114:
497–534.
Shergill, S. S., E. Bullmore, M. Brammer, S. Williams, R. Murray, and P. McGuire. 2001.
A functional study of auditory verbal imagery. Psychological Medicine 31: 241–53.
Sherrington, C. S. 1906. The Integrative Action of the Nervous System. New York: Charles
Scribner’s Sons.
Shoemaker, S. 1963. Self-Knowledge and Self-Identity. Ithaca, NY: Cornell University
Press.
Shoemaker, S. 1968. Self-reference and self-awareness. Journal of Philosophy 65: 555–67.
Shoemaker, S. 1988. On knowing one’s own mind. Philosophical Perspectives 2: 183–209.
Page reference to the reprint in Shoemaker 1996.
Shoemaker, S. 1990. Qualities and qualia: what’s in the mind? Philosophy and Phenom-
enological Research 50: 507–24. Page reference to the reprint in Shoemaker 1996.
Shoemaker, S. 1994. Self-knowledge and “inner sense.” Philosophy and Phenomenological
Research 54: 249–314. Page reference to the reprint in Shoemaker 1996.
Shoemaker, S. 1996. The First-Person Perspective and Other Essays. Cambridge: Cam-
bridge University Press.
Shoemaker, S. 2003. Moran on self-knowledge. European Journal of Philosophy 11: 391–401.
Shoemaker, S. 2011. Self-intimation and second-order belief. Introspection and Conscious-
ness, ed. D. Smithies and D. Stoljar. New York: Oxford University Press.
Siegel, S. 2010. The Contents of Visual Experience. Oxford: Oxford University Press.
Siegel, S., and A. Byrne. 2016. Rich or thin? Current Controversies in Philosophy of
Perception, ed. B. Nanay. New York: Routledge.
Simons, M. 2007. Observations on embedding verbs, evidentiality, and presupposition.
Lingua 117: 1034–56.
Smart, J. J. C. 1959. Sensations and brain processes. Philosophical Review 68: 141–56.
Smith, A. D. 2001. Perception and belief. Philosophy and Phenomenological Research 62:
283–309.
Smith, M. 1994. The Moral Problem. Oxford: Blackwell.
Smithies, D., and D. Stoljar. 2011. Introspection and consciousness: an overview. Intro-
spection and Consciousness, ed. D. Smithies and D. Stoljar. New York: Oxford Univer-
sity Press.
Soames, S. 2003. Philosophical Analysis in the Twentieth Century: Volume 1, The Dawn of
Analysis. Princeton, NJ: Princeton University Press.
Sosa, E. 1999. How to defeat opposition to Moore. Philosophical Perspectives 13: 141–53.
Sosa, E. 2002. Privileged access. Consciousness: New Philosophical Perspectives, ed.
Q. Smith and A. Jokic. Oxford: Oxford University Press.
Squire, L. R. 1992. Declarative and nondeclarative memory: multiple brain systems
supporting learning and memory. Journal of Cognitive Neuroscience 4: 232–43.
Stanley, J., and T. Williamson. 2001. Knowing how. Journal of Philosophy 98: 411–44.
Stoljar, D. 2012. Introspective knowledge of negative facts. Philosophical Perspectives 26:
389–410.
Stone, T., and A. Young. 1997. Delusions and brain injury: the philosophy and psychology
of belief. Mind and Language 12: 327–64.
Stroud, B. 1984. The Significance of Philosophical Scepticism. Oxford: Oxford University
Press.
Sutin, A. R., and R. W. Robins. 2008. When the “I” looks at the “me”: autobiographical
memory, visual perspective, and the self. Consciousness and Cognition 17: 1386–97.
Ten Elshof, G. 2005. Introspection Vindicated. Aldershot, UK: Ashgate.
Thöle, B. 1993. The explanation of first person authority. Reflecting Davidson: Donald
Davidson Responding to an International Forum of Philosophers, ed. R. Stoecker. Berlin:
Walter de Gruyter.
Thomas, N. J. T. 2016. Mental imagery. Stanford Encyclopedia of Philosophy, ed.
E. N. Zalta, https://plato.stanford.edu/archives/spr2017/entries/mental-imagery.
Thomasson, A. 2003. Introspection and phenomenological method. Phenomenology and
the Cognitive Sciences 2: 239–54.
Thomasson, A. L. 2005. First-person knowledge in phenomenology. Phenomenology and
Philosophy of Mind, ed. D. W. Smith and A. L. Thomasson. Oxford: Oxford University
Press.
Thomson, J. J. 2008. Normativity. Chicago, IL: Open Court.
Titchener, E. B. 1899. A Primer of Psychology. London: Macmillan.
Tulving, E. 1972. Episodic and semantic memory. Organization of Memory, ed. E. Tulving
and W. Donaldson. New York: Academic Press.
Tulving, E. 1983. Elements of Episodic Memory. Oxford: Oxford University Press.
Tulving, E. 2001. Episodic memory and common sense: how far apart? Philosophical
Transactions: Biological Sciences 356: 1505–15.
Tulving, E. 2002. Episodic memory: from mind to brain. Annual Review of Psychology 53:
1–25.
Tye, M. 1991. The Imagery Debate. Cambridge, MA: MIT Press.
Tye, M. 2000. Consciousness, Color, and Content. Cambridge, MA: MIT Press.
Tye, M. 2005a. Another look at representationalism about pain. Pain, ed. M. Aydede.
Cambridge, MA: MIT Press.
Tye, M. 2005b. In defense of representationalism: reply to commentaries. Pain, ed.
M. Aydede. Cambridge, MA: MIT Press.
Tye, M. 2014. What is the content of a hallucinatory experience? Does Perception have
Content?, ed. B. Brogaard. Oxford: Oxford University Press.
Valaris, M. 2011. Transparency as inference: reply to Alex Byrne. Proceedings of the
Aristotelian Society 111: 319–24.
Van Cleve, J. 2012. Defining and defending nonconceptual contents and states. Philo-
sophical Perspectives 26: 411–30.
Vazire, S., and T. D. Wilson (eds.) 2012. Handbook of Self-Knowledge. New York: Guilford
Press.
Velleman, D. 1989. Practical Reflection. Princeton, NJ: Princeton University Press.
Velleman, D. 2000. The Possibility of Practical Reason. Oxford: Oxford University
Press.
Warfield, T. A. 2005. Knowledge from falsehood. Philosophical Perspectives 19: 405–16.
Williamson, T. 2000. Knowledge and Its Limits. Oxford: Oxford University Press.
Williamson, T. 2004. Armchair philosophy, metaphysical modality and counterfactual
thinking. Proceedings of the Aristotelian Society 105: 1–23.
Williamson, T. 2007. The Philosophy of Philosophy. Oxford: Blackwell.
Williamson, T. 2009a. Probability and danger. The Amherst Lecture in Philosophy 4: 1–35.
Williamson, T. 2009b. Replies to critics. Williamson on Knowledge, ed. P. Greenough and
D. Pritchard. Oxford: Oxford University Press.
Wilson, M. D. 1982. Descartes. London: Routledge.
Wilson, T. D. 2002. Strangers to Ourselves: Discovering the Adaptive Unconscious. Cambridge,
MA: Harvard University Press.
Wittgenstein, L. 1958. Philosophical Investigations. 2nd ed. Oxford: Basil Blackwell.
Wittgenstein, L. 1969. The Blue and Brown Books. Oxford: Basil Blackwell.
Woodworth, R. S. 1906. Imageless thought. Journal of Philosophy, Psychology and Scientific
Methods 3: 701–8.
Wright, C. 1992. Truth and Objectivity. Cambridge, MA: Harvard University Press.
Wright, C. 2000. Self-knowledge: the Wittgensteinian legacy. Knowing Our Own Minds,
ed. C. Wright, B. C. Smith, and C. Macdonald. Oxford: Oxford University Press.
Wright, C. 2001a. The problem of self-knowledge (I). Rails to Infinity: Essays on Themes
from Wittgenstein’s Philosophical Investigations. Cambridge, MA: Harvard University
Press.
Wright, C. 2001b. The problem of self-knowledge (II). Rails to Infinity: Essays on Themes
from Wittgenstein’s Philosophical Investigations. Cambridge, MA: Harvard University
Press.
Zatorre, R. J., and A. R. Halpern. 2005. Mental concerts: musical imagery and auditory
cortex. Neuron 47: 9–12.
Zeman, A., M. Dewar, and S. Della Sala. 2015. Lives without imagery: congenital aphantasia.
Cortex 73: 378–80.
Index

acquaintance 16 n. 25 Blackburn, S. 64
Alderson-Day, B. 205 blindness 34, 44, 49, 157–8, 187
alienated belief 39–40, 41 blindsight 38, 133 nn. 5, 208
Allison, H. 21 n. 36 Block, N. 129 n. 2, 133 n. 4–5, 154 n. 33,
Allport, G. 175 193 n. 15, 208
Alston, W. 5 Boghossian, P. 8 n. 13, 9, 11, 26 n. 3, 29–32
amodal problem 130–1, 139–40, 149–50 BonJour, L. J. 32, 33 n. 8
animals, non-human 54 n. 11, 87, 102, 145, 173, Bortolotti, L. 146 n. 25
193 Boylan, L. 34 n. 12
Anscombe, G. E. M. 58, 156, 168, 170–1 Boyle, M. 80, 122–4, 157, 167 n. 18
Anton’s syndrome 34 n. 12, 187 Brady, T. F. 141 n. 16
reverse Anton’s syndrome 133 n. 4 Bratman, M. 168 n. 20, 169 n. 21, 170
aphantasia 208 n. 29 Braun, D. 167 n. 16
a priori 9, 11, 62, 65 n. 21 broad perceptual model 27–8, 29, 32
Aquinas, T. 17 Brogaard, B. 136 n. 10, 148 n. 27
Ardila, A. 34 n. 12 Brueckner, A. 79 n. 5
Aristotle 17, 113–14 Burge, T. 26 n. 3, 35–6, 100 n. 1, 136 n. 9
Armstrong, D. M. 7 n. 11 Burnyeat, M. F. 20 n. 35
and inner sense 9, 18, 25, 26 n. 3, 27, 29, Byrne, A. 12 n. 18, 101 n. 2, 122–3, 136 n. 10,
32–3, 40, 42 nn. 20–1, 123 137 n. 11, 139 n. 13, 144 n. 24, 166
and pain perception 149 n. 28, 151, 153–4
and perception 136, 139, 144 Carroll, N. 174 n. 30
Ashwell, L. 166, 167 n. 18 Carruthers, P. 3 n. 4, 5 n. 9, 12 n. 18, 50 n. 1,
assertion, knowledge norm of 38, 103 119 n. 21, 207
auditory world 140, 185 Cassam, Q. 1 n. 2, 31–3, 39 n. 19, 50 n. 1,
Austen, J. 173 167 n.18
avowals 33, 62–73 Chalmers, D. 16 n. 24, 83 n. 12
attitudinal 43 Child, W. 54
phenomenal 33–8, 42, 63, 87–8 Cho, R. 206 n. 26
Aydede, M. 149 n. 28 Churchill, W. 173, 205
Ayer, A. J. 8, 18–19, 20–2 Churchland, P. 25–6, 40, 42 nn. 20–1, 48
clairvoyance 31–3
Baddeley, A. 185, 201 n. 22 Clark, M. 77 n. 2
Bain, D. 149 n. 28 conceptual content 144 n. 24
Bar-On, D. 16 n. 24, 50–1, 62–73, 87–9, 124–6, confabulation 13, 47, 158 n. 4
151 n. 30, 157 n. 2 CONFIDENCE 120
Barnett, D. 121 n. 22 content view 136, 150 n. 29
baselessness 42, 124–5 counterfactuals 143, 163–4
Bayne, T. 146 n. 25 Craig, E. 144, 146
BEL 102, 103, 129, 161 credence 119
BEL-3 105, 108 Crimmins, M. 39 n. 18
belief critical reasoning 100 n. 1
box 115–16
dependence 143–6, 155, 193 Dancy, J. 160 n. 10
Bem, D. 11–12 Das, N. 116 n. 18
Bennett, J. 170 Davidson, D. 8, 16, 22, 26 n. 3, 51–7, 123,
Bermúdez, J. L. 12 n. 18 160 n. 10, 168 n. 20, 170
Berridge, K. 167 n. 17 Davies, M. 9 n. 15, 11, 198
Bilgrami, A. 3 n. 4, 16 n. 24 Debruyne, H. 34 n. 12
Deckert, G. H. 186 first-person authority 40–1, 42, 50–7, 88, 157
defeasibility 135 n. 7, 164–6, 168, 171, 181 Fodor, J. 115
degrading, of content 141, 188, 189, 190, 199, 208 following a rule, see rules
Dennett, D. 38 n. 15 Frege-Geach-Ross problem 64
Descartes, R. 1, 4, 17, 18–21, 44, 94 Freud, S. 7, 10 n. 16, 25 n. 1
DES 161
descriptive experience sampling 207 Gallois, A. 4 n. 7, 75–9, 99–100, 113,
detectivism 161 n. 11
defined 15 Ganis, G. 186
and self-knowledge Geach, P. 44 n. 23, 48 n. 28, 64, 94 n. 22
of belief 115–16 Gertler, B. 16 n. 25, 79 n. 5, 126–7
of desire, intention, and disgust 182 Gettier, E. 76–7, 77 n. 2, 107
dialogicality 206 Glüer, K. 146
Dienes, Z. 172 n. 24 Goldenberg, G. 187
DIS 178 Goldman, A. 16, 25 n. 1, 157 n. 2
dissociations 133, 134 n. 5, 157–8, 185 n. 6 Gordon, R. M. 4 n. 7, 89 n. 18, 157 n. 1
Dogramaci, S. 119 n. 21 Graham, K. S. 185 n. 6
Doi, T. 173 n. 28 Gregory, D. 205 n. 25, 206 n. 26
doxastic schema, defined 75 Gregory, R. L. 142
Dretske, F. 3 n. 6, 83–7, 90–2, 95–6, 100 Grice, H. P. 169 n. 21
Dunlap, K. 197–8 groundlessness 42–3, 51 n. 2, 69, 70, 156
Dunlosky, J. 1 n. 1, 118, 120
Dworkin, R. 178 Haber, R. N. 186
Haidt, J. 181 n. 38
E=K 2, 47, 172 hallucination 145, 200
economy of pain 150–7
defined 14 Halpern, A. R. 199 n. 20
and self-blindness 158 Hamlyn, D. W. 17, 173
and self-knowledge Hansen, T. 38 n. 15
of belief 112–14, 121 Harman, G. 15, 169 n. 21
of desire, intention, and disgust 182 Hartmann, J. 133 n. 4
Edgeley, R. 4 n. 7 Hellie, B. 4 n. 8, 77 n. 3
Ekman, P. 173 n. 26 Hilbert, D. 133 n. 4, 139 n. 13
Evans, G. Hill, C. 149 n. 28
and belief-independence 143–4 Hodges, J. R. 185 n. 6
and immunity to error through Hoffman, J. E. 187 n. 9
misidentification 66, 67 Holland, R. F. 197
and knowing one lacks a belief 117 Holton, R. 119 n. 21, 169 n. 21
and the self 94 n. 23 Humberstone, L. 104 n. 7
and transparency 3–4, 21–2, 50 n. 1, 57–8, Hume, D. 17–18, 31, 93–6, 187–8, 191, 197
60–1, 88, 102, 103, 111, 112, 114, 116, Hurlburt, R. T. 205, 206 n. 26, 207
124, 128, 129–30, 142–3, 156, 159 Hyman, J. 154 n. 33
Everson, S. 17 n. 27 hypnosis 172 n. 24
evidence, see E=K
external world skepticism 19–21, 208 IMAG-DUCK 196
externalism about mental content 10–11, IMAG-DUCK-DAB 197
29–30, 35–6 imagery debate 193 n. 15
extravagance 14, 16, 28, 32, 48, 116, 158 immunity to error through misidentification
31, 66–70
Fallon, A. E. 174–5, 180 n. 37 infallibility 7, 11, 13, 34, 111
Falvey, K. 16 n. 24, 60 n. 16 inference
Fara, D. G. 166 n. 16 and Boghossian’s paradox 31
Farkas, K. 8 n. 14 and reasoning 14–15, 100–1
Fernández, J. 5, 48 n. 28, 50 n. 1, 167 n. 18 from world to mind 23
Fernyhough, C. 203 n. 23, 205–6 inferential account of self-knowledge,
Fine, G. 20 n. 35 defined 14–15
Finkelstein, D. 15 n. 23, 16 n. 24, 40–2, 157 n. 2 INT 169
Intraub, H. 187 n. 9 Mercier, H. 100–1 n. 1
interoception 147 MERE-IMAG-DUCK-DAB 198
Metcalfe, J. 1 n. 1, 118, 120
Jack, R. E. 173 n. 27 Miller, W. I. 174 n. 30, 176 n. 32
Jackson, F. 143 n. 21 mindreading 12 n. 18, 16 n. 26, 28
James, W. 24, 207 n. 28 Mogensen, J. 38 n. 16
Jellinek, J. S. 192 n. 14 Moltmann, F. 30 n. 7
Johnston, M. 150 n. 29, 154 Moore, G. E. 2–3, 4 n. 8, 17 nn. 28–9, 97;
Jones, S. R. 203 n. 23 see also Moore’s paradox
Moore’s paradox 39, 45–7, 113
Kant, I. 9, 18, 20–1 Moran, R. 5, 16, 50, 63 n. 19, 102, 123, 127, 156,
Kaplan, D. 104 n. 5 159, 167 n. 18
Katsafanas, P. 18 n. 33 and inner sense 38–40, 48 n. 28
Kind, A. 47 n. 26 and puzzle of transparency 4, 77, 79–83, 100
Kitaoka, A. 143 n. 21 and rational agency and self-constitution
KK principle 10 n. 17, 116 57–62
KNOW 116, 128, 140, 183 and rule-following 102 n. 3
Kosman, L. A. 17 n. 28 and uniformity assumption 50, 157
Kosslyn, S. M. 186, 193 n. 15 Morin, A. 208 n. 29
Kripke, S. 67–9, 77, 93–4, 196
Kunda, Z. 1 n. 2 Nagel, T. 160 n. 10
nausea 173–4, 176–8, 179–80
Laeng, B. 186 necessitation, rule of 103–4
Langdon, R. 206 n. 26 neo-Ryleanism, defined 12 n. 18; see also
Langland-Hassan, P. 203 n. 23 Ryleanism; Ryle, G.
Langton, R. 21 n. 36 Newell, B. R. 13 n. 19
language of thought 115, 200–1 Nichols, S. 16, 25 n. 1, 48 n. 29, 115–16, 158
Lashley, K. 10 Nisbett, R. E. 13
Lawlor, K. 167 n. 18 nociceptors 148
Lepore, E. 54, 55 n. 12 NODES 166
Lichtenberg, G. C. 93, 94 non-inferential knowledge 19, 106, 125; see also
Locke, J. 9, 18, 96 avowals
Loftus, E. F. 186 n. 7 Noordhof, P. 154 n. 33
Ludwig, K. 54, 55 n. 12 NOTBEL 118
luminosity 7 n. 11, 38 NOVIEW 118
Lycan, W. G. 25 n. 1, 90 n. 19, 140
Lyons, J. 10 n. 16, 33 n. 8 object perception model 27–8, 32
O’Callaghan, C. 139 n. 14, 140
MacKay, D. G. 199 n. 20, 206 n. 26 Okada, H. 199 n. 20
Mackie, J. L. 179 olfactory world 139–40, 185
Manley, D. 121–2 n. 22 Overgaard, M. 38 n. 16, 208
Martin, M. G. F. 150 n. 29
Matsuoka, K. 199 n. 20 Pacherie, E. 146 n. 25
Matthen, M. 139 n. 14 PAIN 149
McCarthy-Jones, S. 206 n. 26 PAIN-FOOT 149
McDowell, J. 16 n. 24, 42–3, 51 n. 3, 53 n. 5, 178 parenthetical use of ‘believe’ 63 n. 19,
McFetridge, I. 9 n. 15 169 n. 22
McGinn, C. 9 n. 15, 187 n. 10 Pasnau, R. 17
McHugh, C. 116 n. 18 Paul, S. K. 167 n. 19, 169 nn. 21–2, 171 n. 23, 172
McKinsey, M. 4–5, 8–11, 51 Pautz, A. 136 n. 10
MEM-DUCK 194 Peacocke, C. 15 n. 22, 44 n. 24, 60 n. 16, 136 n. 10,
MEM-DUCK-DAB 194 139 n. 12, 146
memory peculiar access
episodic 141 n. 17, 184–95 defined 5
procedural 184 explained for belief 108–9
semantic 183–5, 207 n. 28 explained for desire, intention, and
mental imagery 7 n. 10, 185–208 disgust 181–2
Peels, R. 7 n. 10 and speech, inner or unstudied 183, 198–9,
Perky effect 186, 187, 199 n. 20 200, 201–3
reverse-Perky effect 187 and transparency 128, 129 n. 1, 135
Perky, C. W. 186–7, 187 nn. 8–9, 199 n. 20 Ryleanism
Perl, E. R. 148 defined 12
Perner, J. 172 n. 24 and detectivism 16
p-fact 149 and economy 14
Philips, I. 208 and transparency account 204
phonological imagery 199–201 see also neo-Ryleanism; Ryle, G.
Pitcher, G. 149 n. 28, 151–4
Pitt, D. 198 n. 19 safety 106–8, 109–10, 116–17, 121–2 n. 22, 130,
Plato 16, 183 140, 142, 161–2, 178, 194–5
Price, H. H. 145 Salow, B. 116 n. 18
privileged access Samoilova, K. 157 n. 3, 169 n. 22
defined 5 Sartre, J. P. 17 n. 28
explained for belief 109–12 Scanlon, T. 158–9
explained for desire, intention, and Scarry, E. 89 n. 17
disgust 181–2 Schellenberg, S. 150 n. 29
probability 52–3, 105, 119–21, 131 Schiffer, S. 115
proprioception 48, 67, 84–5, 147, 150, 176 Schueler, G. F. 160 n. 10
Putnam, H. 29 Schwitzgebel, E. 7 n. 10, 97 n. 25, 157 n. 3,
207 n. 27
rational agency, see Moran, R. Searle, J. R. 135–7
reasoning, see inference secondary quality account of color 180
Recanati, F. 136 n. 9 SEE 139
Reid, T. 88–9, 92 SEE-DUCK 189
REMEMBER 183 Segal, S. J. 186
response-dependence 15 n. 23, 178–81 self
Robins, R. W. 141 n. 17 -blindness 43–7, 49, 78, 133 n. 5, 158
Robinson, T. 167 n. 17 -constitution, see Moran, R.
Roche, M. 158 n. 4 -intimation 7–8, 37–8, 42 n. 20, 44, 102, 179
Rosenthal, D. 48 n. 28 -telepathy 39
Rozin, P. 174–5, 176, 180 n. 37 -verifying, see avowals; rules, self-verifying
rules Sellars, W. 70
epistemic 100–1 Setiya, K. 124 n. 23, 167 n. 19, 169
following 101–2 s-fact 199, 200
good and bad 102 s–-fact 199
neutral 102 Shah, N. 127 n. 25
practically self-verifying 140 Shanks, D. R. 13 n. 19
schematic 102 Shergill, S. S. 206
self-verifying 104 Sherrington, C. S. 148
strongly practically self-verifying 162 Shoemaker, S. 80, 87
strongly self-verifying 107 and detectivism 16
trying to follow 107 and economy 14
see also BEL; BEL-3; CONFIDENCE; DES; DIS; and immunity to error through
IMAG-DUCK; IMAG-DUCK-DAB; INT; KNOW; misidentification 31, 66, 67
MEM-DUCK; MEM-DUCK-DAB; MERE-IMAG- and inner sense 26–8, 30–1, 32, 43–7, 48–9
DUCK-DAB; NODES; NOTBEL; NOVIEW; and self-blindness 43–7, 49, 158
PAIN; PAIN-FOOT; REMEMBER; SEE; SEE-DUCK; and transparency 3, 61, 121 n. 22, 161 n. 11
THINK; THINK-ALOUD; THINK-IMAG-DUCK; see also Moore’s paradox; self, -blindness
THINK-THAT Siegel, S. 136 n. 10, 137 n. 11
Russell, B. 16 n. 25, 25 n. 1, 97–8 Simons, M. 63 n. 19
Ryle, G. 3 n. 4, 7 n. 11, 21–2, 24, 43, 51 n. 4, 135 Smart, J. J. C. 89
and neo-Ryleanism 12 n. 18 Smith, A. D. 145, 146 n. 25
and peculiar and privileged access 5, 9–10, Smith, M. 160 n. 10, 163 n. 14
11–13 Smithies, D. 110 n. 12, 121 n. 22
and Ryleanism 12 Soames, S. 64 n. 20
Sosa, E. 106, 117, 118 uniformity assumption 157–8
Sperber, D. 100–1 n. 1 unsupported self-knowledge 51, 124, 125, 178
Squire, L. R. 184
Stanley, J. 184 n. 3 Valaris, M. 104 n. 7, 121 n. 22
Stich, S. 16, 25 n. 1, 48 n. 29, 115–16, Van Cleve, J. 144 n. 24
156–7 Vazire, S. 1 n. 1
Stoljar, D. 110 n. 12, 117 n. 19, 121 n. 22 Velleman, D. J. 127 n. 25, 169
Stone, T. 146 Vendler, Z. 1
Stroud, B. 21 n. 37 v-fact 136–41
suppositional reasoning 15 n. 21, 81 n. 10, v–-fact 189
121 n. 22 visual world 139
Sutin, A. R. 141 n. 17 visualized world 188–9, 190, 193, 195–6

Ten Elshof, G. 25 n. 1, 26 n. 2 Warfield, T. 108 n. 9


THINK 202 Williamson, T. 2, 14, 106, 110 nn. 12–13, 119 n.
THINK-ALOUD 201 21, 116, 184 n. 3, 208 n. 30
THINK-IMAG-DUCK 204 Wilson, M. 17 n. 30
THINK-THAT 204 Wilson, T. D. 1 n. 1, 11–13
Thöle, B. 50, 54 Wittgenstein, L. 42 n. 21, 63, 66, 192
Thomas, N. J. T. 187 n. 8 Kripke’s Wittgenstein 77, 93–4
Thomasson, A. 3 n. 5 and transparency 2–3, 74, 99
Thomson, J. J. 159 n. 6 Woodworth, R. S. 207 n. 28
thought insertion 203 n. 23 world
Titchener, E. B. 96–7 of imagined speech 205
toothache 67, 74, 88–94 of inner speech 199, 202, 205
transforming, of content 141, 187–90, 195, of pain 149–51, 155
199, 208 see also auditory world; olfactory world;
transparency visual world; visualized world
account 22, 108, 115, 208 Wright, C. 3 n. 4, 15 n. 23, 16 n. 24, 26,
condition 58, 79–82 33–5, 37, 38, 42–3, 55 n. 12, 173 n. 25
procedure Wu, W. 206 n. 26
as an inference 15, 75
introduced 4 Young, A. 146
Tulving, E. 184, 185 n. 6, 190
Tye, M. 136 n. 10, 149 n. 28, 150 n. 29, Zatorre, R. J. 199 n. 20
154 n. 33, 193 n. 15 Zeman, A. 208 n. 29
twin earth, see externalism about mental content zombies 83–7, 90–2, 133 n. 5