
Generalized Quantifiers, Semantic Automata, and Numerical Cognition

Shane Steinert-Threlkeld February 28, 2012

1 Generalized Quantifiers
Natural language determiners include expressions such as:

every, some, no, at least three, most, John's three, between five and ten, less than half, . . .

The semantics of such terms has been a subject of much research over the past three decades. In this brief introduction, I will develop just the amount of formal apparatus needed to analyze some experimental results on how we process basic sentences containing natural language determiners. For more complete summaries, see [1, 26, 27, 14, 15]. We will focus on natural language determiners that take two predicates as arguments. The denotations of these will be generalized quantifiers (of type ⟨1, 1⟩).1 These are functions

Q : E ↦ Q_E ⊆ P(E) × P(E)

from universes to sets of pairs of properties of that universe.

We typically write Q_E(A, B) instead of (A, B) ∈ Q_E to emphasize the role of the two properties. For example:

every_E(A, B) ⟺ A ⊆ B
some_E(A, B) ⟺ A ∩ B ≠ ∅
at least three_E(A, B) ⟺ |A ∩ B| ≥ 3
most_E(A, B) ⟺ |A ∩ B| > |A − B|
an even number of_E(A, B) ⟺ |A ∩ B| = 2n, for some n ∈ N

Because the space of possible generalized quantifiers is enormous, a natural pursuit is to seek constraints on these denotations that all natural language determiners may satisfy. While the exploration of these potential constraints lies beyond the scope of the present paper, I will explain a few major constraints that will play a pivotal role in the formal development of semantic automata in the
1 In general, a quantifier can have type ⟨k_1, . . . , k_n⟩, where each k_i represents the arity of the ith predicate to which the quantifier applies. So type ⟨1⟩ quantifiers apply to basic one-place predicates (sets of objects) and the type ⟨1, 1⟩ quantifiers currently under consideration apply to pairs of one-place predicates.

next section. Three such constraints are Conservativity (CONS), Isomorphism (ISOM), and Extension (EXT):2

CONS: Q_E(A, B) iff Q_E(A, A ∩ B)
ISOM: Q_E(A, B) iff Q_E′(π[A], π[B]), where π : E → E′ is a bijection
EXT: Q_E(A, B) iff Q_E′(A, B) for every E′ ⊇ E

Intuitively, CONS states that whether Q_E(A, B) holds depends only on entities belonging to A. For this reason, the first argument to a generalized quantifier is often called the restrictor argument. ISOM is an invariance principle stating that a quantifier depends only on the relationship between the sets A and B and not on properties of any individual members therein. EXT is a kind of invariance principle for contexts: enlarging the context in which the relevant subsets exist does not affect whether these sets belong to a quantifier.3 These three constraints, in particular, have the effect that whether sets A and B belong to a quantifier Q satisfying them depends only on the cardinalities |A − B| and |A ∩ B|. In other words, we can represent an element of a quantifier Q satisfying CONS, ISOM, and EXT as a pair of numbers. This ability will feature prominently in the design of our semantic automata. It also motivates a very nice geometric interpretation of such quantifiers; for more information, see [26, p. 27]. The intuitive ideas and the numerical representation just mentioned can be catalogued in the following technical results:

Lemma 1. A quantifier Q satisfies CONS + EXT iff Q_E(A, B) ⟺ Q_A(A, A ∩ B).

This lemma generalizes to n-ary Q, in which case we take the union of the A_i where A appears here. In this paper, however, we consider only binary quantifiers. Using this lemma, we can then prove the following result.

Theorem 1. A binary quantifier Q satisfies CONS, ISOM, and EXT iff for every E, E′ and A, B ⊆ E, A′, B′ ⊆ E′, if |A − B| = |A′ − B′| and |A ∩ B| = |A′ ∩ B′|, then Q_E(A, B) ⟺ Q_E′(A′, B′).

Proof. Suppose Q satisfies CONS, ISOM, and EXT. If |A − B| = |A′ − B′| and |A ∩ B| = |A′ ∩ B′|, then we have bijections between the set differences and between the intersections, which can be composed to generate a bijection from A to A′. Therefore, by ISOM, Q_A(A, A ∩ B) ⟺ Q_A′(A′, A′ ∩ B′). Then, by Lemma 1, Q_E(A, B) ⟺ Q_E′(A′, B′). Going the other direction, ISOM follows immediately. Letting E′ = A′ = A and B′ = A ∩ B, the assumption yields Q_E(A, B) ⟺ Q_A(A, A ∩ B), which, by Lemma 1, is equivalent to CONS + EXT.

2 Van Benthem [26] refers to ISOM as QUANT. I adopt the former name because it is more standard; it turns out, however, that under EXT, ISOM is equivalent to another property usually referred to as QUANT.
3 For reasons that go beyond the scope of the paper, both van Benthem and Westerstahl call quantifiers satisfying these constraints logical. For an interesting new approach to the question of the logicality of quantifiers, see [9].

This theorem formalizes the idea that whether two sets A and B belong to a quantifier satisfying the three relevant constraints depends only on the cardinalities |A − B| and |A ∩ B|. This result motivates the introduction of a binary relation Q^c between cardinal numbers corresponding to each such Q. This relation is (well-)defined by

Q^c(x, y) ⟺ there exist E and A, B ⊆ E such that Q_E(A, B), |A − B| = x, and |A ∩ B| = y.

We can look at our earlier list of generalized quantifiers in these terms:

every^c(x, y) ⟺ x = 0
some^c(x, y) ⟺ y > 0
at least three^c(x, y) ⟺ y ≥ 3
most^c(x, y) ⟺ y > x
an even number of^c(x, y) ⟺ y = 2n, for some n ∈ N
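As an illustrative sketch (not code from the paper; the function names are mine), these cardinality relations can be written directly as predicates on the pair (x, y) = (|A − B|, |A ∩ B|):

```python
# Q^c relations as predicates on x = |A - B| and y = |A ∩ B|.
# Function names are illustrative, not the paper's notation.
def every_c(x, y): return x == 0
def some_c(x, y): return y > 0
def at_least_three_c(x, y): return y >= 3
def most_c(x, y): return y > x
def an_even_number_of_c(x, y): return y % 2 == 0

# "Most As are Bs" holds with |A - B| = 2 and |A ∩ B| = 3:
print(most_c(2, 3))  # True
```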

In the next section, we will see how these quantifiers correspond to words on a 2-letter alphabet and therefore to different kinds of automata according to the language defined by the words accepted.

2 Semantic Automata

In light of the results at the end of the previous section, we here provide a computational interpretation of generalized quantifiers. Given a set A, enumerate its members, assigning to each the number 0 if it belongs to A − B and 1 if it belongs to A ∩ B.4 Thus, A corresponds to a word on the alphabet {0, 1}, i.e. to an element of {0, 1}*. We can recursively define on {0, 1}* two functions #0 and #1 which yield, respectively, the number of 0s and the number of 1s in a given sequence. Now, Q accepts a sequence w ∈ {0, 1}* just when

(#0(w), #1(w)) ∈ Q^c.
In this way, then, a generalized quantifier satisfying CONS, EXT, and ISOM corresponds to a language on the alphabet {0, 1}.5 We should also note that the languages accepted by quantifiers in this sense are permutation-closed: the order in which the elements of our set A were enumerated does not affect whether or not Q will accept the enumeration.
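The encoding and acceptance condition just described can be sketched as follows; `encode` and `accepts` are hypothetical helper names, not the paper's:

```python
# Encode one enumeration of A as a word on {0, 1}: 0 marks membership in
# A - B, 1 marks membership in A ∩ B. Q then accepts the word w just when
# (#0(w), #1(w)) stands in the cardinality relation Q^c.
def encode(A, B):
    return ''.join('1' if a in B else '0' for a in A)

def accepts(Qc, w):
    return Qc(w.count('0'), w.count('1'))

most_c = lambda x, y: y > x          # Q^c for "most"
w = encode([1, 2, 3, 4], {2, 3, 4})  # A = {1, 2, 3, 4}, B = {2, 3, 4}
print(w)                   # 0111
print(accepts(most_c, w))  # True: 3 of the 4 As are Bs

# Permutation-closure: any reordering of A yields a word with the same
# counts, so acceptance does not depend on the enumeration order.
```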

2.1 Finite State Automata and First-Order Definability

Before proceeding to model particular generalized quantifiers by particular automata, an important definition:

Definition 1. A generalized quantifier Q of type ⟨1, 1⟩ is first-order definable iff there is a first-order language L and an L-sentence φ whose non-logical vocabulary contains only two unary predicate symbols P and R such that for any
4 Note that A = (A − B) ∪ (A ∩ B) and (A − B) ∩ (A ∩ B) = ∅.
5 In general, any generalized quantifier corresponds to a language on a four-letter alphabet, where the letters correspond to membership in A ∩ B, A − B, B − A, and E − (A ∪ B). Not only do languages on two-letter alphabets have technical advantages and easier exposition, but the three constraints considered have independent motivations, as alluded to earlier.

model M = ⟨M, P, R⟩,

Q_M(P, R) ⟺ ⟨M, P, R⟩ ⊨ φ.

This definition extends in an obvious way to quantifiers of arbitrary types ⟨k_1, . . . , k_n⟩ and to logics other than first-order logic. For present purposes, however, that level of generality is unnecessary. We say that a quantifier is higher-order definable if it is not first-order definable. In particular, all, some, and at least three are first-order definable:

all_M(A, B) ⟺ ⟨M, A, B⟩ ⊨ ∀x (Ax → Bx)
some_M(A, B) ⟺ ⟨M, A, B⟩ ⊨ ∃x (Ax ∧ Bx)
at least three_M(A, B) ⟺ ⟨M, A, B⟩ ⊨ ∃x ∃y ∃z (x ≠ y ∧ x ≠ z ∧ y ≠ z ∧ Ax ∧ Bx ∧ Ay ∧ By ∧ Az ∧ Bz)

Importantly, most, an even number of, and an odd number of are only higher-order definable. The proofs of such non-definability results are quite complex and lie beyond our present scope.6 We now begin to show the connection between first-order versus higher-order definability and different classes of automata.7 The simplest class of automata is the (deterministic) finite-state automata:

Definition 2. A deterministic finite-state automaton (dfa) is a 5-tuple ⟨Q, Σ, δ, q0, F⟩ where

1. Q is a finite set of states
2. Σ is a finite set of input symbols
3. δ : Q × Σ → Q is a transition function

4. q0 ∈ Q is the start state
5. F ⊆ Q is the set of accepting states

A dfa can be represented graphically. Each state q ∈ Q corresponds to a node. We will often represent these as circles, omitting the name of the node. Final states in F will be represented as double circles. If δ(q, a) = p, we draw a directed arc from q to p labeled a. Note that in the case of our semantic automata, we will always use Σ = {0, 1} (though, again, see footnote 5).

Recall that every^c(x, y) ⟺ x = 0. Therefore, the only words w ∈ {0, 1}* that we want to accept are those where #0(w) = 0. So the automaton for every will start in an accepting state and stay there so long as it only reads 1s. As soon as it reads a single 0, however, the automaton moves to a non-accepting state and remains there. We can represent every by the dfa in figure 1.

Consider a toy example: A = {a, b}, B = {a, b, c}. In this case, A will be represented by the string 11, and the dfa for every will start and stay in the accepting state. If, on the other hand, B = {a, c}, then A will be represented by the string 10. Upon reading the 0, the dfa for every will move to the non-accepting state and end there.

Recall that some^c(x, y) ⟺ y > 0. Therefore, a dfa for some should accept any word w ∈ {0, 1}* which contains any non-zero number of 1s (i.e. such that

Figure 1: A finite state automaton for every.

#1(w) > 0). The reader can verify that the dfa depicted in figure 2 will do just that.

Figure 2: A finite state automaton for some.

One can prove that all first-order definable quantifiers can be simulated by dfas. Motivated by this result, one might hope that only first-order definable quantifiers can be so simulated. It turns out, however, that some higher-order definable quantifiers can also be modeled by finite state automata. Figure 3 shows such an automaton for an even number of.

Figure 3: A cyclic finite state automaton for an even number of.

Making the state on the right an end-state renders figure 3 a (cyclic) finite state automaton for an odd number of.8
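The machines in figures 1 and 2 can be simulated directly. The following is a minimal sketch under my own naming conventions (the state names `q0`, `q1` and the helper `run_dfa` are not from the paper):

```python
# Generic dfa runner: follow the transition function over the word and
# check whether the final state is accepting.
def run_dfa(delta, start, accepting, word):
    state = start
    for symbol in word:
        state = delta[(state, symbol)]
    return state in accepting

# Figure 1, "every": start in the accepting state q0; the first 0 read
# moves the machine to the rejecting sink q1.
every_dfa = ({('q0', '1'): 'q0', ('q0', '0'): 'q1',
              ('q1', '0'): 'q1', ('q1', '1'): 'q1'}, 'q0', {'q0'})

# Figure 2, "some": start in the rejecting state q0; the first 1 read
# moves the machine to the accepting sink q1.
some_dfa = ({('q0', '0'): 'q0', ('q0', '1'): 'q1',
             ('q1', '0'): 'q1', ('q1', '1'): 'q1'}, 'q0', {'q1'})

print(run_dfa(*every_dfa, '11'))  # True:  A = {a, b}, B = {a, b, c}
print(run_dfa(*every_dfa, '10'))  # False: B = {a, c}
print(run_dfa(*some_dfa, '00'))   # False: no element of A is a B
```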
6 But see [27]. Similarly, [2] contains very technical results in this area.
7 For the classic reference on automata theory, see [13].
8 Note that an odd number of is the complement of an even number of; in other words, this denotation is in some sense the negation of the denotation of an even number of. The extent to which semantic automata can be composed is an interesting and important question.

It must be noted, however, that this automaton contains a non-trivial loop between the two states. It turns out, moreover, that this loop is essential, as the following result shows.

Theorem 2. A quantifier is first-order definable iff it can be recognized by a permutation-invariant acyclic finite state automaton.

Proof. See [26, pp. 156-157].
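The essential loop of figure 3 is easy to see in code; a sketch in the same style as above (the state names are mine):

```python
# Two-state cyclic machine for "an even number of" (figure 3): each 1 read
# toggles between the accepting state ("even so far") and the rejecting
# state ("odd so far"); 0s leave the state unchanged.
def even_number_of(word):
    state = 'even'  # start state, accepting
    for symbol in word:
        if symbol == '1':
            state = 'odd' if state == 'even' else 'even'
    return state == 'even'

print(even_number_of('0110'))  # True:  two 1s
print(even_number_of('1011'))  # False: three 1s

# Making 'odd' the accepting state instead yields the automaton for
# "an odd number of".
```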

2.2 Pushdown Automata and Higher-Order Definability

Moreover, it is not the case that all higher-order quantifiers can be simulated by cyclic finite-state automata. To simulate quantifiers such as less than half, we must move to the next level of the machine hierarchy, namely to pushdown automata.9 Intuitively, a pushdown automaton augments a dfa with a stack of memory; this stack is a last-in/first-out data structure, onto which we can push new content and from which we can pop the topmost element.10 The following definition renders these intuitions precise.

Definition 3. A pushdown automaton (pda) is a seven-tuple ⟨Q, Σ, Γ, δ, q0, Z0, F⟩ where

1. Q is a finite set of states
2. Σ is a finite set of input symbols
3. Γ is a finite stack alphabet
4. δ : Q × (Σ ∪ {ε}) × Γ → P(Q × Γ*) is a transition function
5. q0 ∈ Q is the start state
6. Z0 ∈ Γ is the start symbol
7. F ⊆ Q is the set of accepting states

The biggest difference between dfas and pdas lies in the transition function δ. The idea is that δ receives as input the current state, the symbol most recently read, and the symbol at the top of the stack. An output pair (p, γ) indicates that the automaton has moved to state p and replaced the top of the stack with the string γ. In particular, suppose X is the symbol at the top of the stack. If γ = ε (here ε denotes the empty string), then the top of the stack has been popped. If γ = X, then no change has been made. If γ = YZ, then X has been replaced with Z and Y has been pushed onto the stack. While the definition of a pda allows for a string of any length to be pushed, we will work only with strings of length at most 2. Graphically, we represent (p, γ) ∈ δ(q, a, X) by a directed arc from q to p labeled by a, X/γ. Here X/γ is intended to evoke that X has been replaced
For the development of composition of first-order definable quantifiers, see §4 of [4].
9 In particular, we are moving one level up the Chomsky hierarchy [3] of formal grammars. Precisely the regular languages are generated by dfas, while the context-free languages are generated by pushdown automata. That quantifiers such as most and less than half are not computable by dfas follows from the pumping lemma for regular languages.
10 Pictorially, this resembles a stack of plates on a spring-loaded dispenser in a cafeteria.

by γ at the top of the stack. We use ∗ as a wildcard in order to consolidate multiple labels. In all of the following examples, we take Γ = {0, 1, Z0}. With the definition of a pda at hand, we can build a pda to simulate an even number of; see figure 4. The idea is that we push a 1 onto an empty stack as it is read and pop a 1 from the stack when we read a new 1. (We do nothing as 0s are read, since they have no effect.) If an even number of As are Bs is true, then #1(A) = 2n for some n. So after A has been completely processed, the stack should be empty, as indicated by the transition to the accepting state.

Figure 4: A pushdown automaton for an even number of.

Changing the label on the edge to the final state to ε, 1/1 would convert this to a pushdown automaton for an odd number of.11

Figure 5 depicts a pda for less than half. The idea here is that we push 1s and 0s to the stack as often as we can, popping off pairs: if a 1 is read and a 0 is on the stack, we pop the 0, and vice versa. This has the effect of pairing the members of A − B and A ∩ B. Because less than half holds when the former outnumber the latter, there should be only 0s on the stack at the end of this process. Therefore, the transition to the accepting state occurs only when the string of 0s and 1s has been entirely processed and popping off all 0s exhausts the contents of the stack. Changing the labels on the edges of the final two states to pop all 1s would render this a pda for most.12

To understand exactly which quantifiers can be simulated by a pda, we need one more notion.

Definition 4. A quantifier Q is first-order additively definable if there is a formula φ(x, y) in the first-order language with equality and an addition symbol + such that Q^c(a, b) ⟺ ⟨N, +⟩ ⊨ φ[a, b].

It turns out that this notion exactly coincides with simulatability by a pda.
11 Representing the compositionality of higher-order quantifiers via semantic automata seems to be less investigated than the first-order case discussed in note 8. Useful results of which I am unaware may exist in the computation theory literature.
12 It might be argued that this pda corresponds to what Pietroski et al. [20] call a OneToOnePlus strategy. An interesting avenue that will not presently be explored would be to look at whether their experimental methods can be used to distinguish which automaton is used to process a given quantifier. Most, for instance, has multiple automata which accept the same language. Our present focus will be on distinguishing which automaton is used to process a quantifier when automata from different classes can be used to comprehend the quantifier.

Figure 5: A pushdown automaton for less than half.

Theorem 3. A quantifier Q is computable by a pda iff it is first-order additively definable.

Proof. See [26, pp. 163-165].

It should be noted that while the proof of this theorem is quite complicated, it does two things: it provides a recipe to construct a pda for any first-order additively definable quantifier (by representing these as semi-linear sets), and it does so without our crutch of using a $ symbol to indicate end-of-string. The automata for quantifiers such as at least one-third become quite complicated quite quickly; we therefore refer the curious and technically-minded reader to the above citation. For present purposes, the key point of these considerations is that some quantifiers can only be computed by machines augmented with a form of memory. Intuitively, to judge whether a pair of predicates belongs to one of these quantifiers, one must keep track of the cardinality of one predicate while assessing the other. As we will see in the next section, this formal distinction between classes of quantifiers manifests itself both behaviorally and neurologically.
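The cancellation idea behind figure 5 can be sketched with an explicit stack. This is a simulation in the spirit of the paper's description, not a literal transcription of the pda's transition labels; the helper name is mine:

```python
# "less than half": pair off members of A ∩ B (1s) against members of
# A - B (0s) on a stack. Reading a symbol different from the top cancels
# a pair; otherwise the symbol is pushed. Acceptance requires that only
# 0s remain, i.e. that the 0s strictly outnumber the 1s.
def less_than_half(word):
    stack = []
    for symbol in word:
        if stack and stack[-1] != symbol:
            stack.pop()          # cancel a 0/1 pair
        else:
            stack.append(symbol)
    return bool(stack) and all(s == '0' for s in stack)

print(less_than_half('0010'))  # True:  one 1 against three 0s
print(less_than_half('0101'))  # False: exactly half of the As are Bs

# Requiring leftover 1s instead of leftover 0s gives the analogous
# simulation for "most".
```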

3 Experimental Results on Quantifier Comprehension

In this section, I will survey some experimental results on the processing of generalized quantifiers. My goal will be to provide enough experimental evidence to be able to discuss in what way the semantic automata just described can be said to accurately model how humans process generalized quantifiers. First, I will discuss an imaging study by McMillan et al. [16] which attempts to distinguish the neural basis for first-order and higher-order quantifier comprehension.13 After discussing some consequences and limitations of this work, I will move to a
13 I will also draw on a similar study using patients with corticobasal degeneration [17] and the more general discussions of these studies [6, 5].

set of behavioral studies by Szymanik and Zajenkowski [24] which attempt to distinguish the memory demands of quantifiers which can be computed by cyclic finite state automata from those of quantifiers which genuinely require pushdown automata.

3.1 Imaging Results

In [16], McMillan et al. use fMRI to test the hypothesis that all quantifiers recruit inferior parietal cortex associated with numerosity, while only higher-order quantifiers recruit prefrontal cortex associated with executive resources like working memory. To do so, each of 12 healthy right-handed adults (native English speakers) was presented with 120 grammatically simple propositions using a quantifier to ask about a color feature of a visual array. The 120 propositions consisted of 20 instances each of 6 different quantifiers: 3 first-order (at least 3, all, some) and 3 higher-order (less than half, an odd number of, an even number of). Each subject was first shown the proposition alone on a screen for 10s, then the proposition with a visual scene for 2500ms, then a blank screen for 7500ms during which they were to assess whether the proposition accurately portrayed the visual scene. During these assessments, subjects were outfitted with a GE head coil to record fMRI data. After aligning all of the image data to a standard neuroanatomical atlas [25], subtraction techniques were used to isolate differences in activation during the processing of higher-order and first-order quantifiers. These activations were analyzed only for correct judgments (whether the proposition truly or falsely described the visual array). The results run as follows.14 Behaviorally, there was a statistically significant difference between the accuracy of judgments with higher-order quantifiers (mean: 84.5%) and those with first-order quantifiers (mean: 92.3%). In terms of the imaging data, right inferior parietal and bilateral anterior cingulate activation was seen in both the first-order and higher-order cases. In addition, however, higher-order quantifiers recruited right dorsolateral prefrontal cortex, bilateral inferior frontal cortices, and thalamus. The higher-order minus first-order image analysis revealed significant activation bilaterally in both dorsolateral prefrontal and inferior frontal cortices.
These results lead McMillan et al. to two conclusions:

1. All generalized quantifiers have in common number knowledge and activation of inferior parietal cortex.

2. Working memory and prefrontal activation occur only during higher-order quantifier comprehension.

Conclusion (1) follows from previous results linking number knowledge and inferior parietal cortex. Moreover, because this study found no activation in peri-Sylvian regions, the authors conclude that verbal mediation is not required for precise numerical assessments.15 It should also be noted that the study found
14 Note that I am here avoiding some details. In particular, the study also analyzed the recruitment of precise vs. approximate number sense by using visual arrays for at least three where the target number was near three and others where the target number was more distant. 15 See [8, 7] and many references therein for arguments in favor of verbal mediation of precise number.

no significant difference in activation between judgments involving at least three where the relevant class had cardinality near to or distant from three, supporting the role of precise number sense in all such cases. Conclusion (2) follows from previous studies linking inferior frontal and dorsolateral prefrontal cortex with working memory. The study concludes as follows [16, p. 1735]:

We have demonstrated support for anatomical differences in processing first-order and higher-order quantifiers. This difference is consistent with a model of quantifier comprehension implicating number knowledge for all quantifiers, and WM [working memory] only for higher-order quantifiers. These findings honor the distinction between first-order and higher-order quantifiers posited by linguists and logicians, and provide neuroanatomic constraints on the constituents of quantifier knowledge.

In light of the present discussion, McMillan et al. seem to indicate that the structural distinction between quantifiers computable by finite state automata and those computable only by pushdown automata manifests itself neuroanatomically. Moreover, they seem to indicate that working memory is recruited in the processing of all higher-order quantifiers. Recall, however, that two of their three instances of higher-order quantifiers (an even number of and an odd number of) can be computed by cyclic finite state automata. The present discussion, then, motivates the following generalization:

HOQM: All higher-order quantifiers recruit working memory when being processed. In other words, their comprehension should be modelled by pushdown automata.

First and foremost, this generalization may be quite weak. We saw, for instance, that most cannot be modelled by even a cyclic finite state automaton. In order to fully test this hypothesis, it would be useful to know the full extent of higher-order quantifiers computable by cyclic finite state automata.
These quantifiers could then be used in further experimental studies to test whether working memory is solicited in their comprehension. Such a complete classification was given by Mostowski [18, 19]:

Definition 5. For each n ∈ N, the divisibility quantifier (of type ⟨1⟩) D_n is given by

D_n(A) ⟺ |A| = kn, for some k ∈ N.

In other words, D_n(A) states that the number of As is divisible by n.

Theorem 4. Finite state automata accept the class of all quantifiers (of type ⟨1, . . . , 1⟩) definable in FOL(D_n), i.e. in first-order logic augmented with every divisibility quantifier.
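A divisibility quantifier D_n generalizes the two-state parity machine of figure 3 to an n-state cyclic counter. Here is a sketch under the conventions used earlier (the helper name is mine); since D_n has type ⟨1⟩, 1 simply marks membership in A:

```python
# D_n: the state is the number of 1s read so far, modulo n; the sole
# accepting state is 0. With n = 2 this is "an even number of".
def divisible_by(n, word):
    state = 0
    for symbol in word:
        if symbol == '1':
            state = (state + 1) % n
    return state == 0

print(divisible_by(3, '10101'))  # True:  three 1s
print(divisible_by(3, '101'))    # False: two 1s
```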

3.2 Behaviorally Distinguishing Parity and Majority Quantifiers

Szymanik [21] points out what was discussed at the end of the previous section, namely that McMillan et al. treated all higher-order quantifiers as one

class instead of distinguishing between those computable by cyclic dfas and those requiring pdas. Szymanik believes that HOQM is as yet unsubstantiated and so designs two behavioral experiments (reported in [22, 23, 24]) to test whether or not the dfa/pda distinction is more psychologically real than the first-order/higher-order distinction. In a preliminary study [22], reaction time was measured for eight different quantifiers belonging to four groups:

Aristotelian: all, some
Parity: an even number of, an odd number of
Cardinal of high rank: more than seven, less than eight
Proportionality: at least half, less than half

Forty native Polish speakers16 were given 15.5s to assess a proposition and a simultaneously-displayed visual array. Szymanik and Zajenkowski hypothesize that reaction time in this task should correlate with the complexity of the minimal automaton required to process the quantifier. Because the high-rank cardinal quantifiers require (acyclic) dfas with many more states than the cyclic dfas of the parity quantifiers (which have only two states), the expected ordering of reaction times is exactly the order of the list above. Indeed, Szymanik and Zajenkowski found a statistically significant increase in reaction time along that chain of complexity. This result provides prima facie evidence that the relevant constraint on quantifier comprehension is the complexity of the minimal automaton and not (simply) whether the quantifier is first- or higher-order definable. To more fully test the demands on working memory of parity versus proportionality quantifiers, a more complex study was designed [24]. 85 native Polish speakers (40 male, 45 female) were asked to complete two tasks. First, the participants were shown a sequence of either 4 or 6 digits which they were to remember. Following that, a 15.5s quantifier problem followed.
This problem consisted of a proposition and a visual array displayed for 15s followed by a blank screen for 500ms.17 The subjects were asked to decide as soon as possible whether the proposition accurately described the visual array. Propositions were built out of 4 quantifiers: 2 parity quantifiers (DQs; an odd number of, an even number of) and 2 proportional quantifiers (PQs; less than half, more than half). Each quantifier was presented in 8 trials for a total of 32 trials per subject. At the end of each quantifier problem, the subjects were asked to recall the sequence of digits seen before the beginning of the quantifier problem. A correct answer was considered to be both a correct judgment of the quantifier problem and full recall (right digits in right order) of the string of digits. In both the 4-digit and 6-digit cases, the proportionality quantifiers exhibited both increased reaction time and decreased accuracy when compared with the parity quantifiers. Between the 4-digit and 6-digit cases, there was no significant difference in accuracy on parity quantifier judgments, while accuracy on proportional quantifier judgments actually increased. On the digit recall task, 6-digit strings were recalled with lower accuracy than 4-digit strings and with
16 It should be noted that the grammar of the Polish sentences used directly mirrors that of the English ones presented here. 17 The paper does not report how long the subjects were given to memorize the sequence of digits.


no difference according to which type of quantifier was assessed. On the other hand, 4-digit strings were recalled with significantly lower accuracy when proportionality quantifiers, as opposed to parity quantifiers, were assessed. Because proportionality quantifiers take longer to process under additional memory load, Szymanik and Zajenkowski conclude that these quantifiers make stronger demands on working memory than parity quantifiers; the additional memory load therefore interferes with their processing to a greater extent than in the parity case. It should be noted that the equal level of accuracy in 6-digit recall is explained by the alleged fact that the task was too difficult and subjects simply gave up; this also allegedly explains the increase in accuracy on the proportionality quantifiers. While the results of these two studies are compelling, a few issues remain unresolved (and unacknowledged). With regard to the first study, an important omitted class of quantifiers is cardinality of low rank (at least three, at most four, etc.). These quantifiers have dfas with only three or four states. Moreover, because the actual reaction times found in the original study were relatively close, it may turn out that reaction times with these quantifiers are faster than with the parity quantifiers. We might then end up with the ordering: Aristotelian, cardinality of low rank, parity, cardinality of higher rank, proportionality. Were such an ordering to arise, Szymanik would need to provide a more thorough explanation of the notion of complexity of the minimal automaton required to compute a quantifier. Clearly, a two-state acyclic automaton is less complex than a two-state cyclic one. But it is not immediately clear what would make a three- or four-state acyclic automaton less complex than a cyclic two-state one while a seven- or eight-state acyclic automaton is more complex.
With regard to the second study, the behavioral data do not rule out that the parity quantifiers place demands on working memory, only to a lesser degree than the proportionality quantifiers. In particular, if the parity quantifiers still solicit working memory, just to a lesser degree than the proportionality ones, one would still expect them to exhibit lower reaction times (and arguably increased accuracy as well). For instance, the pda presented for an even number of had fewer states and required less memory than that for less than half. In other words, this purely behavioral data does not distinguish between memoryless and memory-soliciting processing of parity quantifiers. Therefore, this data does not directly support or refute the HOQM hypothesis. An obvious next step would be to conduct an imaging study directly analogous to McMillan et al. but with the parity quantifiers and proportionality quantifiers analyzed separately. (Of course, more cases of quantifiers computable by cyclic dfas and others only computable by pdas can be used.) Such a study could better distinguish the mechanisms used to process parity versus proportionality quantifiers and thus shed light on whether these are the same or different mechanisms.

4 Concluding Remarks

I will conclude with a few sketchy remarks about the relationship between language and numerical cognition. McMillan et al. conducted another imaging study using patients with corticobasal degeneration, frontotemporal dementia, and Alzheimer's disease [17]. This study was able to show a dissociation between


precise number sense and language processing, rejecting a strong Whorfian hypothesis that language (in particular, number words) constrains numerical cognition.18 In reviewing these data, Clark and Grossman [6] conclude:

We cannot rule out the seemingly radical hypothesis that special neuroanatomical structures are required for the computation of partial recursive functions and that it is these structures that led to the uniquely human endowments of expressive language and precise number sense.

Given that Hauser et al. [12] have already argued that the ability to compute recursive functions underlies the uniquely human aspects of language and communication, perhaps this hypothesis is not so radical after all. If recursion can also provide a single mechanism to explain both expressive language and precise number sense,19 then the dissociation between language and number could still be explained by a common mechanism underlying both.

References

[1] Jon Barwise and Robin Cooper. Generalized Quantifiers and Natural Language. Linguistics and Philosophy, 4(2):159-219, 1981.

[2] Jon Barwise and Solomon Feferman, editors. Model-Theoretic Logics. Springer Verlag, New York, 1985.

[3] N. Chomsky. On Certain Formal Properties of Grammars. Information and Control, 2(2):137-167, June 1959. doi: 10.1016/S0019-9958(59)90362-6.

[4] Robin Clark. Generalized Quantifiers and Semantic Automata: a Tutorial, 2004. URL ftp://www.ling.upenn.edu/facpapers/robin_clark/tutorial.pdf.

[5] Robin Clark. Generalized Quantifiers and Number Sense. Philosophy Compass, 6(9):611-621, 2011.

[6] Robin Clark and Murray Grossman. Number sense and quantifier interpretation. Topoi, 26(1):51-62, April 2007. doi: 10.1007/s11245-006-9008-2.

[7] S. Dehaene, E. Spelke, P. Pinel, R. Stanescu, and S. Tsivkin. Sources of Mathematical Thinking: Behavioral and Brain-Imaging Evidence. Science, 284:970-974, 1999. doi: 10.1126/science.284.5416.970.

[8] Stanislas Dehaene. The Number Sense: How the Mind Creates Mathematics. Oxford University Press, Oxford, 1997.
18 See [11] for empirical evidence that speakers of a language lacking number words have difficulty with precise numerical tasks. See [10] for an elaboration of this study, which does show some success on one very particular precise numerical task.
19 It should be noted that many animals, primates and birds alike, share with human infants an approximate number sense.


[9] Solomon Feferman. Which Quantifiers are Logical? A combined semantical and inferential criterion. ESSLLI Workshop on Logical Constants, 2011. URL http://math.stanford.edu/~feferman/papers/WhichQsLogical(text).pdf.

[10] Michael C. Frank, Daniel L. Everett, Evelina Fedorenko, and Edward Gibson. Number as a cognitive technology: evidence from Pirahã language and cognition. Cognition, 108(3):819–824, September 2008. doi: 10.1016/j.cognition.2008.04.007.

[11] Peter Gordon. Numerical Cognition Without Words: Evidence from Amazonia. Science, 306:496–499, 2004.

[12] Marc D. Hauser, Noam Chomsky, and W. Tecumseh Fitch. The faculty of language: what is it, who has it, and how did it evolve? Science, 298(5598):1569–1579, November 2002. doi: 10.1126/science.298.5598.1569.

[13] John E. Hopcroft, Rajeev Motwani, and Jeffrey D. Ullman. Introduction to Automata Theory, Languages, and Computation. Addison Wesley, Boston, 2nd edition, 2001.

[14] Edward L. Keenan. The Semantics of Determiners. In Shalom Lappin, editor, The Handbook of Contemporary Semantic Theory, chapter 2, pages 41–63. Blackwell, Oxford, 1996.

[15] Edward L. Keenan and Dag Westerståhl. Generalized Quantifiers in Linguistics and Logic. In Johan van Benthem and Alice ter Meulen, editors, Handbook of Logic and Language, pages 837–893. Elsevier, 1997.

[16] Corey T. McMillan, Robin Clark, Peachie Moore, Christian DeVita, and Murray Grossman. Neural basis for generalized quantifier comprehension. Neuropsychologia, 43(12):1729–1737, January 2005. doi: 10.1016/j.neuropsychologia.2005.02.012.

[17] Corey T. McMillan, Robin Clark, Peachie Moore, and Murray Grossman. Quantifier comprehension in corticobasal degeneration. Brain and Cognition, 62(3):250–260, December 2006. doi: 10.1016/j.bandc.2006.06.005.

[18] Marcin Mostowski. Divisibility Quantifiers. Bulletin of the Section of Logic, 20(2):67–70, 1991.

[19] Marcin Mostowski. Computational semantics for monadic quantifiers. Journal of Applied Non-Classical Logics, 8:107–121, 1998.

[20] Paul Pietroski, Jeffrey Lidz, Tim Hunter, and Justin Halberda. The Meaning of 'Most': Semantics, Numerosity and Psychology. Mind and Language, 24(5):554–585, 2009.


[21] Jakub Szymanik. A comment on a neuroimaging study of natural language quantifier comprehension. Neuropsychologia, 45(9):2158–2160, May 2007. doi: 10.1016/j.neuropsychologia.2007.01.016.

[22] Jakub Szymanik and Marcin Zajenkowski. Comprehension of simple quantifiers: empirical evaluation of a computational model. Cognitive Science, 34(3):521–532, April 2010. doi: 10.1111/j.1551-6709.2009.01078.x.

[23] Jakub Szymanik and Marcin Zajenkowski. Quantifiers and Working Memory. Lecture Notes in Artificial Intelligence, 6042:456–464, 2010.

[24] Jakub Szymanik and Marcin Zajenkowski. Contribution of working memory in parity and proportional judgments. Belgian Journal of Linguistics, 25(1):176–194, January 2011. doi: 10.1075/bjl.25.08szy.

[25] J. Talairach and P. Tournoux. Co-planar Stereotaxic Atlas of the Human Brain. Thieme Medical Publishers, New York, 1988.

[26] Johan van Benthem. Essays in Logical Semantics. D. Reidel Publishing Company, Dordrecht, 1986.

[27] Dag Westerståhl. Quantifiers in Formal and Natural Languages. In D. Gabbay and F. Guenthner, editors, Handbook of Philosophical Logic (vol. 4), pages 1–132. D. Reidel Publishing Company, Dordrecht, 1989.

