24 April 2017, CoSaQ seminar, Fabian Schlotterbeck
One particularly interesting fact about the processing of quantifiers is that this seemingly uniform class of expressions exhibits considerable variation in how difficult its members are to process. Several approaches to characterizing the processing difficulty of individual quantifiers have emerged in recent years. With regard to verification – as opposed to comprehension, for example – the most influential are (1) automata-theoretic approaches, (2) logical-form-based approaches and (3) approaches based on the interface transparency thesis, which posits a transparent relationship between semantic representations and the verification procedures that are actually realized within the cognitive architecture. In my talk, I will evaluate these approaches and their predictions against experimental data from the literature as well as from my own work. I will focus on two specific test cases. The first is quantified reciprocal sentences such as ‘most dots are directly connected to each other,’ which, depending on the quantifier, may differ in computational complexity (Szymanik 2010). The second is upward- vs. downward-entailing comparative modified numerals such as ‘more than five’ vs. ‘fewer than five,’ which are commonly observed to differ in processing difficulty (Koster-Moeller, Varvoutis & Hackl 2008). Concerning the latter, a processing model for sentence-picture verification is proposed that integrates key insights of the above-mentioned approaches. Moreover, experimental data are presented that confirm predictions of this integrated processing model.
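The automata-theoretic idea behind the complexity differences mentioned above can be sketched informally: verification is modeled as scanning the objects in a scene one by one, and the memory a quantifier requires determines its automaton class. The Python sketch below is an illustration under that assumption (it is not from the talk, and the function names are hypothetical): ‘more than n’ needs only a counter that saturates at n + 1, so a finite automaton suffices, whereas ‘most’ compares two unbounded tallies and thus exceeds finite-state memory.

```python
# Illustrative sketch of the automata-theoretic view of quantifier
# verification (assumption based on the literature, e.g. Szymanik 2010):
# a scene is a sequence of booleans, one per object, recording whether
# that object satisfies the predicate.

def more_than(n):
    """'more than n' can be verified with a bounded counter, i.e. by a
    finite automaton: states 0 .. n+1 suffice, so the counter saturates."""
    def verify(bits):
        count = 0
        for b in bits:
            if b:
                count = min(count + 1, n + 1)  # bounded state set
        return count > n
    return verify

def most(bits):
    """'most' compares two counts that can grow with the scene size,
    so its verification needs more than finite-state memory
    (pushdown-style counting)."""
    positives = sum(1 for b in bits if b)
    return positives > len(bits) - positives

# 'More than five dots are blue' in a scene with six blue dots out of eight:
more_than_five = more_than(5)
scene = [True] * 6 + [False] * 2
print(more_than_five(scene))  # True
print(most(scene))            # True: 6 of 8
```

The point of the sketch is only the contrast in memory demands: the saturating counter in `more_than` never distinguishes more than n + 2 states, while `most` must track a difference that is unbounded in the size of the scene.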