Scope asymmetry, QUDs, and exhaustification

Teodora Mihoc @ LingedIn, May 4, 2018

1 Data

Two/every-not scope ambiguities

(1) Every horse didn't jump over the fence.
    a. every > not (surface scope): None of the horses jumped over the fence.
    b. not > every (inverse scope): Not all of the horses jumped over the fence.

(2) Two horses didn't jump over the fence.
    a. two > not (surface scope): There are two horses that didn't jump.
    b. not > two (inverse scope): It is not the case that there are two horses that jumped.

Inverse scope endorsed for every-not but not for two-not

A series of experiments by Musolino and Lidz (2003) found the following: in a context where only the inverse scope reading is true, adults typically endorse the ambiguous every-not utterance, (3), but not the ambiguous two-not utterance, (4).

(3) Context: Two out of three horses jump over a fence.
    Every horse didn't jump over the fence. (endorsed)
    a. every > not (surface scope, F)
    b. not > every (inverse scope, T)

(4) Context: One out of two horses jumps over a fence.
    Two horses didn't jump over the fence. (not endorsed)
    a. two > not (surface scope, F)
    b. not > two (inverse scope, T)

Quantifying scope endorsement for the ambiguous two-not utterance

In contexts where both surface and inverse scope were true:
- Strong surface scope bias (75% surface, 7.5% inverse, 17.5% unclear) for the ambiguous two-not utterance.

(5) Context: Cookie Monster ate 1/3 pizza slices.
    Cookie Monster didn't eat two pizza slices. (75% surface, 7.5% inverse)

    a. two > not (surface scope, T)
    b. not > two (inverse scope, T)

In contexts where only the surface scope was true:
- 100% endorsement for the ambiguous two-not utterance.

(6) Context: 2/4 horses jump over a fence.
    Two horses didn't jump over the fence. (100% endorsement)
    a. two > not (surface scope, T)
    b. not > two (inverse scope, F)

In contexts where only the inverse scope was true, without explicit linguistic contrast:
- 27.5% endorsement for the ambiguous two-not utterance.

(7) Context: 1/2 horses jump over a fence.
    Two horses didn't jump over the fence. (27.5% endorsement)
    a. two > not (surface scope, F)
    b. not > two (inverse scope, T)

In contexts where only the inverse scope was true, with explicit linguistic contrast:
- 92.5% endorsement for the ambiguous two-not utterance.

(8) Context: 1/2 horses jump over a fence.
    Two horses jumped over the rock but two horses didn't jump over the fence. (92.5% endorsement)
    a. two > not (surface scope, F)
    b. not > two (inverse scope, T)

Three puzzles

P1 Two-not true surface vs. two-not true inverse: Why does the ambiguous two-not utterance get endorsement at ceiling (100%) in the (only surface true) 2/4 context but low endorsement (27.5%) in the (only inverse true) 1/2 context?

P2 Two-not true inverse without contrast vs. two-not true inverse with contrast: Why does explicit contrast cause endorsement to increase so dramatically (from 27.5% to 92.5%) in the (only inverse true) 1/2 context?

P3 (Two-not surface vs. inverse) vs. (every-not surface vs. inverse): In default contexts, why is there a surface scope bias for the two-not utterance but not for the every-not utterance?
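The truth-value pattern behind these puzzles can be checked mechanically. Below is a minimal sketch (my own encoding, not from the experiments): a context is a pair (k, n), meaning k individuals out of n jumped (or were eaten), and the two scope readings of a two-not utterance are evaluated against it.

```python
# Sketch (hypothetical encoding): a context is (k, n) = k of n individuals
# jumped / were eaten; evaluate the two readings of "Two X didn't V".

def surface_two_not(k, n):
    """two > not: there are at least two individuals that did not V."""
    return n - k >= 2

def inverse_two_not(k, n):
    """not > two: it is not the case that at least two individuals V-ed."""
    return not k >= 2

# Contexts from (5)-(7): Cookie Monster 1/3, horses 2/4, horses 1/2
for label, (k, n) in {"1/3": (1, 3), "2/4": (2, 4), "1/2": (1, 2)}.items():
    print(label, "surface:", surface_two_not(k, n),
          "inverse:", inverse_two_not(k, n))
# 1/3: both True; 2/4: surface True, inverse False; 1/2: surface False, inverse True
```

The printed pattern reproduces the T/F annotations on the examples above: both readings true in the 1/3 context, only surface true in 2/4, only inverse true in 1/2.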

2 Proposal

2.1 The meaning of numerals / ambiguity based on exhaustification

Horn (1972): a bare numeral such as two comes with a basic at least meaning and then derives its exactly meaning via implicature:

(9) Two horses jumped.
    At least two horses jumped. (entailment)
    ¬(At least three horses jumped.) (implicature)
    = Exactly two horses jumped.

Chierchia et al. (2012) propose that this whole process unfolds via a silent grammatical exhaustivity operator O (named so because of its affinity to a silent only):

(10) O(Two (≥ 2) horses jumped) = Exactly two horses jumped.

The addition of this new operator means that our ambiguous two-not utterances are ambiguous not only with respect to the relative scope of the numeral and negation, but also with respect to whether O is or isn't present. For our purposes this translates into whether the numeral is interpreted as at least or exactly.

2.2 The generation of QUDs (see tables on last page)

I propose that utterances come with certain implicit QUDs for each possible parse. For scopally unambiguous utterances there can still be multiple parses, based on the possibilities for exhaustification.

(11) Two horses jumped over the fence.
    a. At least two horses jumped over the fence. QUD: Did at least two?
    b. Exactly two horses jumped over the fence. QUD: Did exactly two?

(12) Every horse jumped over the fence. QUD: Did every horse jump?

For scopally ambiguous utterances, parses come from the possibilities for exhaustification plus the possible scope configurations.

(13) Two horses didn't jump over the fence.
    a. two > not
       (i) At least two horses didn't jump over the fence. QUD: Did at least two not?
       (ii) Exactly two horses didn't jump over the fence. QUD: Did exactly two not?
    b. not > two
       (i) It is not the case that at least two did. QUD: Did at least two?
       (ii) It is not the case that exactly two did. QUD: Did exactly two?

(14) Every horse didn't jump over the fence.
    a. every > not: QUD: Did all not?
    b. not > every
       (i) Not every horse jumped over the fence. QUD: Did all?

       (ii) Not every horse jumped, but some did. QUD: Did some but not all?

2.3 Endorsement model (part 1)

A pragmatic speaker reasoning about a pragmatic listener will endorse an ambiguous utterance in proportion to how likely a pragmatic listener is to extract from it the true state of the world (Savinelli et al. 2018).

Pragmatic listener strategy #1: Upon hearing an ambiguous utterance, a pragmatic listener is most likely to extract from it the parse that is logically the strongest.

Endorsement criterion #1: Endorse the ambiguous utterance if your intended parse is logically the strongest.

2.4 Solving puzzles 1 and 2

P1 Two-not true surface vs. two-not true inverse: Why does the ambiguous two-not utterance get endorsement at ceiling (100%) in the (only surface true) 2/4 context but low endorsement (27.5%) in the (only inverse true) 1/2 context?

For an ambiguous two-not utterance, the parse that is always logically the strongest is the Exactly two horses didn't jump parse. The fact that this parse is a surface scope parse explains both (1) the high endorsement of the ambiguous utterance in the 2/4 context, where the surface scope reading was true, and (2) the low endorsement of the ambiguous utterance in the 1/2 context, where the surface scope reading was false.

P2 Two-not true inverse without contrast vs. two-not true inverse with contrast: Why does explicit contrast cause endorsement to increase so dramatically (from 27.5% to 92.5%) in the (only inverse true) 1/2 context?

The explicit contrast clause is a scopally unambiguous numeral clause with the QUDs Did at least two horses jump? and Did exactly two horses jump?. Note that these are precisely the QUDs associated with the inverse scope reading of the ambiguous two-not utterance that follows. I argue thus that the effect of the explicit contrast clause is to prime the inverse scope QUDs, and so to implicitly bias towards an inverse scope parse. This explains the high endorsement for the true inverse scope in this context.
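The exhaustification of §2.1 and the strength comparison of §2.3 can be sketched together computationally, under some simplifying assumptions of mine: meanings are sets of states (number of horses that jumped, out of a hypothetical 4-horse domain as in the 2/4 context), O negates the non-entailed scalar alternatives on the at-least scale, and informativity (how few states a parse allows) stands in as a proxy for logical strength, since entailment alone does not totally order these parses.

```python
# Sketch: exhaustification and logical strength over a toy state space.
# A meaning is a set of states; a state is the number of horses (of n) that jumped.

n = 4                      # hypothetical domain size (as in the 2/4 context)
STATES = range(n + 1)

def at_least(k):
    """'At least k horses jumped' as a set of states."""
    return frozenset(s for s in STATES if s >= k)

def O(prejacent, alternatives):
    """Exhaustify: conjoin the prejacent with the negation of every
    alternative it does not already entail."""
    result = set(prejacent)
    for alt in alternatives:
        if not prejacent <= alt:    # alternative not entailed by prejacent
            result -= alt           # so negate (subtract) it
    return frozenset(result)

ALTS = [at_least(k) for k in STATES]
print(sorted(O(at_least(2), ALTS)))   # (10): "exactly two" -> [2]

# The four parses of "Two horses didn't jump over the fence" from (13):
parses = {
    "at least 2 didn't (surface)":  frozenset(s for s in STATES if n - s >= 2),
    "exactly 2 didn't (surface)":   frozenset(s for s in STATES if n - s == 2),
    "not at least 2 did (inverse)": frozenset(STATES) - at_least(2),
    "not exactly 2 did (inverse)":  frozenset(STATES) - O(at_least(2), ALTS),
}

def strongest(parses):
    """Most informative parse(s): true in the fewest states (strength proxy)."""
    m = min(len(v) for v in parses.values())
    return [k for k, v in parses.items() if len(v) == m]

print(strongest(parses))              # -> ["exactly 2 didn't (surface)"]
```

On these assumptions the Exactly two horses didn't jump parse comes out as the most informative of the four parses, in line with the argument above.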
But the strongest parse is a surface scope parse not just in the two-not case but also in the every-not case. How, then, can we explain puzzle 3 (a scope endorsement asymmetry for two-not but not for every-not)?

3 Endorsement model (part 2)

Pragmatic listener strategy #2: Upon hearing an ambiguous utterance, a pragmatic listener is also likely to extract from it the parse that can settle the largest number of QUDs raised by the ambiguous utterance.

Endorsement criterion #2: Endorse the ambiguous utterance if your intended parse is a parse that can settle the most QUDs raised by the ambiguous utterance.

3.1 Solving puzzle 3

P3 (Two-not surface vs. inverse) vs. (every-not surface vs. inverse): In default contexts, why is there a surface scope bias for the two-not utterance but not for the every-not utterance?

For the two-not ambiguous utterance there is a unique parse that can settle all the QUDs associated with all the other parses. This is the Exactly two didn't parse (which was also the logically strongest parse). No other parse can do that. For the every-not ambiguous utterance all the parses can do that to an equal degree (both the surface and the inverse scope parses can settle all the QUDs equally), hence the endorsement of both the surface and the inverse scope.

4 Making sense of additional puzzles

Lots of experimental data (see Savinelli et al. 2018 and references therein) show that, while adults typically endorse the ambiguous every-not utterance in the 2/3 context where only the inverse scope reading is true, children often do not. We could capture this by saying that children don't care about Endorsement criterion #2, so they only endorse if the intended parse is logically the strongest.

5 Conclusion and outlook

Things I've tried to show: We can account for a number of puzzles related to scope endorsement if we adopt a certain view of QUDs and assume that endorsement is affected both by the logical strength of the intended parse relative to the relevant set of QUDs (all or just a subset) and by how well the intended parse can settle the QUDs associated with the ambiguous utterance.

Things I'm wondering about: Endorsement criterion #1 corresponds to a fairly natural (and traditional) notion of maximizing strength. Is there a way to state Endorsement criterion #2 that would be more intuitive/natural?
Adults typically endorse both the surface and the inverse scope readings of every-not ambiguous utterances. I've argued that this happens because, while Endorsement criterion #1 boosts the surface scope reading, Endorsement criterion #2 boosts the surface and the inverse scope readings equally. This should result in good endorsement for both scope readings but with a preference for surface scope. Alternatively, it is possible that there is an additional factor at play, e.g., Endorsement criterion #3: Endorse unambiguous / most economical utterances. This would penalize the ambiguous Every horse didn't jump on the every > not parse, because it would be more economical to say No horse jumped over the fence. This should result in good endorsement for both scope readings and no preference for surface scope. It would be interesting to get endorsement percentages for these readings the way our starting data gave them for two-not; I don't know if they are already available.

References

Chierchia, G., Fox, D., and Spector, B. (2012). Scalar implicature as a grammatical phenomenon. In Semantics: An International Handbook of Natural Language Meaning, volume 3, pages 2297-2331. Berlin & Boston: de Gruyter.

Horn, L. R. (1972). On the Semantic Properties of Logical Operators in English. Indiana University Linguistics Club.

Musolino, J. and Lidz, J. (2003). The scope of isomorphism: Turning adults into children. Language Acquisition, 11(4):277-291.

Savinelli, K. J., Scontras, G., and Pearl, L. (2018). Exactly two things to learn from modeling scope ambiguity resolution: Developmental continuity and numeral semantics. Cognitive Modeling and Computational Linguistics.

Context: 1/2 horses jump over a fence.
Utterance: Two horses didn't jump over the fence.

Surface scope (two > not):
  |¬P ∩ Q| ≥ 2        "2 or more did not"   QUD: Did ≥2 not?   States: 0
  O(|¬P ∩ Q| ≥ 2)     "exactly 2 did not"   QUD: Did =2 not?   States: 0

Inverse scope (not > two):
  ¬(|P ∩ Q| ≥ 2)      "not 2 or more did"   QUD: Did ≥2?       States: 0, 1
  ¬O(|P ∩ Q| ≥ 2)     "not exactly 2 did"   QUD: Did =2?       States: 0, 1
  *O(¬(|P ∩ Q| ≥ 2))  "exactly 1 did"                          States: 1

Table 1: Parses, states, and QUDs for the ambiguous two-not utterance in the 1/2 context

Context: 2/4 horses jump over a fence.
Utterance: Two horses didn't jump over the fence.

Surface scope (two > not):
  |¬P ∩ Q| ≥ 2        "2 or more did not"   QUD: Did ≥2 not?   States: 0, 1, 2
  O(|¬P ∩ Q| ≥ 2)     "exactly 2 did not"   QUD: Did =2 not?   States: 2

Inverse scope (not > two):
  ¬(|P ∩ Q| ≥ 2)      "not 2 or more did"   QUD: Did ≥2?       States: 0, 1
  ¬O(|P ∩ Q| ≥ 2)     "not exactly 2 did"   QUD: Did =2?       States: 0, 1, 3, 4
  *O(¬(|P ∩ Q| ≥ 2))  "exactly 1 did"                          States: 1

Table 2: Parses, states, and QUDs for the ambiguous two-not utterance in the 2/4 context

Context: 2/3 horses jump over a fence.
Utterance: Every horse didn't jump over the fence.

Surface scope (every > not):
  ∀¬     "all did not"        QUD: Did all not?            States: 0
  O(∀¬)  (O vacuous)                                       States: 0

Inverse scope (not > every):
  ¬∀     "not all did"        QUD: Did all?                States: 0, 1, 2
  ¬O(∀)  (O vacuous)                                       States: 0, 1, 2
  O(¬∀)  "some but not all"   QUD: Did some but not all?   States: 1, 2

Table 3: Parses, states, and QUDs for the ambiguous every-not utterance in the 2/3 context
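The state columns of the tables can be recomputed mechanically. A sketch (my own encoding, under the same assumptions as the tables: a state is the number of horses that jumped, out of 2, 4, or 3 horses respectively, with P = jumped and Q = horse):

```python
# Sketch: recomputing the state columns of the parse tables.

def states(n, pred):
    """All states (number of horses that jumped, out of n) where pred holds."""
    return [s for s in range(n + 1) if pred(s, n)]

# Parses of "Two horses didn't jump over the fence" (1/2 and 2/4 contexts)
two_not = [
    ("2 or more did not", lambda s, n: n - s >= 2),
    ("exactly 2 did not", lambda s, n: n - s == 2),
    ("not 2 or more did", lambda s, n: not s >= 2),
    ("not exactly 2 did", lambda s, n: not s == 2),
    ("exactly 1 did",     lambda s, n: s == 1),
]
for n, label in ((2, "1/2 context"), (4, "2/4 context")):
    for name, pred in two_not:
        print(label, name, states(n, pred))

# Parses of "Every horse didn't jump over the fence" (2/3 context, n = 3)
every_not = [
    ("all did not",      lambda s, n: s == 0),
    ("not all did",      lambda s, n: s < n),
    ("some but not all", lambda s, n: 0 < s < n),
]
for name, pred in every_not:
    print("2/3 context", name, states(3, pred))
```

Each printed state list matches the corresponding States column above (e.g., "not exactly 2 did" yields 0, 1, 3, 4 in the four-horse 2/4 context).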