
Linglib.Phenomena.ScalarImplicatures.Studies.PottsEtAl2016

@cite{potts-etal-2016}: Embedded Implicatures as Pragmatic Inferences #

• @cite{potts-etal-2016}: Potts, Lassiter, Levy & Frank. "Embedded Implicatures as Pragmatic Inferences under Compositional Lexical Uncertainty." Journal of Semantics 33(4): 755–802.
• @cite{chemla-spector-2011}: Chemla & Spector. "Experimental Evidence for Embedded Scalar Implicatures." Journal of Semantics 28(3): 359–400.

Empirical anchor: @cite{chemla-spector-2011} #

The 3-players × 3-outcomes architecture is structurally the same as CS11's every/exactly one/no × some/all design (CS11 uses 6 letters × 3 cell states in Exp. 1 and 3 letters in Exp. 2). The LU model's predictions — DE contexts prefer the weak lexicon (the global NNN reading), UE contexts prefer the strong one (the embedded SSS implicature) — match CS11's qualitative findings: STRONG > WEAK in universal contexts (Exp. 1) and LOCAL > FALSE in non-monotonic contexts (Exp. 2). The LU model is silent on the attitude-verb gradient (think > want > all > must, from @cite{geurts-pouscoulous-2009}), which CS11 likewise does not test.

The Puzzle #

Scalar implicatures interact asymmetrically with logical operators:

• In upward-entailing (UE) contexts, the implicature survives embedding: "every player hit some of his shots" is enriched to "some but not all" for each player (the SSS reading).
• In downward-entailing (DE) contexts, embedded enrichment is blocked: "no player hit some of his shots" is read globally as "no player hit any of his shots" (the NNN reading).

The Model: Compositional Lexical Uncertainty #

The key innovation is lexical uncertainty: L1 marginalizes over possible lexica (refinements of "some") rather than using a fixed literal semantics. Two lexica:

• weak: "some" means "at least one" (compatible with "all")
• strong: "some" means "some but not all"

This uses the standard RSAConfig latent variable mechanism: Latent := Lexicon. No special infrastructure needed — the same mechanism handles observations (@cite{goodman-stuhlmuller-2013}), scope readings (@cite{scontras-pearl-2021}), and QUDs (@cite{kao-etal-2014-hyperbole}).

Architecture #

The experiment (Section 6) uses 3 players, each with outcome N (nothing) / S (scored but not aced) / A (aced). The 10 equivalence classes are the multisets of 3 outcomes. Utterances are PlayerQ × ShotQ (outer × inner quantifier): "every/exactly one/no player hit all/none/some of his shots."

Predictions #

The asymmetry arises from monotonicity:

• Under a DE operator ("no ..."), the strong lexicon widens the set of true worlds, making the utterance less informative, so L1 favors the weak lexicon and the global reading.
• Under a UE operator ("every ..."), the strong lexicon narrows the set of true worlds, making the utterance more informative, so L1 favors the strong lexicon and the embedded reading.

World state as equivalence class over 3 players' outcomes. Each player's outcome: N (nothing), S (scored but not aced), A (aced). 10 classes = multisets of size 3 from {N, S, A}.
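
As a reading aid, here is a minimal Lean sketch of this state space; constructor and field names are illustrative, not the actual linglib declarations:

```lean
/-- Per-player outcome; a hypothetical sketch of the three-way distinction. -/
inductive Outcome
  | N  -- hit nothing
  | S  -- scored, but did not ace
  | A  -- aced
  deriving Repr, DecidableEq

/-- A world: the multiset of the 3 players' outcomes, represented as an
ordered triple (read under a canonical ordering N ≤ S ≤ A, giving the
10 equivalence classes). -/
structure World where
  p1 : Outcome
  p2 : Outcome
  p3 : Outcome
  deriving Repr, DecidableEq
```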


Inner quantifier: over a player's shots.
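
A sketch of this type, using the constructor spellings that the predCount docstring below refers to (trailing underscores avoid clashes with core names):

```lean
/-- Inner quantifier: over a single player's shots. -/
inductive ShotQ
  | all    -- "... hit all of his shots"
  | none_  -- "... hit none of his shots"
  | some_  -- "... hit some of his shots"
  deriving Repr, DecidableEq
```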


Outer quantifier: over players.
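
A matching sketch for the outer quantifier (constructor names hypothetical):

```lean
/-- Outer quantifier: over the 3 players. -/
inductive PlayerQ
  | every       -- "every player ..."
  | exactlyOne  -- "exactly one player ..."
  | no          -- "no player ..."
  deriving Repr, DecidableEq
```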


Utterance: outer quantifier × inner quantifier, plus null.
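
Sketched as a product of the two quantifier types plus a null alternative (in RSA models the null utterance is standardly treated as trivially true):

```lean
/-- Utterance: "PQ player hit SQ of his shots", or the null utterance. -/
inductive Utterance
  | quant (pq : PlayerQ) (sq : ShotQ)
  | null
  deriving Repr, DecidableEq
```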


Lexicon: how "some" is interpreted.
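
A sketch of the two-way lexicon from the model description above:

```lean
/-- Lexicon: the latent refinement of "some". -/
inductive Lexicon
  | weak    -- "some" = at least one (compatible with "all")
  | strong  -- "some" = some but not all
  deriving Repr, DecidableEq
```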

def PottsEtAl2016.predCount (sq : ShotQ) (lex : Lexicon) (w : World) : Nat

Count of players satisfying the inner predicate, under a given lexicon:

• all: number who aced
• none_: number who did nothing
• some_: depends on lexicon:
  • weak: number who scored (≥ 1 shot)
  • strong: number who scored but did not ace
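
A sketch of what this function plausibly computes, per the docstring and building on the type sketches above (not the library's definition; the Nat return type is inferred from "count"):

```lean
/-- The three players' outcomes as a list. -/
def World.outcomes (w : World) : List Outcome := [w.p1, w.p2, w.p3]

/-- Count of players satisfying the inner predicate under a lexicon. -/
def predCount (sq : ShotQ) (lex : Lexicon) (w : World) : Nat :=
  w.outcomes.countP fun o =>
    match sq with
    | .all   => o matches .A                     -- aced
    | .none_ => o matches .N                     -- did nothing
    | .some_ =>
      match lex with
      | .weak   => o matches .S || o matches .A  -- scored ≥ 1 shot
      | .strong => o matches .S                  -- scored but did not ace
```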

@cite{potts-etal-2016} lexical uncertainty model.

Latent variable = Lexicon (weak vs. strong "some"). L0 is the literal listener under a fixed lexicon; S1 scores utterances by listener belief, rpow(L0(w|u), α); L1 marginalizes over lexica with a uniform prior.

Uniform priors, α = 1, no utterance cost. The qualitative predictions (DE blocking, UE enrichment) hold across a range of rationality parameters.
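
For reference, a sketch of this recursion in standard RSA notation (with the uniform priors, α = 1, and zero cost stated above):

$$L_0(w \mid u, \mathcal{L}) \;\propto\; [\![u]\!]^{\mathcal{L}}(w)\, P(w)$$

$$S_1(u \mid w, \mathcal{L}) \;\propto\; L_0(w \mid u, \mathcal{L})^{\alpha}$$

$$L_1(w \mid u) \;\propto\; P(w) \sum_{\mathcal{L}} P(\mathcal{L})\, S_1(u \mid w, \mathcal{L})$$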


Lexica agree on all non-"some" utterances. The lexicon only affects the interpretation of "some"; "all" and "none" are unambiguous.
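
Given the predCount sketch above, the agreement fact has a short proof shape (hypothetical statement, not the library's lemma):

```lean
/-- The lexicon is irrelevant whenever the inner quantifier is not `some_`. -/
theorem predCount_lex_irrel (sq : ShotQ) (l₁ l₂ : Lexicon) (w : World)
    (h : sq ≠ .some_) : predCount sq l₁ w = predCount sq l₂ w := by
  cases sq with
  | all    => rfl
  | none_  => rfl
  | some_  => exact absurd rfl h
```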

DE context: strong "some" widens the set of true worlds relative to weak. Under "no player hit some of his shots":

• Weak "some": only NNN satisfies (1 world)
• Strong "some": NNN, NNA, NAA, AAA satisfy (4 worlds)

Widening makes the utterance less informative under the strong lexicon.

UE context: strong "some" narrows the set of true worlds relative to weak. Under "every player hit some of his shots":

• Weak "some": SSS, SSA, SAA, AAA satisfy (4 worlds)
• Strong "some": only SSS satisfies (1 world)

Narrowing makes the utterance more informative under the strong lexicon.
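
These world counts can be spot-checked with a small truth predicate over the sketches above (hypothetical; the module's actual utteranceTruth may differ):

```lean
/-- Truth of "PQ player hit SQ of his shots" under a lexicon. -/
def truth (pq : PlayerQ) (sq : ShotQ) (lex : Lexicon) (w : World) : Bool :=
  match pq with
  | .every      => predCount sq lex w == 3
  | .exactlyOne => predCount sq lex w == 1
  | .no         => predCount sq lex w == 0

#eval truth .no .some_ .weak ⟨.N, .N, .N⟩       -- true:  NNN, the sole weak world
#eval truth .no .some_ .strong ⟨.N, .A, .A⟩     -- true:  NAA also counts under strong
#eval truth .every .some_ .strong ⟨.S, .S, .S⟩  -- true:  SSS, the sole strong world
#eval truth .every .some_ .strong ⟨.S, .S, .A⟩  -- false: the acing player fails strong "some"
```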

                          "No player hit some of his shots" → NNN preferred.

                          Under the weak lexicon, only NNN makes the utterance true (1 world, maximally informative). Under the strong lexicon, NNN, NNA, NAA, and AAA all make it true (4 worlds, less informative). L1 marginalizes over lexica weighted by informativity, preferring the weak lexicon for this utterance. Result: NNN receives the highest posterior — the global reading.

                          "Every player hit some of his shots" → SSS preferred.

                          Under the strong lexicon, only SSS makes the utterance true (1 world, maximally informative). Under the weak lexicon, SSS, SSA, SAA, and AAA all make it true (4 worlds, less informative). L1 marginalizes and prefers the informative strong lexicon for this utterance. Result: SSS receives the highest posterior — the embedded implicature.

The 6 qualitative findings from the @cite{potts-etal-2016} LU model: 3 DE blocking predictions (global reading preferred under "no") + 3 UE enrichment predictions (enriched reading preferred under "every").


All findings.


Map each empirical finding to the RSA model prediction that accounts for it.


The RSA model accounts for all 6 qualitative findings from @cite{potts-etal-2016}.

The outer quantifiers "every" and "no" in the @cite{potts-etal-2016} model agree with the generic quantity-domain semantics from Phenomena.ScalarImplicatures.QuantityDomain.meaning. This grounds the stipulated utteranceTruth in the shared quantifier infrastructure.

See also: GoodmanStuhlmuller2013PMF's qMeaning definition (uses the same QuantityDomain.meaning-derived shape).

The @cite{potts-etal-2016} predictions connect to three other parts of linglib:

1. someAllBlocking (ScalarImplicatures.Basic): The empirical datum that "some" implicatures are present in UE and blocked in DE contexts. The Potts model derives both sides: UE enrichment (§7) and DE blocking (§6).

2. Geurts2010 (ScalarImplicatures.Studies.Geurts2010): Notes that the minimal LU model inverts the predictions, but "the full Potts et al. model derives the correct pattern." The theorems here are the formal backing.

3. EmbeddedSIPrediction (LexicalUncertainty.Compositional): Tracks embedded SI predictions by context type. The Potts model demonstrates the negation case: the local reading is dispreferred in DE contexts (global NNN preferred).

The Potts model matches the someAllBlocking empirical pattern: UE enrichment present (implicatureInUE = true) and DE blocking present (implicatureInDE = false).