
Linglib.Phenomena.ScalarImplicatures.Embedded.Basic

RSA Embedded Scalar Implicatures: Simplified Model (For Analysis) #

@cite{bergen-levy-goodman-2016} @cite{geurts-2010} @cite{potts-etal-2016}

This file implements a simplified 2-lexicon model to analyze why minimal Lexical Uncertainty models fail to derive embedded implicature patterns.

Status #

The ℚ-based RSA evaluation infrastructure (RSA.Eval, boolToRat, LURSA) has been removed. Type definitions and the model limitation analysis are preserved. RSA computations need to be re-implemented using the new RSAConfig framework.

This File's Purpose #

Demonstrates that a minimal 2-lexicon, 3-world model gives inverted predictions, motivating the richer structure in the full model.

World states for embedded scalar scenarios use the canonical SomeAllWorld from Phenomena.ScalarImplicatures.Basic: .none (nobody solved any problems), .someNotAll (someone solved some-but-not-all), .all (someone solved all).
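
A minimal sketch of that type, for orientation (the authoritative definition is the one in `Phenomena.ScalarImplicatures.Basic`; the constructor names below are taken from the description above):

```lean
/-- Sketch of the three-way world space described above; the canonical
    `SomeAllWorld` in `Phenomena.ScalarImplicatures.Basic` is the real source. -/
inductive SomeAllWorld where
  | none        -- nobody solved any problems
  | someNotAll  -- someone solved some but not all of the problems
  | all         -- someone solved all of the problems
  deriving Repr, DecidableEq
```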


Utterances for DE context: "No one solved {some/all} problems"

We need scalar alternatives for RSA to reason about informativity.
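
A sketch of the corresponding alternative set, with illustrative constructor names (the file's actual identifiers may differ):

```lean
/-- Scalar alternatives in the downward-entailing frame (illustrative names). -/
inductive UtteranceDE where
  | noOneSome   -- "No one solved some of the problems"
  | noOneAll    -- "No one solved all of the problems"
  deriving Repr, DecidableEq
```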


The Lexical Uncertainty Model #

Each lexicon assigns a meaning to "some": L_base keeps the literal "at least one", while L_refined strengthens it to "some but not all".

The listener reasons over which lexicon the speaker is using.
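
As a sketch, the two lexica can be represented as world-level meanings for the "some" alternative, read as "someone solved some, per this lexicon" (the identifiers below are illustrative, not the library's):

```lean
/-- L_base: "some" keeps its literal "at least one" meaning. -/
def someBase : SomeAllWorld → Bool
  | .none       => false
  | .someNotAll => true
  | .all        => true   -- solving all entails solving at least one

/-- L_refined: "some" is strengthened to "some but not all". -/
def someRefined : SomeAllWorld → Bool
  | .none       => false
  | .someNotAll => true
  | .all        => false  -- the strengthened meaning excludes the all-world
```

The uncertainty listener places a prior over these two lexica and marginalizes it out; the equations are spelled out under Analysis of Results below.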

Utterances for UE context: "Someone solved {some/all} problems"
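
Mirroring the DE set, again with illustrative names:

```lean
/-- Scalar alternatives in the upward-entailing frame (illustrative names). -/
inductive UtteranceUE where
  | someoneSome  -- "Someone solved some of the problems"
  | someoneAll   -- "Someone solved all of the problems"
  deriving Repr, DecidableEq
```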


Analysis of Results #

With α = 1 and uniform priors, the simplified 2-lexicon model gives INVERTED predictions compared to the empirical pattern. This motivates the need for richer model structure.
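
For reference, the lexical-uncertainty listener assumed in this analysis is the standard formulation of @cite{bergen-levy-goodman-2016} (sketched here; the RSAConfig re-implementation may package it differently):

$$
\begin{aligned}
L_0(w \mid u, \mathcal{L}) &\propto [\![u]\!]^{\mathcal{L}}(w)\, P(w) \\
S_1(u \mid w, \mathcal{L}) &\propto \exp\!\big(\alpha \log L_0(w \mid u, \mathcal{L})\big) \\
L_1(w \mid u) &\propto P(w) \sum_{\mathcal{L}} P(\mathcal{L})\, S_1(u \mid w, \mathcal{L})
\end{aligned}
$$

With α = 1, the speaker term is simply L_0(w | u, 𝓛) renormalized over the utterance alternatives.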

DE Context ("No one solved some"):

UE Context ("Someone solved some"):

Why This Happens #

The key asymmetry is world coverage. In the DE context, L_refined makes the utterance true in MORE worlds, so even though L_base is more informative, L_refined gets extra probability mass.
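
Concretely, continuing the sketch above, the truth set of the DE utterance under each lexicon can be checked directly (hypothetical #eval checks using the illustrative definitions from earlier):

```lean
/-- "No one solved some" holds in a world iff that world does not verify
    "someone solved some" under the chosen lexicon. -/
def noOneSolvedSome (someMeaning : SomeAllWorld → Bool) (w : SomeAllWorld) : Bool :=
  !(someMeaning w)

#eval [SomeAllWorld.none, .someNotAll, .all].filter (noOneSolvedSome someBase)
-- truth set under L_base: just .none (1 world, maximally informative)
#eval [SomeAllWorld.none, .someNotAll, .all].filter (noOneSolvedSome someRefined)
-- truth set under L_refined: .none and .all (2 worlds, the extra coverage)
```

Under uniform world priors, that extra covered world is what hands L_refined the additional posterior mass in the DE case; in the UE frame the coverage relation of this sketch flips, with the refined "some" true in fewer worlds than the literal one.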

What Potts et al. Actually Does #

The paper succeeds because of richer model structure:

1. Multiple refinable items: Not just "some", but also proper names, predicates like "scored" vs "aced" (equation 14)

2. Richer world space: 3 players × 3 outcomes = 10 equivalence classes (see the sketch after this list)

3. Message alternatives: Full cross-product of quantifiers and predicates

4. Low λ = 0.1: Speaker nearly uniform, so implicatures emerge from lexicon structure, not informativity
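
On point 2, the count of 10 falls out if worlds are identified up to which particular player did what, i.e. as unordered outcome profiles; this is an assumption about how the paper collapses its state space, but the arithmetic is easy to check:

```lean
/-- Count unordered outcome profiles: encode each of the 3^3 = 27 ordered
    triples of player outcomes in base 3 and keep only the non-decreasing
    representatives, one per equivalence class. -/
#eval ((List.range 27).filter fun n =>
    let a := n / 9
    let b := n / 3 % 3
    let c := n % 3
    decide (a ≤ b ∧ b ≤ c)).length
-- expected: 10
```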