RSA Embedded Scalar Implicatures: Simplified Model (For Analysis) #
@cite{bergen-levy-goodman-2016} @cite{geurts-2010} @cite{potts-etal-2016}
This file implements a simplified 2-lexicon model to analyze why minimal Lexical Uncertainty models fail to derive embedded implicature patterns.
Status #
The ℚ-based RSA evaluation infrastructure (RSA.Eval, boolToRat, LURSA) has been removed. Type definitions and the model limitation analysis are preserved. RSA computations need to be re-implemented using the new RSAConfig framework.
This File's Purpose #
Demonstrates that a minimal 2-lexicon, 3-world model gives inverted predictions, motivating the richer structure in the full model.
World states for embedded scalar scenarios use the canonical SomeAllWorld from Phenomena.ScalarImplicatures.Basic:
- .none : nobody solved any problems
- .someNotAll : someone solved some but not all problems
- .all : someone solved all problems
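As a point of reference, the canonical world type presumably looks like the following sketch (the actual definition lives in Phenomena.ScalarImplicatures.Basic):

```lean
/-- Worlds for the some/all scalar scenario (sketch of the canonical type). -/
inductive SomeAllWorld where
  | none        -- nobody solved any problems
  | someNotAll  -- someone solved some but not all problems
  | all         -- someone solved all problems
  deriving DecidableEq, Repr
```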
Utterances for DE context: "No one solved {some/all} problems"
We need scalar alternatives for RSA to reason about informativity.
- noSome : DEUtterance ("No one solved some problems")
- noAll : DEUtterance ("No one solved all problems")
- null : DEUtterance (a trivially true null alternative)
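Under the stated constructors, the utterance type can be sketched as:

```lean
/-- Utterances for the downward-entailing context (sketch). -/
inductive DEUtterance where
  | noSome  -- "No one solved some problems"
  | noAll   -- "No one solved all problems"
  | null    -- trivially true null alternative
  deriving DecidableEq, Repr
```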
Instances For
Equations
- instDecidableEqDEUtterance x✝ y✝ = if h : x✝.ctorIdx = y✝.ctorIdx then isTrue ⋯ else isFalse ⋯
Equations
- One or more equations did not get rendered due to their size.
Instances For
The Lexical Uncertainty Model #
Each lexicon L assigns meanings to "some":
- L_base: "some" = at-least-one (literal)
- L_refined: "some" = some-but-not-all (Neo-Gricean strengthened)
The listener reasons over which lexicon the speaker is using.
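Concretely, the lexical-uncertainty listener chains a literal listener and a pragmatic speaker, then marginalizes over lexica. A standard formulation (after Bergen, Levy & Goodman 2016), with α the speaker's rationality parameter:

```latex
\begin{aligned}
L_0(w \mid u, \mathcal{L}) &\propto \llbracket u \rrbracket_{\mathcal{L}}(w)\, P(w)\\
S_1(u \mid w, \mathcal{L}) &\propto L_0(w \mid u, \mathcal{L})^{\alpha}\\
L_1(w \mid u) &\propto P(w) \sum_{\mathcal{L}} P(\mathcal{L})\, S_1(u \mid w, \mathcal{L})
\end{aligned}
```

The analysis in this file sets α = 1 with uniform priors over worlds and lexica.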
Base lexicon meaning: "some" = at-least-one
"No one solved some problems" under L_base:
- True only when nobody solved any problems
Equations
- One or more equations did not get rendered due to their size.
- lexBaseMeaning .noSome .none = true
- lexBaseMeaning .noSome .all = false
- lexBaseMeaning .noAll .none = true
- lexBaseMeaning .noAll .all = false
- lexBaseMeaning .null x✝ = true

(Namespaces Phenomena.ScalarImplicatures.Embedded.Simplified and Phenomena.ScalarImplicatures omitted for readability.)
Instances For
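Reading off the equations above, the base meaning can be sketched as (assuming the SomeAllWorld and DEUtterance types from this file):

```lean
/-- Sketch of the base lexicon meaning: "some" = at-least-one,
    so "No one solved some" holds only when nobody solved any. -/
def lexBaseMeaning : DEUtterance → SomeAllWorld → Bool
  | .noSome, w => w == .none   -- nobody solved even one problem
  | .noAll,  w => w != .all    -- nobody solved all of them
  | .null,   _ => true         -- null alternative is trivially true
```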
Refined lexicon meaning: "some" = some-but-not-all
"No one solved some problems" under L_refined:
- True when nobody solved "some-but-not-all"
- This is TRUE when someone solved ALL (they didn't solve "some-but-not-all")!
Equations
- One or more equations did not get rendered due to their size.
- lexRefinedMeaning .noSome .none = true
- lexRefinedMeaning .noSome .all = true
- lexRefinedMeaning .noAll .none = true
- lexRefinedMeaning .noAll .all = false
- lexRefinedMeaning .null x✝ = true
Instances For
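Again reading off the equations, the refined meaning can be sketched as:

```lean
/-- Sketch of the refined lexicon meaning: "some" = some-but-not-all,
    so "No one solved some" is also true when someone solved ALL. -/
def lexRefinedMeaning : DEUtterance → SomeAllWorld → Bool
  | .noSome, w => w != .someNotAll  -- true in {none, all}
  | .noAll,  w => w != .all
  | .null,   _ => true
```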
Equations
- instDecidableEqUEUtterance x✝ y✝ = if h : x✝.ctorIdx = y✝.ctorIdx then isTrue ⋯ else isFalse ⋯
Equations
- One or more equations did not get rendered due to their size.
Instances For
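The UE utterance type is not spelled out above; judging from the equations for the UE meanings, it presumably has three constructors (a hedged sketch):

```lean
/-- Utterances for the upward-entailing context (sketch). -/
inductive UEUtterance where
  | someSome  -- "Someone solved some problems"
  | someAll   -- "Someone solved all problems"
  | null      -- trivially true null alternative
  deriving DecidableEq, Repr
```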
Base lexicon meaning for UE: "some" = at-least-one
Equations
- One or more equations did not get rendered due to their size.
- lexBaseUEMeaning .someSome .all = true
- lexBaseUEMeaning .someAll .none = false
- lexBaseUEMeaning .someAll .all = true
- lexBaseUEMeaning .null x✝ = true
Instances For
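A sketch consistent with the listed equations:

```lean
/-- Sketch of the base UE meaning: "Someone solved some" needs only
    at least one solver of at least one problem. -/
def lexBaseUEMeaning : UEUtterance → SomeAllWorld → Bool
  | .someSome, w => w != .none  -- someone solved at least one
  | .someAll,  w => w == .all
  | .null,     _ => true
```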
Refined lexicon meaning for UE: "some" = some-but-not-all
Equations
- One or more equations did not get rendered due to their size.
- lexRefinedUEMeaning .someAll .all = true
- lexRefinedUEMeaning .null x✝ = true
Instances For
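Only the someAll and null equations are listed above, so the someSome clause below is an assumption (the natural strengthened reading); a hedged sketch:

```lean
/-- Sketch of the refined UE meaning: "some" = some-but-not-all.
    The .someSome clause is assumed, not read off the equations. -/
def lexRefinedUEMeaning : UEUtterance → SomeAllWorld → Bool
  | .someSome, w => w == .someNotAll  -- assumed strengthened reading
  | .someAll,  w => w == .all
  | .null,     _ => true
```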
Analysis of Results #
With α = 1 and uniform priors, the simplified 2-lexicon model gives INVERTED predictions compared to the empirical pattern. This motivates the need for richer model structure.
DE Context ("No one solved some"):
- The simple model predicts L_refined (the local reading) wins -- the opposite of the empirical pattern, where the literal/global reading dominates in DE contexts
UE Context ("Someone solved some"):
- The simple model predicts L_base (the global reading) wins -- the opposite of the empirical pattern, where the strengthened/local reading is readily available in UE contexts
Why This Happens #
The key asymmetry is world coverage:
In DE:
- L_base: noSome true in {none} -- 1 world
- L_refined: noSome true in {none, all} -- 2 worlds
L_refined makes the utterance true in MORE worlds, so even though L_base's noSome is strictly more informative, L_refined collects extra probability mass from the additional world.
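The coverage asymmetry can be checked mechanically. A hypothetical helper (not part of this file), written against the file's lexicon meanings:

```lean
/-- Count the worlds in which an utterance is true under a lexicon
    (hypothetical helper, not in the file). -/
def coverage (lex : DEUtterance → SomeAllWorld → Bool)
    (u : DEUtterance) : Nat :=
  ([SomeAllWorld.none, .someNotAll, .all].filter (lex u)).length

#eval coverage lexBaseMeaning .noSome     -- expected: 1 ({none})
#eval coverage lexRefinedMeaning .noSome  -- expected: 2 ({none, all})
```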
What Potts et al. Actually Does #
The paper succeeds because of richer model structure:
- Multiple refinable items: not just "some", but also proper names and predicates like "scored" vs. "aced" (their equation 14)
- Richer world space: 3 players × 3 outcomes; the 27 ordered outcome triples collapse to 10 unordered equivalence classes (multisets of size 3 over 3 outcomes)
- Message alternatives: the full cross-product of quantifiers and predicates
- Low λ = 0.1: the speaker is nearly uniform, so implicatures emerge from lexicon structure rather than from informativity