Bayesian Semantics: Graded Truth from Parameter Uncertainty #
This module explores approaches where graded truth values emerge from uncertainty over predicate parameters, rather than being stipulated directly.
Background #
Three approaches to probabilistic/graded semantics:
1. Stipulated Graded Values #
- Define `P : Entity → ℝ≥0∞` directly
- Compose using probabilistic operations (×, p + q − pq)
- Simple but ad hoc
2. Marginalize over Parameters #
- Define Boolean semantics `P_θ : Entity → Bool`
- Put a prior over θ
- Graded truth = E_θ[P_θ(x)]
- More principled but still separate from composition
3. Probability Monad (Grove's PDS) #
- Standard compositional semantics (CCG + λ-calculus)
- Probability as a monadic effect: `P α`
- Compiles to probabilistic programs (Stan)
- Most principled: separates semantics from inference
This Module #
We implement approach #2 as a stepping stone. It shows that graded values can emerge from Boolean semantics + uncertainty, connecting to Lassiter & Goodman's result: "threshold semantics + uncertainty = graded semantics".
Priors are mathlib `PMF`s (valued in ℝ≥0∞), matching the rest of the probability layer.
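The parameterized-predicate interface that the `toParamPred` conversions below target can be sketched as follows. This is a hedged reconstruction: the field names `semantics` and `prior` match the equations later on this page, but the structure name `ParamPred` and the `∑'` form of `gradedTruth` are assumptions.

```lean
import Mathlib.Probability.ProbabilityMassFunction.Basic

/-- Sketch of a parameterized predicate: Boolean semantics indexed by a
parameter `θ`, plus a mathlib `PMF` prior over `θ`. -/
structure ParamPred (Θ E : Type*) where
  semantics : Θ → E → Bool
  prior : PMF Θ

/-- Graded truth as the prior mass of parameters under which the predicate
holds: `E_θ[P_θ(x)]`, valued in `ℝ≥0∞` like the rest of the probability
layer. -/
noncomputable def ParamPred.gradedTruth {Θ E : Type*}
    (p : ParamPred Θ E) (x : E) : ℝ≥0∞ :=
  ∑' θ, if p.semantics θ x then p.prior θ else 0
```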
A threshold predicate: "x has property P iff measure(x) > θ"
This models gradable adjectives like "tall", "expensive", etc.
- measure : E → Θ
The measure function (e.g., height, price)
- thresholdPrior : PMF Θ
Prior over thresholds
Instances For
Convert to a parameterized predicate
Equations
- pred.toParamPred = { semantics := fun (θ : Θ) (x : E) => decide (pred.measure x > θ), prior := pred.thresholdPrior }
Instances For
Graded truth for threshold predicates
Equations
- pred.gradedTruth x = pred.toParamPred.gradedTruth x
Instances For
Graded truth equals probability that measure > threshold.
For "tall(John)": P(height(John) > θ) under the threshold prior.
A feature-based predicate where entities are characterized by features and the predicate is a linear classifier.
Entity satisfaction: bias + Σᵢ weight_i · feature_i(x) > 0
This models the Bernardy et al. approach where predicates are hyperplanes.
- features : E → Fin n → ℚ
Extract feature vector from entity
- params : Type u_2
Predicate parameters: bias and weights
- paramsFintype : Fintype self.params
Fintype instance for params
- paramsDecEq : DecidableEq self.params
- bias : self.params → ℚ
Bias for each parameter setting
- weights : self.params → Fin n → ℚ
Weights for each parameter setting
- prior : PMF self.params
Prior over parameters
Instances For
Boolean semantics for a specific parameter setting: the linear classifier
bias + Σᵢ weight_i · feature_i(x) > 0.
Equations
Instances For
Convert to parameterized predicate
Equations
- pred.toParamPred = { semantics := pred.satisfies, prior := pred.prior }
Instances For
Graded truth emerges from parameter uncertainty
Equations
- pred.gradedTruth x = pred.toParamPred.gradedTruth x
Instances For
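A small hypothetical instance of the feature-based predicate, assuming a `FeaturePred E n` structure with the fields listed above: a one-feature "expensive" classifier with two equally likely parameter settings that differ only in bias.

```lean
/-- One-feature toy: entities are prices, and the single feature is the
price itself (hypothetical example). -/
noncomputable def expensive : FeaturePred ℚ 1 where
  features := fun x _ => x
  params := Bool
  paramsFintype := inferInstance
  paramsDecEq := inferInstance
  -- Two hyperplanes: "price > 50" (bias −50) vs "price > 100" (bias −100).
  bias := fun b => if b then -50 else -100
  weights := fun _ _ => 1
  prior := PMF.uniformOfFintype Bool
```

A price of 80 satisfies the bias −50 classifier (−50 + 80 > 0) but not the bias −100 one (−100 + 80 < 0), so its graded truth under the uniform prior is 1/2.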
Compositional Structure #
@cite{bernardy-blanck-chatzikyriakidis-lappin-2018}
Two Strategies #
Marginalize early:
- Convert to graded values immediately
- Compose using probabilistic operations (×, p + q − pq)
- Algebraic structure (De Morgan, etc.)
Marginalize late (this module):
- Keep Boolean semantics during composition
- Parameter uncertainty tracked but not resolved
- Graded values emerge only when needed
Probability monad (Grove's PDS):
- Standard compositional semantics
- Probability as monadic effect
- Compiles to probabilistic programs
Strategy #3 is most principled but requires more infrastructure. This module implements #2 as an intermediate step.
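The difference between the first two strategies is observable on any sentence whose conjuncts share a parameter. A self-contained sketch (all names here are hypothetical illustration, not module definitions): for "P(x) and not P(x)" under a shared threshold, late marginalization gives exactly 0, while early marginalization composes with × and loses the correlation.

```lean
import Mathlib.Probability.ProbabilityMassFunction.Basic

/-- Uniform prior over three thresholds (hypothetical example data). -/
noncomputable def prior : PMF (Fin 3) := PMF.uniformOfFintype (Fin 3)

/-- "tall" holds at thresholds 0 and 1, fails at 2. -/
def tallAt (θ : Fin 3) : Bool := decide ((θ : ℕ) < 2)

/-- Late: compose first, marginalize after.  No `θ` satisfies
`tall ∧ ¬tall`, so the graded value is 0. -/
noncomputable def lateValue : ℝ≥0∞ :=
  ∑' θ, if tallAt θ && !tallAt θ then prior θ else 0

/-- Early: marginalize each conjunct, then compose with ×.  This treats
the conjuncts as independent and yields (2/3) · (1/3) = 2/9 ≠ 0. -/
noncomputable def earlyValue : ℝ≥0∞ :=
  (∑' θ, if tallAt θ then prior θ else 0) *
  (∑' θ, if !tallAt θ then prior θ else 0)
```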