Linglib.Theories.Semantics.Probabilistic.BayesianSemantics

Bayesian Semantics: Graded Truth from Parameter Uncertainty #

This module explores approaches where graded truth values emerge from uncertainty over predicate parameters, rather than being stipulated directly.

Background #

Three approaches to probabilistic/graded semantics:

1. Stipulated Graded Values #

2. Marginalize over Parameters #

3. Probability Monad (Grove's PDS) #

This Module #

We implement approach #2 as a stepping stone. It shows that graded values can emerge from Boolean semantics + uncertainty, connecting to Lassiter & Goodman's result: "threshold semantics + uncertainty = graded semantics".

Priors are mathlib `PMF`s (ℝ≥0∞-valued), matching the rest of the probability layer.
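The core recipe of approach #2 can be sketched in a few lines. The names below are illustrative, not the module's API: a Boolean semantics indexed by a parameter, together with a `PMF` prior over that parameter, determines a graded value by marginalization.

```lean
import Mathlib

open scoped ENNReal

-- Graded truth as the prior mass of the parameter settings under
-- which the Boolean semantics verifies x.  (Illustrative sketch,
-- not a definition from this module.)
noncomputable def gradedOfBoolean {Θ E : Type*}
    (prior : PMF Θ) (sem : Θ → E → Bool) (x : E) : ℝ≥0∞ :=
  ∑' θ, if sem θ x then prior θ else 0
```

If `sem θ x` holds for every θ the value is 1; if for none, 0. Intermediate values record genuine uncertainty about the parameter rather than stipulated degrees.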

structure Semantics.Montague.BayesianSemantics.ThresholdPred (E : Type u_1) (Θ : Type u_2) [Fintype Θ] [Preorder Θ] :
Type (max u_1 u_2)

A threshold predicate: "x has property P iff measure(x) > θ"

This models gradable adjectives like "tall", "expensive", etc.

  • measure : E → Θ

    The measure function (e.g., height, price)

  • thresholdPrior : PMF Θ

    Prior over thresholds

Instances For
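As a toy instance of the structure above (the entity type, heights, and prior are invented here for illustration; only `ThresholdPred` and its fields `measure` and `thresholdPrior` come from the module):

```lean
import Mathlib

open Semantics.Montague.BayesianSemantics

-- Hypothetical toy model of "tall": two people measured on a
-- five-point scale, with a uniform prior over the five thresholds.
inductive Person
  | ann
  | bob

noncomputable def tall : ThresholdPred Person (Fin 5) where
  measure
    | .ann => 4   -- ann is near the top of the scale
    | .bob => 1   -- bob is near the bottom
  thresholdPrior := PMF.uniformOfFintype (Fin 5)
```

Under this toy prior each threshold has mass 1/5, so `tall.gradedTruth .ann` is 4/5 (ann clears thresholds 0–3) while `tall.gradedTruth .bob` is 1/5.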
def Semantics.Montague.BayesianSemantics.ThresholdPred.toParamPred {E : Type u_1} {Θ : Type u_2} [Fintype Θ] [Preorder Θ] [DecidableRel fun (x1 x2 : Θ) => x1 < x2] (pred : ThresholdPred E Θ) :

Convert to a parameterized predicate

Equations
Instances For

noncomputable def Semantics.Montague.BayesianSemantics.ThresholdPred.gradedTruth {E : Type u_1} {Θ : Type u_2} [Fintype Θ] [Preorder Θ] [DecidableRel fun (x1 x2 : Θ) => x1 < x2] (pred : ThresholdPred E Θ) (x : E) :
ENNReal

Graded truth for threshold predicates

Equations
Instances For

theorem Semantics.Montague.BayesianSemantics.ThresholdPred.gradedTruth_eq_probOfSet {E : Type u_1} {Θ : Type u_2} [Fintype Θ] [Preorder Θ] [DecidableRel fun (x1 x2 : Θ) => x1 < x2] (pred : ThresholdPred E Θ) (x : E) :
pred.gradedTruth x = pred.thresholdPrior.probOfSet {θ : Θ | pred.measure x > θ}

Graded truth equals probability that measure > threshold.

For "tall(John)": P(height(John) > θ) under the threshold prior.
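The identity can be made concrete with a hypothetical numeric case (the heights and prior are invented; the theorem is the one stated above):

```lean
-- Suppose Θ = Fin 5 with a uniform prior (mass 1/5 per threshold)
-- and pred.measure john = 3.  Then, by gradedTruth_eq_probOfSet,
--   pred.gradedTruth john
--     = pred.thresholdPrior.probOfSet {θ | 3 > θ}
--     = P {0, 1, 2}
--     = 3 * (1 / 5)
--     = 3 / 5
-- i.e. "john is tall" is 0.6-true: 3 of the 5 candidate
-- thresholds are cleared.
```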

structure Semantics.Montague.BayesianSemantics.FeaturePred (E : Type u_1) (n : ℕ) :
Type (max u_1 (u_2 + 1))

A feature-based predicate where entities are characterized by features and the predicate is a linear classifier.

Entity satisfaction: bias + Σᵢ weight_i · feature_i(x) > 0

This models the Bernardy et al. approach where predicates are hyperplanes.

  • features : E → Fin n → ℚ

    Extract feature vector from entity

  • params : Type u_2

    Predicate parameters: bias and weights

  • paramsFintype : Fintype self.params

    Fintype instance for params

  • paramsDecEq : DecidableEq self.params
  • bias : self.params → ℚ

    Bias for each parameter setting

  • weights : self.params → Fin n → ℚ

    Weights for each parameter setting

  • prior : PMF self.params

    Prior over parameters

Instances For
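A toy instance of `FeaturePred` (the entities, prices, and the two candidate hyperplanes are invented; the scalar type of features and weights is assumed rational here, since it is not shown above):

```lean
import Mathlib

open Semantics.Montague.BayesianSemantics

inductive Item
  | watch
  | pen

def price : Item → ℚ
  | .watch => 20
  | .pen   => 7

-- One feature (price), two candidate parameter settings: a strict
-- classifier (bias -10) and a lenient one (bias -5), equally likely.
noncomputable def expensive : FeaturePred Item 1 where
  features := fun i _ => price i
  params := Bool
  paramsFintype := inferInstance
  paramsDecEq := inferInstance
  bias := fun strict => if strict then -10 else -5
  weights := fun _ _ => 1
  prior := PMF.uniformOfFintype Bool
```

For the watch, `satisfies` checks −10 + 20 > 0 and −5 + 20 > 0, so both hyperplanes accept it and its graded truth is 1. For the pen, −10 + 7 > 0 fails but −5 + 7 > 0 holds, so its graded truth is the prior mass of the lenient setting, 1/2.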
def Semantics.Montague.BayesianSemantics.FeaturePred.satisfies {E : Type u_1} {n : ℕ} (pred : FeaturePred E n) (p : pred.params) (x : E) :
Bool

Boolean semantics for a specific parameter setting: the linear classifier bias + Σᵢ weight_i · feature_i(x) > 0.

Equations
Instances For

Convert to parameterized predicate

Equations
Instances For

noncomputable def Semantics.Montague.BayesianSemantics.FeaturePred.gradedTruth {E : Type u_1} {n : ℕ} (pred : FeaturePred E n) (x : E) :
ENNReal

Graded truth emerges from parameter uncertainty

Equations
Instances For

Compositional Structure #

@cite{bernardy-blanck-chatzikyriakidis-lappin-2018}

Three Strategies #

1. Marginalize early:

  • Convert to graded values immediately
  • Compose using probabilistic operations (p · q, p + q − pq)
  • Algebraic structure (De Morgan, etc.)
2. Marginalize late (this module):

  • Keep Boolean semantics during composition
  • Parameter uncertainty tracked but not resolved
  • Graded values emerge only when needed
3. Probability monad (Grove's PDS):

  • Standard compositional semantics
  • Probability as monadic effect
  • Compiles to probabilistic programs

Strategy #3 is the most principled but requires more infrastructure. This module implements #2 as an intermediate step.
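The early/late contrast can be sketched for negation using the generic marginalization pattern (the names here are illustrative, not the module's API):

```lean
import Mathlib

open scoped ENNReal

variable {Θ E : Type*} (prior : PMF Θ) (sem : Θ → E → Bool)

-- Late marginalization: compose at the Boolean level (here, negate
-- the Boolean semantics), then marginalize at the end.
noncomputable def lateNeg (x : E) : ℝ≥0∞ :=
  ∑' θ, if !(sem θ x) then prior θ else 0

-- Early marginalization: marginalize first, then apply the
-- probabilistic operation 1 - p for negation.
noncomputable def earlyNeg (x : E) : ℝ≥0∞ :=
  1 - ∑' θ, if sem θ x then prior θ else 0
```

For negation the two agree, since the prior sums to 1. The strategies come apart for binary connectives over predicates that share a parameter: composing marginals with p · q assumes independence, while late marginalization evaluates the conjunction under each single parameter draw.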