Bayesian Observation Models and Posterior Update #
An ergonomic ℝ-typed interface for finite Bayesian inference with
explicit experiment indexing. Implements the same Bayes' rule as mathlib's
canonical PMF.posterior (Core/Probability/Posterior.lean):
P(w|o,e) = prior(w) · P(o|w,e) / Σ_w' prior(w') · P(o|w',e).
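For concreteness, plugging illustrative numbers into this formula (two equally likely worlds, with likelihoods 0.8 and 0.4 for the observed o under experiment e):

P(w₁|o,e) = 0.5 · 0.8 / (0.5 · 0.8 + 0.5 · 0.4) = 0.4 / 0.6 = 2/3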
Relationship to PMF.posterior #
PMF.posterior is the canonical mathlib probability-theoretic primitive
(κ : α → PMF β, μ : PMF α, observation b, returns PMF α). This
module's posterior is mathematically the same Bayes' rule formula but
with three ergonomic differences:
- ℝ instead of ℝ≥0∞: keeps arithmetic in the reals so consumers can use signed value functions ((W → ℝ) → ℝ for utility / entropy theorems) without ENNReal coercions.
- Experiment indexing E: ObservationModel W E O carries an explicit experiment-index parameter that PMF kernels lack. For each fixed e : E, an ObservationModel IS a kernel W → PMF O (see ObservationModel.toPMFKernel).
- Permissive prior: accepts any prior : W → ℝ rather than requiring a PMF W (the formula gives 0 when the marginal is 0).
Use PMF.posterior for general probability work; use this module for
finite Bayesian inference where experiment indexing or (W → ℝ)-typed
value functions are needed (e.g., ExperimentDesign.eig).
Key Results #
- posterior_nonneg: posteriors inherit non-negativity from priors
- posterior_sum_one: posteriors sum to 1 (when the marginal is nonzero)
- posterior_marginalizes_to_prior: the law of total probability; marginalized posteriors reconstruct the prior: ∑ o, P(o|e) · P(w|o,e) = prior(w)
- ObservationModel.toPMFKernel: PMF-typed view of the likelihood at a fixed experiment, enabling interop with PMF.posterior
Usage #
This module is imported by ExperimentDesign.lean (for EIG computation)
and can be imported independently wherever Bayesian updating is needed.
An observation model: how experiments generate observations in different worlds. P(o|w,e) is the probability of observing o when the true world is w and experiment e is conducted.
- likelihood : W → E → O → ℝ
Likelihood: P(o|w,e)
Likelihood is non-negative
Likelihood sums to 1 for each (w,e)
Instances For
Marginal probability of observation o under experiment e: P(o|e) = Σ_w prior(w) · P(o|w,e)
Equations
- Core.marginalObs om prior e o = ∑ w : W, prior w * om.likelihood w e o
Instances For
Bayesian posterior after observing o under experiment e: P(w|o,e) = prior(w) · P(o|w,e) / P(o|e)
Equations
- Core.posterior om prior e o w = if Core.marginalObs om prior e o = 0 then 0 else prior w * om.likelihood w e o / Core.marginalObs om prior e o
Instances For
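As a concrete sketch of these definitions, consider an 80%-accurate binary sensor over two worlds. The name noisySensor and the 0.8/0.2 numbers are illustrative, and the field proof obligations are left as holes:

```lean
-- Sketch (illustrative names and numbers): a two-world observation
-- model built from the Core definitions above.
noncomputable def noisySensor : Core.ObservationModel Bool Unit Bool where
  likelihood w _ o := if o = w then 0.8 else 0.2  -- P(o|w,e)
  likelihood_nonneg := by sorry  -- each branch is ≥ 0
  likelihood_sum := by sorry     -- 0.8 + 0.2 = 1 for each (w,e)

-- Posterior over worlds after observing `true` under a uniform prior.
-- By Bayes' rule: marginal = 0.5·0.8 + 0.5·0.2 = 0.5, so the posterior
-- puts mass 0.8 on `true` and 0.2 on `false`.
noncomputable def post : Bool → ℝ :=
  Core.posterior noisySensor (fun _ => (1 : ℝ) / 2) () true
```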
Marginal observation probability is non-negative when prior is.
Posterior is non-negative when prior is.
Posterior sums to 1 when marginal is nonzero and prior is non-negative.
The law of total probability for posteriors: marginalized posteriors reconstruct the prior.
Σ_o P(o|e) · P(w|o,e) = prior(w)
This is the key identity underlying EIG non-negativity.
A deterministic observation model from a classifier: each world produces exactly the observation corresponding to its classification.
This is the partition-cell case: classify assigns each world to a cell,
and the observation reveals which cell the world belongs to.
Equations
- Core.deterministicObs classify = { likelihood := fun (w : W) (x : Unit) (o : O) => if classify w = o then 1 else 0, likelihood_nonneg := ⋯, likelihood_sum := ⋯ }
Instances For
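For instance (hypothetical names), a parity classifier on four worlds; observing the cell zeroes out the posterior outside it:

```lean
-- Sketch (illustrative): classify worlds in `Fin 4` by parity, using
-- `Core.deterministicObs` as defined above.
noncomputable def parityObs : Core.ObservationModel (Fin 4) Unit Bool :=
  Core.deterministicObs (fun w => w.val % 2 == 0)

-- Uniform prior, observe `true` (the even cell): the posterior is
-- 1/2 on worlds 0 and 2, and 0 on worlds 1 and 3.
noncomputable def parityPost : Fin 4 → ℝ :=
  Core.posterior parityObs (fun _ => (1 : ℝ) / 4) () true
```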
PMF interop #
Convert an ObservationModel at a fixed experiment to a mathlib
PMF-typed kernel W → PMF O. Bridges to PMF.posterior for any
downstream use that requires the canonical mathlib type.
The PMF view of an ObservationModel at a fixed experiment e.
For each world w, returns the PMF O whose mass at o is
om.likelihood w e o. Bridges this module's ℝ-typed Bayesian
machinery to mathlib's canonical PMF.posterior.
Equations
- om.toPMFKernel e w = PMF.ofFintype (fun (o : O) => ENNReal.ofReal (om.likelihood w e o)) ⋯
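A sketch of the round trip to mathlib's type, assuming the PMF.posterior argument order described above (kernel, prior, observation); any additional instance assumptions on W and O are elided:

```lean
-- Sketch: reuse this module's likelihood with the canonical
-- PMF-typed posterior at a fixed experiment `e`.
example {W E O : Type*} [Fintype W] [Fintype O]
    (om : Core.ObservationModel W E O) (e : E) (μ : PMF W) (o : O) :
    PMF W :=
  PMF.posterior (om.toPMFKernel e) μ o
```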