Product of Experts on PMF #
@cite{hinton-2002} @cite{erk-herbelot-2024}
Combine two PMFs over the same type by pointwise multiplication followed by renormalisation. Symmetric in the two factors. Used in product-of-experts neural-network models (@cite{hinton-2002}) and in distributional semantics when fusing concept-cue and context-cue distributions (@cite{erk-herbelot-2024} fn 10).
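Explicitly (notation ours, not from the file), the combined distribution is

$$
\operatorname{PoE}(p, q)(a) \;=\; \frac{p(a)\,q(a)}{Z},
\qquad Z \;=\; \sum_{b} p(b)\,q(b),
$$

which is well defined exactly when $Z > 0$, i.e. when some point carries non-zero mass under both factors.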
PoE is not a posterior — there is no observation, no kernel, no
direction. It is the symmetric pointwise product of two PMFs over a shared
type, which is why this construction lives in its own file rather than in
Posterior.lean.
The construction factors through `PMF.reweight` (defined in
Posterior.lean): `p.productOfExperts q h_pos = p.reweight (fun a => q a) h_pos`.
Main definitions #
* `PMF.productOfExperts p q h_pos`: the pointwise product `(p × q) / Z`, where `Z = ∑' a, p a * q a`.
Main theorems #
* `productOfExperts_apply`: explicit formula.
* `productOfExperts_comm`: symmetry in the two factors.
* `mem_support_productOfExperts_iff`: support is the intersection of the factor supports (the disjoint-supports caveat of @cite{erk-herbelot-2024} fn 10).
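As a rough sketch of what the definition has to produce, here is a hypothetical standalone version written directly against Mathlib's `PMF` (a function into `ℝ≥0∞` that sums to 1), bypassing `PMF.reweight`; the normalisation proof is elided, and the name and hypothesis form are illustrative only:

```lean
import Mathlib.Probability.ProbabilityMassFunction.Basic

namespace PMF

variable {α : Type*}

/-- Hypothetical sketch: pointwise product of two PMFs, renormalised by
`Z = ∑' a, p a * q a`. Illustrative only; the actual file defines
`productOfExperts` via `PMF.reweight`. -/
noncomputable def productOfExpertsSketch (p q : PMF α)
    (h_pos : (∑' a, p a * q a) ≠ 0) : PMF α :=
  ⟨fun a => p a * q a / ∑' b, p b * q b, by
    -- Proof elided in this sketch. In `ℝ≥0∞` one pulls the constant
    -- divisor out of the sum and uses `h_pos` together with
    -- `Z ≤ ∑' a, p a = 1` (so `Z ≠ ∞`) to conclude `Z / Z = 1`.
    sorry⟩

end PMF
```

Note that `h_pos` here is phrased as non-vanishing of the normaliser `Z`; it is equivalent to the "some point has non-zero mass under both factors" precondition stated below.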
Product of Experts: combine two PMFs over the same type by multiplying
mass at each point and renormalising. Symmetric in `p`, `q`. The crucial
precondition (paper @cite{erk-herbelot-2024} fn 10): at least one point
must have non-zero mass under both factors.
Equations
- p.productOfExperts q h_pos = p.reweight (fun (a : α) => q a) h_pos ⋯
Instances For
PoE is commutative in the two factors (modulo the positivity hypothesis, which is itself symmetric).
PoE support: points with non-zero mass under both factors. The formal content of @cite{erk-herbelot-2024} fn 10's caveat about disjoint supports.
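To see the disjoint-supports caveat concretely, here is a small worked example (illustrative, not from the file) over three points:

$$
p = \left(\tfrac12, \tfrac12, 0\right), \qquad
q = \left(0, \tfrac12, \tfrac12\right), \qquad
Z = \tfrac12 \cdot 0 + \tfrac12 \cdot \tfrac12 + 0 \cdot \tfrac12 = \tfrac14,
$$

so the product of experts puts all its mass on the middle point: $(0, 1, 0)$, matching the support-intersection theorem. If instead $q = (0, 0, 1)$, the two supports are disjoint, $Z = 0$, and no renormalisation is possible; this is exactly the case ruled out by `h_pos`.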