Linglib.Core.Agent.GaussianChoice

Gaussian Choice Bridge @cite{luce-1959} #

Connects the two Gaussian choice models formalized independently in SignalDetection.lean (§2.E) and Thurstone.lean (§2.D):

Luce (§2.E, p. 60) notes that SDT is "essentially Thurstone's discriminal process theory applied to the two-alternative detection context." This file makes that connection precise via four results:

  1. SDT as Thurstone (2AFC): The two-alternative forced choice (2AFC) correct-response probability Φ(d'/√2) equals Thurstone's choiceProb for signal vs noise with unit per-stimulus variance.
  2. Yes/No SDT as Thurstone: The yes/no hit rate Φ(d'/2 - c) equals Thurstone's choiceProb with σ = 1/√2.
  3. Logistic constant unification: SDT's logisticApproxConst = π/√3 equals Thurstone's thurstoneLuceK(1/√2).
  4. Shared softmax embedding: Both approximate the same RationalAction under the logistic-normal approximation.

Two-Alternative Forced Choice (2AFC) as Thurstone #

noncomputable def Core.SDTModel.twoAFC (m : SDTModel) :

Two-alternative forced choice (2AFC) correct-response probability.

In 2AFC, the observer sees two intervals — one containing signal+noise (from N(d'/2, 1)) and one containing noise only (from N(-d'/2, 1)) — and identifies which had the signal. The correct response probability is:

P(correct) = P(X_signal > X_noise) = Φ(d' / √2)

since X_signal - X_noise ~ N(d', 2), so (X_signal - X_noise)/√2 ~ N(d'/√2, 1).

Equations
Instances For
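The derivation above can be checked numerically. A minimal sketch (the helper `two_afc` is an illustrative Python analogue, not the Lean definition):

```python
from statistics import NormalDist

PHI = NormalDist().cdf  # standard normal CDF Φ

def two_afc(d_prime: float) -> float:
    """2AFC correct-response probability: Φ(d'/√2)."""
    return PHI(d_prime / 2 ** 0.5)

# Cross-check the derivation: X_signal - X_noise ~ N(d', 2), so
# P(correct) = P(N(d', 2) > 0) = Φ(d'/√2).
p_direct = 1 - NormalDist(mu=1.0, sigma=2 ** 0.5).cdf(0.0)
assert abs(two_afc(1.0) - p_direct) < 1e-12
```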
    theorem Core.SDTModel.twoAFC_ge_half (m : SDTModel) (h_nonneg : 0 ≤ m.dPrime) :
    1 / 2 ≤ m.twoAFC

    2AFC probability is at least 1/2 when d' ≥ 0: the observer performs at or above chance. The 0 ≤ m.dPrime hypothesis is explicit (not a structure invariant); see the docstring on SDTModel.

    noncomputable def Core.SDTModel.asThurstone2AFC (m : SDTModel) :

    Construct a Thurstone Case V model for the 2AFC detection task.

    Signal and noise are treated as two stimuli with scale values d'/2 and -d'/2, both with unit discriminal dispersion (σ = 1). Fin 2: 0 = signal, 1 = noise.

    Equations
    Instances For

      SDT 2AFC = Thurstone Case V: the 2AFC correct-response probability equals the Thurstone choice probability for signal vs noise.

      Both reduce to Φ(d' / √2):

      • 2AFC: Φ(d' / √2) (by definition)
      • Thurstone: Φ((d'/2 - (-d'/2)) / (1 · √2)) = Φ(d' / √2)
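Both reductions can be confirmed numerically. In this sketch, `thurstone_choice_prob` is an illustrative helper mirroring the Case V formula, not the Lean `choiceProb`:

```python
from statistics import NormalDist

PHI = NormalDist().cdf

def thurstone_choice_prob(u_a, u_b, sigma=1.0):
    """Thurstone Case V: Φ((u_a - u_b) / (σ·√2))."""
    return PHI((u_a - u_b) / (sigma * 2 ** 0.5))

d = 1.3
# Scales d'/2 and -d'/2 with σ = 1 reproduce the 2AFC value Φ(d'/√2)
assert abs(thurstone_choice_prob(d / 2, -d / 2) - PHI(d / 2 ** 0.5)) < 1e-12
```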

      Yes/No SDT as Thurstone #

      noncomputable def Core.SDTModel.asThurstoneYesNo (m : SDTModel) :

      Construct a Thurstone Case V model for the yes/no SDT task.

      The yes/no task (observe, then decide signal vs noise) is equivalent to a Thurstone model with σ = 1/√2 (so that σ√2 = 1), where:

      • scale(signal) = d'/2 - c (effective signal advantage)
      • scale(noise) = 0

      Fin 2: 0 = signal, 1 = noise.
      Equations
      Instances For

        Yes/No SDT = Thurstone: the hit rate equals the Thurstone choice probability for the yes/no model.

        SDT hit rate: 1 - Φ(c - d'/2) = Φ(d'/2 - c) (by CDF symmetry). Thurstone with σ = 1/√2: Φ((d'/2 - c) / ((1/√2) · √2)) = Φ(d'/2 - c).

        The key identity is (1/√2) · √2 = 1, so the Thurstone denominator is 1 and the two expressions coincide.
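A numeric sketch of the same equality (helper names are illustrative, not the Lean definitions):

```python
from statistics import NormalDist

PHI = NormalDist().cdf
SQRT2 = 2 ** 0.5

def hit_rate(d, c):
    """SDT yes/no hit rate: 1 - Φ(c - d'/2) = Φ(d'/2 - c)."""
    return 1 - PHI(c - d / 2)

def thurstone_yes_no(d, c):
    """Thurstone with scales d'/2 - c and 0, and σ = 1/√2 (so σ·√2 = 1)."""
    sigma = 1 / SQRT2
    return PHI(((d / 2 - c) - 0.0) / (sigma * SQRT2))

for d, c in [(1.5, 0.25), (0.8, -0.4), (2.0, 0.0)]:
    assert abs(hit_rate(d, c) - thurstone_yes_no(d, c)) < 1e-12
```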

        Logistic Approximation Constants #

        The SDT logistic approximation constant π/√3 equals Thurstone's thurstoneLuceK when σ = 1/√2.

        This is because: thurstoneLuceK(1/√2) = π / ((1/√2) · √6) = π · √2 / √6 = π / √3

        The identity √2 / √6 = 1/√3 follows from √2 · √3 = √6.

        Significance: the logistic approximation that connects SDT to the Luce model (§2.E) is exactly the same as the approximation that connects Thurstone to the Luce model (§2.D), when we use the yes/no SDT parameterization (σ = 1/√2).
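The constant identity is a one-line numeric check (`thurstone_luce_k` mirrors the formula k = π/(σ√6) from the text; it is not the Lean `thurstoneLuceK` itself):

```python
import math

def thurstone_luce_k(sigma):
    """k = π / (σ·√6), the logistic-matching constant."""
    return math.pi / (sigma * math.sqrt(6))

# thurstoneLuceK(1/√2) = π·√2/√6 = π/√3, the SDT logistic constant
assert abs(thurstone_luce_k(1 / math.sqrt(2)) - math.pi / math.sqrt(3)) < 1e-12
```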

        Shared Softmax Embedding #

        theorem Core.SDTModel.twoAFC_mono (m₁ m₂ : SDTModel) (h : m₁.dPrime < m₂.dPrime) :
        m₁.twoAFC < m₂.twoAFC

        2AFC models with different d' can be compared via the Thurstone ordering.

        Since twoAFC = Thurstone.choiceProb, and Thurstone satisfies strong stochastic transitivity, higher d' implies higher 2AFC P(correct).

        Proof sketch: a larger d' gives a larger scale difference scale(signal) - scale(noise) = d', hence a larger Thurstone argument d'/√2, and Φ is strictly monotone.

        Connection to AUC: under the equal-variance Gaussian SDT model, the area under the ROC curve equals Φ(d'/√2) (@cite{green-swets-1966}; @cite{macmillan-creelman-2005}, ch. 3). This theorem therefore proves that AUC is strictly monotone in d'. The AUC integral identity itself — ∫₀¹ rocCurve d' f df = Φ(d'/√2) — is correct but unproved here; integrating rocCurve requires additional measure-theoretic infrastructure not currently developed.
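Though unproved in Lean, the AUC integral identity can be checked numerically under the equal-variance model. Here `roc_curve` is an illustrative Python analogue of rocCurve (hit rate as a function of false-alarm rate), not the Lean definition:

```python
from statistics import NormalDist

PHI = NormalDist().cdf
PHI_INV = NormalDist().inv_cdf

def roc_curve(d, f):
    """Equal-variance Gaussian ROC: hit rate at false-alarm rate f."""
    return PHI(d + PHI_INV(f))

d = 1.0
n = 50_000  # midpoint rule on (0, 1)
auc = sum(roc_curve(d, (i + 0.5) / n) for i in range(n)) / n
assert abs(auc - PHI(d / 2 ** 0.5)) < 1e-4   # AUC = Φ(d'/√2)
```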

        Random Utility Model Unification #

        Both Thurstone and Luce choice models are Random Utility Models (RUMs) — they differ only in the noise distribution:

        | Model       | Noise Distribution | Choice Probability    | Reference       |
        | ----------- | ------------------ | --------------------- | --------------- |
        | Thurstone V | Gaussian(0, σ²)    | Φ((u_a-u_b)/(σ√2))    | Thurstone.lean  |
        | Gumbel-Luce | Gumbel(0, β)       | logistic((u_a-u_b)/β) | GumbelLuce.lean |

        The Gumbel-Luce model gives exactly the softmax (Luce) choice rule (McFadden's theorem, mcfaddenIntegral_eq_softmax). The Thurstone model gives the normal CDF. These agree up to the Gaussian-logistic approximation Φ(y·√3/π) ≈ logistic(y) (max error ~0.023, variance matching; see thurstone_luce_identity).
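The quoted maximum error of the variance-matched approximation can be reproduced with a grid search (a numeric sketch, not the Lean statement of thurstone_luce_identity):

```python
import math

def logistic(y):
    return 1.0 / (1.0 + math.exp(-y))

def phi(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2)))

# Variance-matched Gaussian-logistic approximation: Φ(y·√3/π) ≈ logistic(y)
errs = [abs(phi(y * math.sqrt(3) / math.pi) - logistic(y))
        for y in (i / 100.0 for i in range(-800, 801))]
assert 0.02 < max(errs) < 0.025   # max error ≈ 0.023, as stated above
```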

        The constant k = π/(σ√6) that appears in the Thurstone-Luce approximation (thurstoneLuceK) is the scale matching between Gaussian and Gumbel noise: it equates the variances σ² · 2 (Gaussian difference) and β² · π²/3 (logistic/Gumbel difference).
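The variance-matching algebra behind k is easily verified (σ = 0.8 is an arbitrary illustrative value):

```python
import math

sigma = 0.8                                # any σ > 0
beta = sigma * math.sqrt(6) / math.pi      # variance-matched Gumbel scale

# Gaussian difference variance 2σ² equals logistic variance β²·π²/3 ...
assert abs(2 * sigma ** 2 - beta ** 2 * math.pi ** 2 / 3) < 1e-12
# ... and 1/β recovers k = π/(σ·√6)
assert abs(1 / beta - math.pi / (sigma * math.sqrt(6))) < 1e-12
```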

        theorem Core.gumbelRUM_binary_eq_logistic (d' β : ℝ) (hβ : 0 < β) :
        mcfaddenIntegral (fun (i : Fin 2) => if i = 0 then d' / 2 else -(d' / 2)) β 0 = logistic (d' / β)

        Gumbel-Luce binary = logistic (exact): A Gumbel RUM with utilities [d'/2, -d'/2] and scale β gives choice probability logistic(d'/β).

        Compare with Thurstone Case V (hitRate_eq_thurstone), which gives Φ(d'/(σ√2)) — the same functional form but with the normal CDF instead of logistic.

        The two models are both RUMs; they agree when Φ ≈ logistic, i.e., when the variance-matched scale β = σ√6/π (see thurstoneLuceK).

        Stevens → Thurstone → SDT chain #

        The two Core/Agent/ psychophysics primitives — StevensScale (Stevens' power law ψ(s) = k · sⁿ, the deterministic intensity-to-percept mapping) and SDTModel (signal detection, the noisy discrimination operator) — sit in different regimes: one deterministic, one stochastic.

        This section composes the three: Stevens scale + Gaussian noise = Thurstone discriminal process; Thurstone for binary detection = SDT. Concretely, an observer with Stevens-scaled perception of intensity and Gaussian noise of SD σ discriminating two stimuli s_signal vs s_noise is an SDT observer with d' = (ψ(s_signal) - ψ(s_noise)) / σ.

        This is the standard psychophysics chain; cf. @cite{macmillan-creelman-2005} ch. 1 for the d′-vs-Stevens-Δψ relationship.
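A numeric sketch of the composition (the power-law parameters k, n and the noise SD σ are illustrative values, not fixed by the Lean development):

```python
def psi(s, k=1.0, n=0.33):
    """Stevens' power law ψ(s) = k·sⁿ (brightness-like exponent)."""
    return k * s ** n

def stevens_d_prime(s_signal, s_noise, sigma=0.1):
    """d' = (ψ(s_signal) - ψ(s_noise)) / σ."""
    return (psi(s_signal) - psi(s_noise)) / sigma

# a physically stronger signal yields positive sensitivity,
# identical stimuli yield zero
assert stevens_d_prime(2.0, 1.0) > 0
assert stevens_d_prime(1.0, 1.0) == 0
```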

        noncomputable def Core.StevensScale.toThurstone (sc : StevensScale) (sigma : ℝ) (h_pos : 0 < sigma) :

        Stevens → Thurstone constructor: Stevens' power-law perception with Gaussian discriminal dispersion σ is a Thurstone Case V model whose scale value at stimulus s is ψ(s) = k · sⁿ.

        Equations
        • sc.toThurstone sigma h_pos = { scale := fun (s : ℝ) => sc.psi s, sigma := sigma, sigma_pos := h_pos }
        Instances For
          @[simp]
          theorem Core.StevensScale.toThurstone_choiceProb (sc : StevensScale) (sigma : ℝ) (h_pos : 0 < sigma) (s₁ s₂ : ℝ) :
          (sc.toThurstone sigma h_pos).choiceProb s₁ s₂ = normalCDF ((sc.psi s₁ - sc.psi s₂) / (sigma * √2))

          The choice probability under Stevens-derived Thurstone is Φ((ψ(s₁) - ψ(s₂)) / (σ · √2)) — the standard Thurstone Case V formula applied to the Stevens-transformed stimuli.

          noncomputable def Core.StevensScale.dPrime (sc : StevensScale) (sigma s_signal s_noise : ℝ) :

          Stevens-derived d′: the SDT sensitivity for discriminating two stimuli s_signal vs s_noise under Stevens scaling and Gaussian noise of SD σ. Equals (ψ(s_signal) - ψ(s_noise)) / σ.

          This is the standard psychophysics formula: noise-normalized difference of perceived intensities, exactly as d' is defined in SDT (mean difference in σ units).

          Equations
          • sc.dPrime sigma s_signal s_noise = (sc.psi s_signal - sc.psi s_noise) / sigma
          Instances For
            noncomputable def Core.StevensScale.toSDT (sc : StevensScale) (sigma s_signal s_noise : ℝ) :

            Stevens → SDT 2AFC constructor: Stevens-scaled perception of two stimuli s_signal, s_noise with Gaussian noise σ produces a zero-criterion (unbiased) SDT observer with d' from StevensScale.dPrime.

            The observer has zero criterion bias because 2AFC has no criterion parameter — both alternatives are presented and the observer picks the one with the larger discriminal sample.

            Equations
            • sc.toSDT sigma s_signal s_noise = { dPrime := sc.dPrime sigma s_signal s_noise, criterion := 0 }
            Instances For
              theorem Core.StevensScale.toSDT_isUnbiased (sc : StevensScale) (sigma s_signal s_noise : ℝ) :
              (sc.toSDT sigma s_signal s_noise).IsUnbiased

              The Stevens-derived SDT observer is unbiased (zero criterion).

              theorem Core.StevensScale.twoAFC_eq_thurstone (sc : StevensScale) (sigma : ℝ) (h_pos : 0 < sigma) (s_signal s_noise : ℝ) :
              (sc.toSDT sigma s_signal s_noise).twoAFC = (sc.toThurstone sigma h_pos).choiceProb s_signal s_noise

              Stevens-Thurstone-SDT chain coherence: the 2AFC P(correct) predicted by the Stevens-derived SDT observer equals the Thurstone choice probability obtained by composing Stevens scaling with Gaussian noise.

              Both reduce to Φ((ψ(s_signal) - ψ(s_noise)) / (σ · √2)). The Stevens side computes via SDTModel.twoAFC = Φ(d'/√2) with d' = Δψ/σ; the Thurstone side computes via (sc.toThurstone σ).choiceProb. The two paths agree, validating the substrate composition.
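The agreement of the two paths can be checked numerically. In this sketch, the power-law parameters and σ are arbitrary illustrative values:

```python
from statistics import NormalDist

PHI = NormalDist().cdf
SQRT2 = 2 ** 0.5

def psi(s, k=1.0, n=0.5):
    return k * s ** n              # illustrative Stevens exponent

sigma, s_signal, s_noise = 0.2, 4.0, 1.0

# SDT path: d' = Δψ/σ, then twoAFC = Φ(d'/√2)
d = (psi(s_signal) - psi(s_noise)) / sigma
sdt_side = PHI(d / SQRT2)

# Thurstone path: choiceProb = Φ(Δψ / (σ·√2))
thurstone_side = PHI((psi(s_signal) - psi(s_noise)) / (sigma * SQRT2))

assert abs(sdt_side - thurstone_side) < 1e-12
```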

              theorem Core.StevensScale.dPrime_nonneg (sc : StevensScale) (sigma : ℝ) (h_pos : 0 < sigma) {s_signal s_noise : ℝ} (h_noise : 0 < s_noise) (h_le : s_noise ≤ s_signal) :
              0 ≤ sc.dPrime sigma s_signal s_noise

              Stevens-derived d′ is non-negative when s_signal ≥ s_noise and σ > 0, using psi's monotonicity (Stevens' power law is monotone in stimulus intensity for positive stimuli with positive exponent).