Linglib.Paradigms.VisualWorld

Visual-World Paradigm #

@cite{huettig-rommers-meyer-2011}

Shared vocabulary for the visual-world eye-tracking paradigm. A subject sees a display of objects and hears a sentence; their eye movements reveal incremental processing decisions before the referent is unambiguous from the speech signal.

Architectural role #

Paradigms/ is the contract layer between Theories/ and Phenomena/Studies/. Theories produce predictions in their native types; bridge theorems in Studies/ translate those predictions into paradigm-typed predictions and prove they satisfy the empirical patterns documented in the study file. The paradigm itself is theory-agnostic: it specifies what kind of input the experiment provides and what shape of output a theory must produce.

Anchoring on a methodological review #

This file's type structure follows the data-field ontology of @cite{huettig-rommers-meyer-2011}'s methodological review of the paradigm. Each paradigm-level type cites the section of the review it comes from, so the lineage stays auditable. New paradigm primitives should not be added without a corresponding review section motivating them — the discipline is to follow existing methodological consensus rather than invent categories.

Lens-shaped manipulation classes #

HasContrastCondition and HasTask are not projection-only — they also carry a set* lens with three lens laws (get_set, set_get, set_set). This is what makes them load-bearing: the paradigm-level predicates ContrastSpeedsResponse and ContrastReducesRoleLooks universally quantify over a base cell and apply the lens to swap the manipulated factor, so any study whose Cell type instances the class inherits the predicates without re-deriving them per design. A study whose Cell type cannot provide a lawful lens for a factor should not claim the typeclass for that factor.

HasDisplayKind is intentionally projection-only: display kind is typically a between-study constant rather than a within-study manipulation, so a lens would have no consumer.

Out of scope (per CLAUDE.md Processing scope) #

Display kind, per @cite{huettig-rommers-meyer-2011} §2.1.1.

Different display kinds tap different representations:

  • semiRealisticScene: line drawings of scenes; activates world knowledge about scenes/events. Used by Altmann & Kamide (1999).
  • objectArray: separate objects laid out on a workspace or screen. Minimises scene-level world knowledge. Used by Tanenhaus et al. (1995), Sedivy et al. (1999), and most adjective studies.
  • printedWord: written words instead of pictures. Taps orthographic processing. Introduced by Huettig & McQueen (2007).
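The three display kinds above could be enumerated as a single inductive; this is a sketch (constructor names follow the bullets; the `deriving` clause is an assumption):

```lean
/-- Sketch: display kinds per §2.1.1 of the review. -/
inductive DisplayKind where
  | semiRealisticScene  -- line drawings of scenes (Altmann & Kamide 1999)
  | objectArray         -- separate objects on a workspace (Tanenhaus et al. 1995)
  | printedWord         -- written words instead of pictures (Huettig & McQueen 2007)
  deriving DecidableEq, Repr
```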

Role an object plays in the visual display, per @cite{huettig-rommers-meyer-2011} §2.1.2.

The four-way split (target, contrasting object, cross-category competitor, distractor) is the standard vocabulary for studies that cross a same-category-contrast manipulation (Sedivy 1999, Engelhardt 2006, Ronderos 2024). Studies without a contrast manipulation typically use only target and distractor. Cohort/phonological paradigms (Allopenna 1998, Huettig & Altmann 2005) motivate the additional role distinctions enumerated below (cohort, rhyme, and semantic competitor); extend this type rather than encoding new roles in study-local enums.

      • target : ObjectRole

        Intended referent of the spoken expression.

      • contrastingObject : ObjectRole

        Same category as target, opposite pole on the relevant scale. Present only in contrast trials.

      • crossCategoryCompetitor : ObjectRole

        Different category from target, but further along the relevant scale. Always present.

      • cohortCompetitor : ObjectRole

        Phonological cohort competitor (shares onset with target word). Used in Allopenna et al. (1998), Huettig & Altmann (2005).

      • rhymeCompetitor : ObjectRole

        Rhyme competitor (shares offset/rhyme with target word). Used in Allopenna et al. (1998).

      • semanticCompetitor : ObjectRole

        Semantic competitor (semantically related to target, no phonological/visual overlap). Used in Huettig & Altmann (2005).

      • distractor : ObjectRole

        Unrelated object. Always present.
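The seven roles above can be collected into one inductive; a minimal sketch (the `deriving` clause is an assumption):

```lean
/-- Sketch: object roles per §2.1.2 of the review. -/
inductive ObjectRole where
  | target                   -- intended referent
  | contrastingObject        -- same category, opposite pole
  | crossCategoryCompetitor  -- different category, further along the scale
  | cohortCompetitor         -- shares onset with the target word
  | rhymeCompetitor          -- shares offset/rhyme with the target word
  | semanticCompetitor       -- semantically related, no form overlap
  | distractor               -- unrelated
  deriving DecidableEq, Repr
```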


Whether the display contains a same-category contrasting object.

The canonical between-condition manipulation in adjective-driven contrastive-inference paradigms.


Communicative task, per @cite{huettig-rommers-meyer-2011} §2.1.3.

Task choice affects which competition effects appear: look-and-listen tasks reveal general language–vision interactions, while direct-action tasks may impose specific task demands.

  • directAction: "Pick up the candy" (Allopenna et al. 1998, Sedivy et al. 1999 Exp 2 instruction).
  • lookAndListen: "The boy will eat the cake" (Altmann & Kamide 1999, Huettig & Altmann 2005).
  • verification: "Is there a tall glass?" (Sedivy et al. 1999 Exp 3).
  • description: "Tell me what you see" (production studies; Griffin & Bock 2000, Meyer et al. 1998).
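A sketch of the corresponding inductive (constructor names follow the bullets; the `deriving` clause is an assumption):

```lean
/-- Sketch: communicative tasks per §2.1.3 of the review. -/
inductive Task where
  | directAction   -- "Pick up the candy"
  | lookAndListen  -- "The boy will eat the cake"
  | verification   -- "Is there a tall glass?"
  | description    -- "Tell me what you see"
  deriving DecidableEq, Repr
```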

These are observable shapes — the form data takes when the eye-tracker reports it (or when a theory's predictions have been projected through a linking hypothesis to that form). They are deliberately polymorphic over the codomain R: empirical data tables naturally live in ℝ (proportions, mean ms), while theory predictions ride along their own numeric types (RSA posteriors, surprisal in nats). The qualitative predicates in §5 only use < (and Sub for difference scores), so they apply uniformly across codomains.

Linking-hypothesis caveat (deferred refactor): a theory like incremental RSA produces a posterior Word → Referent → ℝ, not fixation proportions. Mapping a posterior to a LookProportion requires an explicit linking hypothesis (e.g. the Allopenna, Magnuson & Tanenhaus 1998 Luce-choice rule over activations, or a Bayesian "looks proportional to L1 posterior" assumption). Today studies bridge that gap inline with a docstring naming the linking hypothesis. If/when a second linking hypothesis enters the codebase, extract them into a typed LinkingHypothesis module and make the bridge theorem statement mention the linking hypothesis by name.
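As a concrete instance of such a linking hypothesis, Allopenna, Magnuson & Tanenhaus (1998) pass lexical activations through the Luce choice rule; roughly:

```latex
% Predicted fixation probability for object i, given activations a_j
% and a free response-strength scaling constant k:
P(\text{fixate } i) \;=\; \frac{e^{k\,a_i}}{\sum_j e^{k\,a_j}}
```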

Latency observable: per-cell response time (ms in empirical tables; an arbitrary R for theory predictions that ride along a different numeric type).

Fixation-proportion observable: per-cell proportion of fixations on each ObjectRole. The role argument lets predicates make role-specific claims ("contrast reduces competitor looks") at the type level. Codomain R is polymorphic; see the §3 docstring.
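A sketch of the two observable shapes as reducible abbreviations. The role-before-cell argument order for LookProportion is an assumption; the actual order in the module may differ:

```lean
/-- Sketch: latency observable, one value per cell. -/
abbrev Latency (Cell R : Type) : Type := Cell → R

/-- Sketch: fixation-proportion observable, one value per role per cell. -/
abbrev LookProportion (Cell R : Type) : Type := ObjectRole → Cell → R
```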

Studies' Cell types vary in what manipulations they cross (typicality, task, prime, frequency, …). The shared minimum is that every visual-world cell with a contrast manipulation has a contrast condition AND a way to swap that condition while holding the rest of the cell constant. The lens laws (get_set, set_get, set_set) make setContrast a proper lens; the paradigm-level predicates in §5 rely on them to express "swapping the contrast condition slows RT" as a statement that quantifies over the rest of the cell uniformly.

Cell has a contrast condition that can be swapped without touching other factors. Lens laws are stated as fields so that the paradigm predicates can rewrite with them when discharging concrete proofs.
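A sketch of the class shape described above. Field names other than setContrast are assumptions; the lens laws follow the prose:

```lean
/-- Sketch: contrast condition plus a lawful lens over it. -/
class HasContrastCondition (Cell : Type) where
  contrast    : Cell → ContrastCondition
  setContrast : ContrastCondition → Cell → Cell
  -- the three lens laws, stated as fields so proofs can rewrite with them
  get_set : ∀ (v : ContrastCondition) (c : Cell), contrast (setContrast v c) = v
  set_get : ∀ (c : Cell), setContrast (contrast c) c = c
  set_set : ∀ (v w : ContrastCondition) (c : Cell),
    setContrast v (setContrast w c) = setContrast v c
```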


Cell has a task that can be swapped without touching other factors. Same lens-law shape as HasContrastCondition.


Cell has a display kind. Projection-only: display kind is a between-study constant in essentially all visual-world studies, so no lens is required. Studies that do manipulate display kind within-subjects should add their own lens API rather than inflating this class.


These predicates state empirical patterns at the paradigm level — they are written in terms of setContrast (the lens), so any Cell type with a HasContrastCondition instance can claim them without re-deriving the universal-quantification machinery per study. The predicates are abstract; they express a shape ("swapping the contrast condition reduces RT") that empirical studies and theoretical models may or may not satisfy.

A latency observable satisfies the contrast-speeds-response pattern if swapping any cell's contrast condition from contrast to noContrast strictly increases the latency. The lens laws guarantee that "swapping" only changes the contrast factor.
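A sketch of the predicate's likely shape, assuming Latency Cell R unfolds to Cell → R, the lens field is named setContrast, and ContrastCondition has constructors contrast and noContrast:

```lean
/-- Sketch: contrast trials are strictly faster than matched no-contrast trials. -/
def ContrastSpeedsResponse {Cell R : Type} [HasContrastCondition Cell] [LT R]
    (rt : Cell → R) : Prop :=
  ∀ c : Cell,
    rt (HasContrastCondition.setContrast .contrast c)
      < rt (HasContrastCondition.setContrast .noContrast c)
```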


A look-proportion observable satisfies the contrast-reduces-role-looks pattern at role if swapping any cell's contrast condition from contrast to noContrast strictly increases the proportion of looks to role. The role argument lets a study scope the claim to a particular competitor type.


Specialisation of ContrastReducesRoleLooks to the cross-category-competitor role — the canonical effect in same-category contrast paradigms (Sedivy 1999, etc.).


A look-proportion observable satisfies the contrast-speeds-target-looks pattern if swapping any cell's contrast condition from contrast to noContrast strictly decreases the proportion of looks to the target. (Sedivy 1999 Tables 7, 11 also show target-look acceleration; this is the dual of ContrastReducesCompetitorLooks.)


Studies that cross multiple within-subject factors (e.g. @cite{ronderos-etal-2024}'s 2 × 3 contrast × adjective-type design) need to express the contrast-effect predicates per stratum rather than universally over all cells — Ronderos finds a contrast effect for color and scalar adjectives but not for material adjectives, so a universal claim over the cell type is empirically false.

The When variants below take a sub-cell predicate P : Cell → Prop and restrict the contrast-effect claim to cells satisfying P. The universal-form predicates of §5 are the special case P := fun _ => True (made explicit in ContrastReducesRoleLooks_iff_when_True).

The contrastEffect accessor projects out the difference between the no-contrast and contrast values at a given cell — the paradigm-level shape of the "target advantage" or "competitor reduction" measure that downstream studies report. With [Sub R] we can quantitatively compare effects across strata via ContrastEffectLargerFor, which is the qualitative shape of an "X × condition" interaction. The predicates here only constrain qualitative relationships (strict inequality between matched cells); statistical readings ("the mean β is larger for color than material") need a real-valued aggregator and are out of scope for the paradigm contract.

def Paradigms.VisualWorld.contrastEffect {Cell R : Type} [HasContrastCondition Cell] [Sub R] (role : ObjectRole) (looks : LookProportion Cell R) (c : Cell) : R

Effect size of the contrast manipulation on role looks at cell c: the difference between the no-contrast and contrast values. Positive when contrast strictly reduces role looks (the canonical direction in same-category contrast paradigms). The lens laws on setContrast ensure this depends only on the non-contrast factors of c.
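The unrendered equation is presumably the straightforward difference; a sketch, assuming LookProportion Cell R unfolds to ObjectRole → Cell → R and the lens field is named setContrast:

```lean
/-- Sketch: no-contrast looks minus contrast looks, at the given role. -/
def contrastEffect {Cell R : Type} [HasContrastCondition Cell] [Sub R]
    (role : ObjectRole) (looks : LookProportion Cell R) (c : Cell) : R :=
  looks role (HasContrastCondition.setContrast .noContrast c)
    - looks role (HasContrastCondition.setContrast .contrast c)
```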

def Paradigms.VisualWorld.ContrastReducesRoleLooksWhen {Cell R : Type} [HasContrastCondition Cell] [LT R] (P : Cell → Prop) (role : ObjectRole) (looks : LookProportion Cell R) : Prop

Stratified ContrastReducesRoleLooks: the role-look reduction holds for every cell satisfying P. The original universal-form predicate is ContrastReducesRoleLooksWhen (fun _ => True).
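A sketch of the unrendered body, under the same assumptions as elsewhere in this file (curried LookProportion, lens field named setContrast):

```lean
/-- Sketch: the role-look reduction, restricted to cells satisfying P. -/
def ContrastReducesRoleLooksWhen {Cell R : Type} [HasContrastCondition Cell] [LT R]
    (P : Cell → Prop) (role : ObjectRole) (looks : LookProportion Cell R) : Prop :=
  ∀ c : Cell, P c →
    looks role (HasContrastCondition.setContrast .contrast c)
      < looks role (HasContrastCondition.setContrast .noContrast c)
```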

def Paradigms.VisualWorld.ContrastEffectLargerFor {Cell R : Type} [HasContrastCondition Cell] [Sub R] [LT R] (role : ObjectRole) (P Q : Cell → Prop) (looks : LookProportion Cell R) : Prop

Interaction predicate: the contrast effect on role looks is strictly larger in the P stratum than in the Q stratum. The paradigm-level shape of an "X × condition" interaction — e.g. @cite{ronderos-etal-2024}'s adjective-type × contrast interaction, where color/scalar cells show a contrast effect that material cells lack. Universal quantification over both strata is the strongest qualitative reading: every P-cell's effect strictly exceeds every Q-cell's effect. Mean-over-mean readings need a real-valued aggregator; the paradigm only commits to the strict-pairwise qualitative shape.
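A sketch of the strict-pairwise reading described above, in terms of contrastEffect:

```lean
/-- Sketch: every P-cell's contrast effect strictly exceeds every Q-cell's. -/
def ContrastEffectLargerFor {Cell R : Type} [HasContrastCondition Cell] [Sub R] [LT R]
    (role : ObjectRole) (P Q : Cell → Prop) (looks : LookProportion Cell R) : Prop :=
  ∀ cp cq : Cell, P cp → Q cq →
    contrastEffect role looks cq < contrastEffect role looks cp
```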

theorem Paradigms.VisualWorld.ContrastReducesRoleLooks_iff_when_True {Cell R : Type} [HasContrastCondition Cell] [LT R] (role : ObjectRole) (looks : LookProportion Cell R) :
ContrastReducesRoleLooks role looks ↔ ContrastReducesRoleLooksWhen (fun (x : Cell) => True) role looks

The original universal-form ContrastReducesRoleLooks is the True-stratum case of ContrastReducesRoleLooksWhen. Studies that move to the stratified API can recover their existing universal claims by passing the trivial filter.

Several visual-world studies report a baseline analysis comparing overall fixations on a subset of roles in the no-contrast (or other designated baseline) condition across strata.

The pattern is: pick a baseline value of the contrast factor, sum looks across a designated subset of roles (target + competitor in the studies that report this analysis), and compare the sums between strata. The predicate below packages this. The role-sum is computed via List.sum so it generalises to any subset; the baseline is the contrast condition the sum is computed in.

def Paradigms.VisualWorld.roleSum {Cell R : Type} [Add R] [Zero R] (roles : List ObjectRole) (looks : LookProportion Cell R) (c : Cell) : R

Sum of looks across a list of roles at cell c. Generalises the "looks at target + competitor" projection used in baseline analyses.
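Given the List.sum phrasing above, the body is presumably a map-then-sum; a sketch assuming curried LookProportion:

```lean
/-- Sketch: sum of per-role looks over the given list of roles. -/
def roleSum {Cell R : Type} [Add R] [Zero R]
    (roles : List ObjectRole) (looks : LookProportion Cell R) (c : Cell) : R :=
  (roles.map (fun role => looks role c)).sum
```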

def Paradigms.VisualWorld.RoleSumLowerInBaselineWhen {Cell R : Type} [HasContrastCondition Cell] [Add R] [Zero R] [LT R] (baseline : ContrastCondition) (roles : List ObjectRole) (P Q : Cell → Prop) (looks : LookProportion Cell R) : Prop

In the baseline contrast condition, the role-sum for roles is strictly smaller for cells satisfying P than for cells satisfying Q. Captures Aparicio-style "stratum X requires more processing" findings expressed as reduced fixations on the critical-role subset. The lens fixes the contrast factor to baseline so the comparison isolates the stratum effect from any contrast effect.
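A sketch of the unrendered body, with the lens fixing the contrast factor to baseline on both sides (same curried-LookProportion and setContrast assumptions as the other sketches in this file):

```lean
/-- Sketch: in the baseline condition, P-cells' role-sum is strictly below Q-cells'. -/
def RoleSumLowerInBaselineWhen {Cell R : Type} [HasContrastCondition Cell]
    [Add R] [Zero R] [LT R] (baseline : ContrastCondition) (roles : List ObjectRole)
    (P Q : Cell → Prop) (looks : LookProportion Cell R) : Prop :=
  ∀ cp cq : Cell, P cp → Q cq →
    roleSum roles looks (HasContrastCondition.setContrast baseline cp)
      < roleSum roles looks (HasContrastCondition.setContrast baseline cq)
```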
