Documentation

Linglib.Phenomena.TenseAspect.Studies.Koev2017

@cite{koev-2017}: Bulgarian Evidentials and Spatiotemporal Distance #

The Bulgarian evidential (the -l participle) is felicitous when the speaker's evidence acquisition is spatiotemporally distant from the described event: either temporally non-overlapping (standard indirect evidence) or spatially distant (same time, different place). Direct witnessing (same time, same place) is infelicitous. In addition, the evidential contribution projects past negation and modals (it is not-at-issue), and the speaker is fully committed to the proposition (a non-modal analysis, contra @cite{izvorski-1997}).

Main definitions #

TODO #

References #

Spatiotemporal Overlap Types #

Whether the described event and the learning event overlap in time.
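One way this two-way distinction could be rendered in Lean. This is a minimal sketch: the constructor names `overlapping`/`nonOverlapping` are illustrative assumptions, not necessarily those of the actual definition.

```lean
/-- Whether the described event and the learning event overlap in time. -/
inductive TemporalOverlap where
  | overlapping     -- direct witness: learning happens as the event unfolds
  | nonOverlapping  -- standard indirect evidence: learning happens at another time
  deriving DecidableEq, Repr
```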


      Whether the described event and the learning event share a location.
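A matching sketch for the spatial dimension (constructor names again assumed, not taken from the source):

```lean
/-- Whether the described event and the learning event share a location. -/
inductive SpatialRelation where
  | samePlace       -- speaker at the scene of the event
  | differentPlace  -- e.g. smoke seen from another location (Koev 2017, 25b)
  deriving DecidableEq, Repr
```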


          Evidential Datum Structure #

          An evidential felicity datum from @cite{koev-2017}. Each records the spatiotemporal configuration of the described event and the learning event, and whether the evidential is felicitous.

          • temporal : TemporalOverlap

            Temporal overlap between described and learning events

          • spatial : SpatialRelation

            Spatial relation between described and learning events

          • evidentialFelicitous : Bool

            Whether the Bulgarian evidential is felicitous in this configuration

          • exampleNum : String

            Example number in @cite{koev-2017}
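Putting the four documented fields together, the structure plausibly looks as follows. This is a sketch built only from the field list above; the structure's actual name is assumed.

```lean
/-- An evidential felicity datum from Koev (2017). -/
structure EvidentialDatum where
  temporal : TemporalOverlap    -- temporal overlap of described/learning events
  spatial : SpatialRelation     -- spatial relation of described/learning events
  evidentialFelicitous : Bool   -- is the Bulgarian evidential felicitous here?
  exampleNum : String           -- example number in Koev (2017)
```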


                Core △ Data (@cite{koev-2017}, §4) #

                (3)/(25a): Standard indirect evidence — speaker was not present when the event occurred. Non-overlapping in time, same place. Felicitous.


                  Direct witness — speaker perceived the event as it happened. Overlapping in time, same place. Infelicitous.


                    (25b): Smoke from chimney — speaker perceives evidence of the event from a different location, at the same time. Overlapping in time, different place. Felicitous — spatial distance suffices.
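The three configurations just described can be collected as data points. This sketch assumes the `EvidentialDatum` fields documented above; the anonymous-constructor syntax and the example label for the unnumbered direct-witness case are assumptions.

```lean
def coreData : List EvidentialDatum :=
  [ ⟨.nonOverlapping, .samePlace, true, "(3)/(25a)"⟩,  -- standard indirect evidence
    ⟨.overlapping, .samePlace, false, "witness"⟩,      -- direct witness
    ⟨.overlapping, .differentPlace, true, "(25b)"⟩ ]   -- smoke from chimney
```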


                      Commitment and Projection Data #

                      The evidential does not weaken commitment: "EV(p) and I know because I was there" is not contradictory (unlike a modal which would predict contradiction). @cite{koev-2017}, §3.


                        The evidential contribution projects past negation: "It is not the case that Ivan EV-came" presupposes indirect evidence while negating the proposition. @cite{koev-2017}, §5.


                          △ Felicity Generalization #

                          △ predicts felicity: the evidential is felicitous iff the described event and the learning event are spatiotemporally distant (temporally non-overlapping or spatially distant).
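As a Boolean predictor over a datum, the generalization is a single disjunction. A sketch, with function and constructor names assumed as above:

```lean
/-- Felicitous iff temporally non-overlapping OR spatially distant. -/
def predictsFelicitous (d : EvidentialDatum) : Bool :=
  d.temporal == .nonOverlapping || d.spatial == .differentPlace
```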


                            Data Verification #

                            theorem Koev2017.core_count :
                            coreData.length = 3

                            There are 3 core data points.

                            △ correctly predicts felicity for all core data points.
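Both checks reduce to computation over a closed, decidable data list. A sketch of how such proofs typically go in Lean; the second statement's name and the `predictsFelicitous` predicate name are assumptions:

```lean
theorem core_count : coreData.length = 3 := rfl  -- evaluates the list length

-- △ predicts each datum's felicity; `decide` evaluates the Boolean check.
theorem core_predictions_correct :
    (coreData.all fun d => predictsFelicitous d == d.evidentialFelicitous) = true := by
  decide
```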

                            Bridge: Connecting to Linglib Infrastructure #

Bridge theorems connecting @cite{koev-2017}'s spatiotemporal distance analysis to the existing linglib infrastructure, organized around the paper's four properties (property 6), plus structural bridges.

                            Central Claim: Learning Events #

                            The paper's deepest contribution is ontological: evidentials introduce a learning event e_l — the event through which the speaker acquired the reported information. The formal representation (74b):

                            ∃e_l ∧ learn_{cs(k)}(e_l, sp(k), p) ∧ τ(e_l) ≤ time(k) ∧ e △ e_l

                            The learn predicate is subscripted with cs(k) (context set), not with p (scope proposition). This is the formal mechanism for not-at-issue status: the evidential restricts the context set directly (≈ presupposition), while the assertion commits the speaker to p via DECL (72).

                            Spatiotemporal distance △ substrate (inlined) #

@cite{koev-2017} Definition 24: two events satisfy △ when either their temporal traces don't overlap (standard indirect evidence) or they occur at different locations (smoke-from-chimney scenario). Inlined from the former Theories/Semantics/Events/SpatiotemporalDistance.lean: a single-consumer substrate (this file is its only consumer), anchored entirely on Koev 2017. Architectural note: Event Time lacks a built-in location field, so △ is parameterized by an external loc : Event Time → L.

def Koev2017.temporallyDisjoint {Time : Type u_1} [LinearOrder Time] (e₁ e₂ : Semantics.Events.Event Time) : Prop

                            Two events are temporally disjoint when their temporal traces do not overlap (Koev 2017, first disjunct of Def. 24).
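Given a trace projection `τ` with an `overlaps` relation, as in the signatures on this page, the definition is plausibly just the negation. A sketch:

```lean
def temporallyDisjoint (e₁ e₂ : Event Time) : Prop :=
  ¬ e₁.τ.overlaps e₂.τ  -- first disjunct of Koev 2017, Def. 24
```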

def Koev2017.spatiotemporallyDistant {Time : Type u_1} [LinearOrder Time] {L : Type u_2} [DecidableEq L] (loc : Semantics.Events.Event Time → L) (e₁ e₂ : Semantics.Events.Event Time) : Prop

                              Spatiotemporal distance △ (Koev 2017, Def. 24). Two events are spatiotemporally distant if either their temporal traces don't overlap or they occur at different locations.
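The full relation then disjoins the temporal and the spatial case, matching the two disjuncts of Def. 24. A sketch:

```lean
def spatiotemporallyDistant (loc : Event Time → L) (e₁ e₂ : Event Time) : Prop :=
  temporallyDisjoint e₁ e₂ ∨ loc e₁ ≠ loc e₂
```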

theorem Koev2017.temporallyDisjoint_of_precedes {Time : Type u_1} [LinearOrder Time] (e₁ e₂ : Semantics.Events.Event Time) (h : e₁.τ.precedes e₂.τ) :
temporallyDisjoint e₁ e₂

                                If e₁ temporally precedes e₂, they are temporally disjoint (standard indirect evidence: described event finished before learning event started).

theorem Koev2017.disjoint_earlier_implies_isBefore {Time : Type u_1} [LinearOrder Time] (e₁ e₂ : Semantics.Events.Event Time) (hd : temporallyDisjoint e₁ e₂) (hearlier : e₁.τ.start ≤ e₂.τ.start) :
                                e₁.τ.isBefore e₂.τ

                                If two events are temporally disjoint and the first starts no later than the second, then the first is temporally before the second: bridges Koev's event-based △ to Cumming's point-based T ≤ A.

                                theorem Koev2017.overlapping_not_disjoint {Time : Type u_1} [LinearOrder Time] (e₁ e₂ : Semantics.Events.Event Time) (h : e₁.τ.overlaps e₂.τ) :
                                ¬temporallyDisjoint e₁ e₂

                                Overlapping runtimes are incompatible with temporal distance.

theorem Koev2017.spatiotemporallyDistant_of_different_location {Time : Type u_1} [LinearOrder Time] {L : Type u_2} [DecidableEq L] (loc : Semantics.Events.Event Time → L) (e₁ e₂ : Semantics.Events.Event Time) (h : loc e₁ ≠ loc e₂) :
spatiotemporallyDistant loc e₁ e₂

                                Spatial distance alone suffices for △ (Koev 2017, ex. 25b).

theorem Koev2017.spatiotemporallyDistant_of_temporallyDisjoint {Time : Type u_1} [LinearOrder Time] {L : Type u_2} [DecidableEq L] (loc : Semantics.Events.Event Time → L) (e₁ e₂ : Semantics.Events.Event Time) (h : temporallyDisjoint e₁ e₂) :
spatiotemporallyDistant loc e₁ e₂

                                Temporal disjointness alone suffices for △.

                                Learning Scenarios (@cite{koev-2017}, §4) #

                                structure Koev2017.LearningScenario (Time : Type u_1) [LinearOrder Time] :
                                Type u_1

                                A learning scenario: the evidential introduces a learning event e_l — the event through which the speaker acquired the reported information — paired with the described event e.

                                The paper's representation (74b): ∃e_l ∧ learn_{cs(k)}(e_l, sp(k), p) ∧ τ(e_l) ≤ time(k) ∧ e △ e_l
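A sketch of the structure, using the `described`/`learning` field names that appear later on this page:

```lean
structure LearningScenario (Time : Type*) [LinearOrder Time] where
  described : Event Time  -- the event e reported by the sentence
  learning  : Event Time  -- the learning event e_l introduced by the evidential
```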

                                The cs(k) Subscript #

                                The learn predicate is subscripted with cs(k) (the context set at discourse move k), not with the scope proposition p. This is the formal mechanism for not-at-issue status: the evidential contribution restricts the context set directly (≈ presupposition in PrProp.presup), while the assertion commits the speaker to p via DECL (72), which maps to PrProp.assertion.

                                The mapping is:

• learn_{cs(k)}(e_l, sp(k), p) ↦ PrProp.presup (restricts cs)
• DECL (72): dc^sp(c) ⊆ p ↦ PrProp.assertion (commits to p)

                                This explains why the evidential projects past negation (property 6iv): PrProp.neg preserves presup while negating assertion.
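The projection step can be made concrete with a sketch of `PrProp` and its negation rule, assuming only the `presup`/`assertion` fields named on this page (the actual library definition may differ):

```lean
structure PrProp (W : Type*) where
  presup : W → Prop     -- not-at-issue: restricts the context set
  assertion : W → Prop  -- at-issue: what the speaker commits to

/-- Negation flips the assertion but leaves the presupposition intact,
    so the evidential's △ contribution projects. -/
def PrProp.neg (φ : PrProp W) : PrProp W :=
  { presup := φ.presup, assertion := fun w => ¬ φ.assertion w }
```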

                                What's Captured #

                                What's Not Captured (Future Work) #

                                • The learn predicate itself: We don't model the knowledge-change semantics of learn(e_l, sp(k), p). This would require time-indexed epistemic states: K_sp(p, t) ∧ ¬K_sp(p, t') for t' < τ(e_l).
                                • Propositional content p: The structure pairs events but doesn't carry the proposition learned. Adding p : W → Bool would require a world type parameter constraining downstream usage.
                                • Speech time constraint: τ(e_l) ≤ time(k) ensures the learning event is past. This interacts with tense morphology (the L-participle is morphologically past) but is not modeled here.
                                • Evidence source typology: The paper distinguishes reportative, inferential, and assumptive evidence (§5) via different learn predicates. We collapse these into a single △ constraint.
def Koev2017.LearningScenario.isTemporallyDisjoint {Time : Type u_1} [LinearOrder Time] (s : LearningScenario Time) : Prop

                                  △ holds for this scenario (temporal component): the described and learning events have non-overlapping temporal traces.

def Koev2017.LearningScenario.isSpatiotemporallyDistant {Time : Type u_1} [LinearOrder Time] {L : Type u_2} [DecidableEq L] (loc : Semantics.Events.Event Time → L) (s : LearningScenario Time) : Prop

                                    △ holds for this scenario (full spatiotemporal version): temporal disjointness OR spatial distance.


                                      Computable temporal △ for ℤ events: ¬(τ(e) overlaps τ(e_l)). Since integer comparison is decidable, we can evaluate △ from the event structure directly.
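Using the `start`/`finish` trace fields mentioned in the Cumming bridge section of this page (an assumption about the interval API), the Boolean check could be sketched as:

```lean
/-- Computable temporal △ for ℤ events: the closed traces do not overlap. -/
def triangleTemporalB (s : LearningScenario ℤ) : Bool :=
  s.described.τ.finish < s.learning.τ.start ||
  s.learning.τ.finish < s.described.τ.start
```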


                                        triangleTemporalB agrees with the propositional isTemporallyDisjoint: the Bool computation and the Prop predicate coincide for ℤ events.

                                        Construct a PrProp from a learning scenario, making the cs(k) → presup mapping constructive.

                                        The presupposition is derived from the event structure (△ holds or not), and the assertion is the scope proposition p (committed via DECL). This is the concrete realization of Koev's (74b):

                                        • presup := △(described, learning) — the evidential's cs(k) contribution
                                        • assertion := p — the scope proposition
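A sketch of the construction, assuming a `PrProp` with the `presup`/`assertion` fields named on this page and a computable `triangleTemporalB`; making the presupposition world-independent is a simplifying assumption:

```lean
def LearningScenario.toEvidentialProp (s : LearningScenario ℤ) (p : W → Prop) :
    PrProp W where
  presup := fun _ => triangleTemporalB s = true  -- △ : the cs(k) restriction
  assertion := p                                 -- DECL : full commitment to p
```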

                                          Concrete Scenarios #

                                          Described event: interval [0, 5].


                                            Learning event (indirect): interval [10, 15] — strictly later.


                                              Learning event (direct witness): interval [2, 4] — overlaps described.


                                                Learning event (spatial distance): interval [0, 5] — same time, different place (smoke from chimney).
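The four events can be sketched as ℤ-interval events; the definition names and the anonymous-constructor syntax for `Event` and its trace are assumptions:

```lean
def described : Event ℤ := { τ := ⟨0, 5⟩ }        -- described event [0, 5]
def learnIndirect : Event ℤ := { τ := ⟨10, 15⟩ }  -- strictly later: indirect evidence
def learnWitness : Event ℤ := { τ := ⟨2, 4⟩ }     -- overlaps: direct witness
def learnSmoke : Event ℤ := { τ := ⟨0, 5⟩ }       -- same time, different place
```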


                                                  Indirect evidence scenario: described event [0,5], learning event [10,15].


                                                    Smoke-from-chimney scenario: described event [0,5], learning event [0,5] at a different location.


                                                      Property (i): Spatiotemporal Meaning — △ #

                                                      Indirect evidence: described and learning events are temporally disjoint — described event [0,5] finished before learning event [10,15] started. △ satisfied via temporal disjointness.

                                                      Direct witness: described event [0,5] and learning event [2,4] overlap. They are NOT temporally disjoint — △ fails (when also co-located).

                                                      The smoke scenario events temporally overlap — temporal disjointness alone does NOT yield △ here.

                                                      Despite temporal overlap, any location function assigning different locations to the described and learning events yields △. This captures the smoke-from-chimney scenario (§4): spatial distance suffices.

                                                      △ vs. Temporal Ordering (Independent Constraints) #

The paper separates two constraints in (74b):

• e △ e_l : spatiotemporal distance (the evidential's contribution)
• τ(e) < τ(e_l) : temporal ordering (the past tense's contribution)

These are independent: △ can hold via spatial distance alone (the smoke scenario has △ without temporal ordering), and temporal ordering is imposed by tense morphology, not the evidential.

                                                      Temporal ordering: the described event PRECEDES the learning event. This is the past tense's contribution, NOT the evidential's. Paper (74b): τ(e) < τ(e_l).

                                                      The smoke scenario has NO temporal ordering (events are simultaneous), yet △ holds via spatial distance. This demonstrates that △ and temporal ordering are independent constraints.
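Under the definitions sketched on this page, the independence claim has a one-line witness: spatial distance yields △ with no appeal to temporal order.

```lean
example (loc : Event ℤ → Bool) (e e_l : Event ℤ) (h : loc e ≠ loc e_l) :
    spatiotemporallyDistant loc e e_l :=
  Or.inr h  -- the spatial disjunct alone, regardless of the traces
```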

                                                      The Four Properties (Derived from toEvidentialProp) #

                                                      All four properties (property 6) follow from toEvidentialProp:

                                                      - **(i) Spatiotemporal meaning**: presup = △(described, learning),
                                                        derived from event structure via `triangleTemporalB`
                                                      - **(ii) Speaker commitment**: assertion = p (non-modal, full commitment)
                                                      - **(iii) Not at issue**: △ is in presup (cs restriction), not assertion
                                                      - **(iv) Projection**: PrProp.neg preserves presup → △ projects past ¬ 
                                                      

                                                      Property (6i): the presupposition of the constructed PrProp IS the △ condition, derived from the event structure. When △ holds (indirect evidence), the presupposition is satisfied at every world.

                                                      When △ fails (direct witness), the presupposition fails — the evidential sentence is undefined (infelicitous).

theorem Koev2017.direct_presup_fails {W : Type u_1} (p : W → Prop) (w : W) :
theorem Koev2017.assertion_is_scope (s : LearningScenario ℤ) {W : Type u_1} (p : W → Prop) :

                                                        Property (6ii): the assertion of a scenario's PrProp IS the scope proposition. The speaker commits to p, not to a modalized version. This holds by construction: DECL (72) maps to PrProp.assertion.

def Koev2017.modalEvidential {W : Type u_1} (evidence : Bool) (must_p : W → Prop) : PrProp W

                                                        A modal evidential would assert □_e(p) — "p must be true given evidence e" — a DIFFERENT proposition from p.

                                                        This is a simplified stub; the full Kratzer-grounded version is Izvorski1997.Bridge.izvorskiEv, which uses necessity f g p as the assertion and !(accessibleWorlds f w).isEmpty as the presup.
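A sketch of the simplified stub, assuming a `PrProp` with the `presup`/`assertion` fields named on this page; the particular presupposition recorded here (evidence availability) is an assumption:

```lean
def modalEvidential (evidence : Bool) (must_p : W → Prop) : PrProp W where
  presup := fun _ => evidence = true  -- some indirect evidence is available
  assertion := must_p                 -- asserts the modalized □(p), not p itself
```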

theorem Koev2017.modal_can_weaken :
∃ (p : Unit → Prop) (must_p : Unit → Prop), (indirectScenario.toEvidentialProp p).assertion ≠ (modalEvidential true must_p).assertion

                                                          The modal analysis CAN weaken the assertion: there exist instantiations where the modal's assertion diverges from the scope proposition, while Koev's assertion is always p by construction.

                                                          Property (6iv): the evidential presupposition projects past negation. Negating the evidential negates the assertion (p → ¬p) but preserves the presupposition (△). This follows from PrProp's general negation rule and captures the paper's formalization (78).

                                                          Bridge to @cite{cumming-2026}: △ → T ≤ A #

                                                          For the indirect evidence case, temporal disjointness + ordering gives isBefore: τ(e).finish ≤ τ(e_l).start.

                                                          Construct Cumming's EvidentialFrame from the learning scenario: T = τ(e).finish, A = τ(e_l).start. This bridges Koev's event-based analysis to Cumming's point-based (S, A, T) frame.
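A sketch of the frame construction. The `EvidentialFrame` field names `S`/`A`/`T` and the extra speech-time argument are assumptions based on the (S, A, T) description; only the T and A equations are stated in the text above.

```lean
def LearningScenario.toEvidentialFrame (s : LearningScenario ℤ) (speechTime : ℤ) :
    EvidentialFrame where
  S := speechTime            -- speech time
  A := s.learning.τ.start    -- acquisition time: start of the learning event
  T := s.described.τ.finish  -- event time: finish of the described event
```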


                                                            Cumming's downstream evidence (T ≤ A) holds for the indirect frame — the temporal special case of Koev's △.

                                                            Bridge to Existing Fragment #

                                                            The existing Bulgarian nfutL entry has EP = downstream (T ≤ A), which is the temporal special case of Koev's △: when spatial distance is not at play, △ reduces to temporal disjointness, and temporal disjointness + described-before-learning gives T ≤ A.