Quantifier Composition via Predicate Abstraction #
Demonstrates that interp composes quantificational sentences
end-to-end: lexicon → syntax tree (with QR traces and binders) →
truth conditions.
H&K Pipeline (@cite{heim-kratzer-1998} Ch. 5) #
After Quantifier Raising moves a DP to a higher position, it leaves a
trace tₙ and creates a binder node n. Predicate Abstraction (PA)
converts the binder + body into λx. ⟦body⟧^{g[n↦x]}, producing a
predicate that the raised quantifier takes as its scope argument.
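The Traces rule and PA can be sketched in standalone Lean, independent of this module's actual `interp` (all names below are illustrative, not this module's definitions):

```lean
namespace PASketch
-- Illustrative sketch: entities, assignments, and Predicate Abstraction
-- as assignment-shifting λ-abstraction.
abbrev Entity := Nat
abbrev Assignment := Nat → Entity

-- g[n ↦ x]: the assignment that maps index n to x and agrees with g elsewhere.
def shift (g : Assignment) (n : Nat) (x : Entity) : Assignment :=
  fun m => if m = n then x else g m

-- Traces rule: ⟦tₙ⟧^g = g(n).
def traceDenot (n : Nat) (g : Assignment) : Entity := g n

-- PA: ⟦[n body]⟧^g = λx. ⟦body⟧^{g[n↦x]}.
def predAbs (n : Nat) (body : Assignment → Prop) (g : Assignment) :
    Entity → Prop :=
  fun x => body (shift g n x)
end PASketch
```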
"Every student sleeps" after QR:
[S [DP [D every] [N student]] [1 [S [t₁] [VP sleeps]]]]
Evaluated as:
⟦t₁⟧^g = g(1)  (Traces rule)
⟦sleeps⟧ = sleeps'  (TN)
⟦[t₁ sleeps]⟧^g = sleeps'(g(1))  (FA)
⟦[1 [t₁ sleeps]]⟧^g = λx. sleeps'(x)  (PA)
⟦every student⟧ = every'(student')  (FA)
⟦S⟧ = every'(student')(λx. sleeps'(x))  (FA)
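The final step of the derivation can be replayed as plain Lean Props. The toy predicates and the encoding John = 0, Mary = 1 are assumptions of this sketch, not the module's lexicon:

```lean
namespace EveryStudentSketch
abbrev Entity := Nat

-- Toy model (illustrative): John = 0, Mary = 1; only John sleeps.
abbrev student' : Entity → Prop := fun x => x = 0 ∨ x = 1
abbrev sleeps'  : Entity → Prop := fun x => x = 0

-- ⟦every⟧ = λP. λQ. ∀x, P x → Q x  (generalized-quantifier denotation)
def every' (P Q : Entity → Prop) : Prop := ∀ x, P x → Q x

-- ⟦S⟧ = every'(student')(λx. sleeps'(x))
def everyStudentSleeps : Prop := every' student' (fun x => sleeps' x)

-- False in this model: Mary (= 1) is a student but does not sleep.
example : ¬ everyStudentSleeps :=
  fun h => Nat.one_ne_zero (h 1 (Or.inr rfl))
end EveryStudentSketch
```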
Scope Ambiguity #
"Everybody loves somebody" yields two readings from two QR structures:
- Surface scope (∀>∃): every person scopes above some person
- Inverse scope (∃>∀): some person scopes above every person
The two trees differ only in which quantifier is raised higher.
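The two readings can be written out as Lean Props over an abstract model (names here are illustrative); the familiar one-way entailment between them is a two-line proof:

```lean
namespace ScopeSketch
abbrev Entity := Nat
variable (person : Entity → Prop) (loves : Entity → Entity → Prop)

-- Surface scope (∀>∃): every person loves some person or other.
def surface : Prop := ∀ x, person x → ∃ y, person y ∧ loves x y

-- Inverse scope (∃>∀): one particular person is loved by every person.
def inverse : Prop := ∃ y, person y ∧ ∀ x, person x → loves x y

-- Inverse scope entails surface scope, but not conversely.
theorem inverse_imp_surface : inverse person loves → surface person loves :=
  fun ⟨y, hy, h⟩ x hx => ⟨y, hy, h x hx⟩
end ScopeSketch
```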
Note on Prop-valued .t #
With interpTy .t = Prop, the compositional semantics produces Prop-valued
truth conditions directly. Theorems verify these truth conditions at the Prop
level rather than via evalTree (which requires a blanket Decidable instance
for ∀ (p : Prop), Decidable p).
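The contrast can be illustrated in standalone Lean (sketch only): a truth condition quantifying over an infinite domain is provable at the Prop level directly, while turning it into a `Bool` would require a `Decidable` instance that only exists noncomputably via classical choice:

```lean
namespace DecideSketch
-- A Prop-valued truth condition over an infinite domain.
def cond : Prop := ∀ n : Nat, n + 0 = n

-- Verified at the Prop level: no Decidable instance needed.
theorem cond_holds : cond := fun _ => rfl

-- By contrast, evaluating `cond` to a Bool needs `Decidable cond`.
-- A blanket instance for every Prop exists only noncomputably:
noncomputable def blanket (p : Prop) : Decidable p :=
  Classical.propDecidable p
end DecideSketch
```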
Equations
- One or more equations did not get rendered due to their size.
- Semantics.Composition.QuantifierComposition.quantLex "sleeps" = some { ty := Core.Logic.Intensional.Ty.e ⇒ Core.Logic.Intensional.Ty.t, denot := Semantics.Montague.ToyLexicon.sleeps_sem }
- Semantics.Composition.QuantifierComposition.quantLex "laughs" = some { ty := Core.Logic.Intensional.Ty.e ⇒ Core.Logic.Intensional.Ty.t, denot := Semantics.Montague.ToyLexicon.laughs_sem }
- Semantics.Composition.QuantifierComposition.quantLex word = none
Instances For
QR tree: [S [DP every student] [1 [S t₁ sleeps]]]
"Every student sleeps" is false in the toy model (Mary is a student but doesn't sleep).
QR tree: [S [DP some student] [1 [S t₁ sleeps]]]
"Some student sleeps" is true (John is a student and sleeps).
Two QR structures yield two scope readings. The trees differ only in which quantifier occupies the higher position.
Surface scope (∀>∃):
[S [DP every person] [1 [S [DP some person] [2 [S t₁ [VP sees t₂]]]]]]
∀x[person(x) → ∃y[person(y) ∧ sees(x,y)]]
Inverse scope (∃>∀):
[S [DP some person] [2 [S [DP every person] [1 [S t₁ [VP sees t₂]]]]]]
∃y[person(y) ∧ ∀x[person(x) → sees(x,y)]]
Surface scope Prop.
Inverse scope Prop.
Surface scope is true in the toy model. (John sees Mary and Mary sees John — each person sees some person.)
Inverse scope is false. (No single person is seen by everyone — John doesn't see John, Mary doesn't see Mary.)
The two scope readings differ: proof of genuine ambiguity.
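The ambiguity argument can be checked in a self-contained Lean sketch of the toy model described above (a two-person domain where John sees Mary and Mary sees John; the names and definitions are illustrative, not the module's):

```lean
namespace AmbiguitySketch
inductive Person | john | mary
open Person

-- Toy seeing relation: John sees Mary, Mary sees John, nobody sees themself.
def sees : Person → Person → Prop
  | john, mary => True
  | mary, john => True
  | _,    _    => False

-- Surface scope holds: each person sees some person.
theorem surface_true : ∀ x : Person, ∃ y, sees x y
  | john => ⟨mary, True.intro⟩
  | mary => ⟨john, True.intro⟩

-- Inverse scope fails: no single person is seen by everyone,
-- since nobody sees themself.
theorem inverse_false : ¬ ∃ y : Person, ∀ x, sees x y :=
  fun ⟨y, h⟩ => match y, h with
    | john, h => h john
    | mary, h => h mary
end AmbiguitySketch
```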
The same QR tree, built as Tree Cat String and carrying real UD-grounded
categories on every node. interp ignores the categories and produces
truth conditions identical to the category-free Tree Unit String version.
QR tree with UD categories:
[S [DP [Det every] [N student]] [1 [S [t₁:NP] [VP sleeps]]]]
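The claim that interpretation is insensitive to category labels can be sketched abstractly (illustrative types, not this module's Tree or interp): any label-ignoring interpretation factors through an erasure map to Unit-labeled trees.

```lean
namespace CatSketch
-- A binary tree whose nodes carry a category label of type c
-- and whose leaves carry words of type w.
inductive Tree (c w : Type) where
  | leaf : c → w → Tree c w
  | node : c → Tree c w → Tree c w → Tree c w

-- Label erasure: a category-ignoring interpretation of Tree Cat String
-- factors through this map, so it agrees with the Tree Unit String version.
def eraseCats : Tree c w → Tree Unit w
  | .leaf _ x   => .leaf () x
  | .node _ l r => .node () (eraseCats l) (eraseCats r)
end CatSketch
```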