Mixed Quotation #
@cite{kirk-giannini-2024}
Formal apparatus for overt and covert mixed quotation, following Kirk-Giannini 2024 "Covert mixed quotation" (Semantics & Pragmatics 17).
Core Idea #
Mixed quotation is a compositional interaction between pure quotation and a covert mixed quotation operator 𝕄. A mixed-quoted expression simultaneously:
- Used: contributes its at-issue semantic value to composition
- Mentioned: peripherally entails that some salient speaker produced an utterance of that expression
The theory introduces four covert operators:
- 𝕄 (mixed quotation): at-issue ⟨*⟩ meaning + peripheral R attribution
- ⇑ (shunting): moves peripheral content to the at-issue dimension
- † (diagonalizer): shifts ⟨*⟩ to evaluate at the world of evaluation
- 𝔸 (appropriateness): modalizes peripheral content via □
These operators unify five phenomena: CI projection failure, c-monsters, metalinguistic negation, metalinguistic negotiation, and "in a sense" constructions.
Connection to Existing Infrastructure #
The TwoDimProp type from @cite{potts-2005} provides the at-issue ×
peripheral carrier. pureQuote (added to TwoDimProp) blocks CI
projection under quotation. The operators here compose over TwoDimProp.
The quotative interpretation function ⟨⟩, implemented as QuotInterp
below, is from @cite{shan-2010}. K-G writes (paper p.12, p.15):
"Drawing on Shan, I implement this proposal about the at-issue
contribution of mixed-quoted items using a purpose-built quotative
interpretation function ⟨⟩." The function ⟨⟩ is therefore Shan's; K-G's
contribution is the covert apparatus 𝕄, ⇑, †, 𝔸 layered on top.
Flat (TwoDimProp) vs. Layered (MQProp) Model #
Two carriers are exposed:
- Flat: TwoDimProp (at-issue × ci). Potts 2005's original bi-dimensional architecture. applyApprop REPLACES the ci dimension with appropriateness content: the original R-attribution from 𝕄 is overwritten when 𝔸 fires. Sufficient for at-issue truth-conditional predictions; cannot record that the utterance attribution survives embedding.
- Layered: MQProp (at-issue × R-content × appropContent). Refines the peripheral dimension into two distinct layers. applyMQ writes to R; applyApprop writes to □; shunt moves □ to at-issue; neg preserves both peripheral layers. Crown theorem full_chain_preserves_rContent: R survives the full 𝕄 → 𝔸 → ⇑ → ¬ chain.
When to use which. Flat for at-issue truth-conditional predictions where R is irrelevant (e.g. LoGuercio2025's CI work). Layered when the prediction is about R-survival or when both peripheral dimensions matter independently (K-G §3 metalinguistic negation, §1 strip-then-mix observation that 𝕄 introduces R-attribution while 𝔸 leaves it intact).
Bridge: MQProp.toFlat projects the layered model down by discarding
R-content and using □-content as the flat ci. flat_agreement_atIssue
and flat_loses_rContent quantify the agreement and the information
loss of the projection.
The quotative interpretation function ⟨*⟩.
Maps an expression q, a world of utterance w₀, and a speaker s
to the extension of q as uttered by s at w₀, evaluated at the
world of evaluation w₁.
⟦⟨*⟩⟧(q, w₀, s, w₁) = the extension at w₁ of the intension
contributed by an utterance of q by s at w₀.
Equations
- Semantics.Quotation.QuotInterp Expr Speaker W = (Expr → W → Speaker → W → Prop)
Instances For
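To make the signature concrete, here is a hypothetical toy instance of the QuotInterp type. The two-world model, the single expression 'planet', and Unit speakers are all illustrative assumptions, not part of the module; the sketch shows only how the world-of-utterance argument can shift an expression's extension.

```lean
-- Toy model (hypothetical): the extension of 'planet' depends on the
-- world of utterance, where the linguistic conventions are fixed.
inductive W where
  | actual
  | counterfactual

inductive Expr where
  | planet

-- An instance of QuotInterp Expr Unit W, i.e. Expr → W → Unit → W → Prop.
-- Read: ⟨planet⟩ as uttered at w₀ by s, evaluated at w₁.
def toyInterp : Expr → W → Unit → W → Prop
  | .planet, .actual, _, _ => False          -- actual conventions exclude Pluto
  | .planet, .counterfactual, _, _ => True   -- shifted conventions include it
```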
The utterance relation R.
R(s, u, q, w) holds iff speaker s produced utterance u of
expression q at world w. Introduced peripherally by 𝕄 and
resolved as a discourse anaphor.
Equations
- Semantics.Quotation.UttRel Speaker Utt Expr W = (Speaker → Utt → Expr → W → Prop)
Instances For
A mixed quotation context: the discourse-anaphoric parameters and interpretation functions needed to evaluate mixed quotation.
The speaker sx and utterance ux are free variables resolved by
discourse anaphora: they pick out the salient individual who produced
the quoted material and the utterance event in which they did so.
- interp : QuotInterp Expr Speaker W
Quotative interpretation function ⟨*⟩
- uttRel : UttRel Speaker Utt Expr W
Utterance relation R
- sx : Speaker
Anaphorically retrieved speaker
- ux : Utt
Anaphorically retrieved utterance
- wc : W
World of context
Instances For
Apply the mixed quotation operator 𝕄 to an expression.
Returns a TwoDimProp with:
- at-issue: ⟨q⟩(wc)(sx), the extension as used by the speaker at the world of context
- peripheral: R(sx, ux, q), the speaker produced this utterance
This is the core of the theory: mixed quotation arises compositionally from these two semantic contributions of 𝕄.
Equations
Instances For
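Since the Equations block above is elided, here is a self-contained sketch of what applyMQ plausibly computes, using local stand-ins for TwoDimProp and MQContext. Field names follow the documentation above, but the definitions themselves are illustrative assumptions.

```lean
structure TwoDimProp (W : Type) where
  atIssue : W → Prop  -- at-issue dimension
  ci      : W → Prop  -- peripheral (CI) dimension

structure MQContext (Expr Speaker Utt W : Type) where
  interp : Expr → W → Speaker → W → Prop   -- quotative interpretation ⟨*⟩
  uttRel : Speaker → Utt → Expr → W → Prop -- utterance relation R
  sx : Speaker                             -- anaphoric speaker
  ux : Utt                                 -- anaphoric utterance
  wc : W                                   -- world of context

-- 𝕄: at-issue is ⟨q⟩(wc)(sx); peripheral is R(sx, ux, q)
def applyMQ {Expr Speaker Utt W : Type}
    (c : MQContext Expr Speaker Utt W) (q : Expr) : TwoDimProp W where
  atIssue := fun w => c.interp q c.wc c.sx w
  ci      := fun w => c.uttRel c.sx c.ux q w
```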
The shunting operator ⇑: moves peripheral content to the at-issue dimension by conjoining it with at-issue content.
After shunting, the at-issue content becomes p.atIssue ∧ p.ci
and the peripheral content becomes trivial.
This operator is independently motivated (@cite{potts-2007},
McCready 2010) and is what allows peripheral content from mixed
quotation to interact with higher at-issue operators like negation
and conditionals. In the Writer monad architecture for CI effects
(see Theories.Semantics.Composition.Effects.twoDimToWriter),
shunting corresponds to running the Writer by folding the CI log
into the value via conjunction (see runCIWriter and
runCIWriter_twoDim in Theories.Semantics.Composition.Effects).
Equations
- Semantics.Quotation.shunt p = { atIssue := fun (w : W) => p.atIssue w ∧ p.ci w, ci := fun (x : W) => True }
Instances For
Shunting conjoins both dimensions into at-issue.
Shunting trivializes peripheral content.
Shunting is idempotent on the at-issue dimension: once peripheral content has been consumed, shunting again has no effect.
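The idempotence claim can be checked directly against the displayed equation for shunt. A self-contained sketch, with a local TwoDimProp stand-in:

```lean
structure TwoDimProp (W : Type) where
  atIssue : W → Prop
  ci      : W → Prop

def shunt {W : Type} (p : TwoDimProp W) : TwoDimProp W where
  atIssue := fun w => p.atIssue w ∧ p.ci w
  ci      := fun _ => True

-- Shunting twice adds only a trivial `∧ True`: the at-issue dimension
-- is unchanged up to logical equivalence.
theorem shunt_shunt_atIssue {W : Type} (p : TwoDimProp W) (w : W) :
    (shunt (shunt p)).atIssue w ↔ (shunt p).atIssue w :=
  ⟨fun ⟨h, _⟩ => h, fun h => ⟨h, True.intro⟩⟩
```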
The diagonalizer †: shifts the quotative interpretation so that at
the world of evaluation w, the expression's meaning is what it
would be as uttered by the speaker at w (rather than at wc).
This captures c-monstrous behavior without positing actual context
monsters. "If Pluto were a planet" accesses the meaning 'planet'
would have if conventions were different, because the diagonalizer
evaluates ⟨planet⟩(w)(s) at the counterfactual world w where
conventions still classify Pluto as a planet.
Formally: †(f) = f* where f*(q)(w) = ⟨q⟩(w)(s)(w): the world
of utterance and world of evaluation collapse.
Equations
- Semantics.Quotation.diagonalize interp s q w = interp q w s w
Instances For
Diagonalization collapses world of utterance and evaluation.
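A toy check of the collapse, reusing a hypothetical two-world model (the model, names, and Unit-typed expressions are illustrative, not the module's):

```lean
inductive W where
  | actual
  | counterfactual

-- Hypothetical interpretation: the quoted material is true exactly when
-- uttered under counterfactual conventions.
def toyInterp : Unit → W → Unit → W → Prop
  | _, .actual, _, _ => False
  | _, .counterfactual, _, _ => True

def diagonalize (interp : Unit → W → Unit → W → Prop)
    (s : Unit) (q : Unit) (w : W) : Prop :=
  interp q w s w

-- At the counterfactual world, the diagonalized reading is true: the world
-- of utterance has been collapsed into the world of evaluation.
example : diagonalize toyInterp () () W.counterfactual := True.intro
```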
K-G's diagonalizer † with the ∃-over-speakers.
The bare diagonalize above is parameterized on a single speaker: it
captures only the world-collapse half of K-G's footnote 22 definition
(paper p.26):
† ≔ λf. ∃s : f = λq. ⟨⟩(q)(w_c)(s)
The ∃s quantifies over speakers/communities producing the variant
function f* (the world-of-evaluation form f*(s) := λq. ⟨*⟩(q)(w)(s)).
This existential is what makes c-monsters work: "Pluto could have been a
planet" is true when there EXISTS a speaker whose use of 'planet'
includes Pluto under the diagonalized reading, not just when the actual
speaker's use does.
The bare diagonalize is the per-speaker witness; diagonalizeKG adds
the existential. Bridge: diagonalizeKG_iff_exists_diagonalize.
Equations
- Semantics.Quotation.diagonalizeKG interp q w = ∃ (s : Speaker), interp q w s w
Instances For
diagonalizeKG is the existential closure of diagonalize over speakers.
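The bridge lemma is definitional once both operators are in view. A minimal self-contained sketch (local stand-ins for the module's definitions):

```lean
def diagonalize {Expr Speaker W : Type}
    (interp : Expr → W → Speaker → W → Prop)
    (s : Speaker) (q : Expr) (w : W) : Prop :=
  interp q w s w

def diagonalizeKG {Expr Speaker W : Type}
    (interp : Expr → W → Speaker → W → Prop) (q : Expr) (w : W) : Prop :=
  ∃ s : Speaker, interp q w s w

-- diagonalizeKG is the existential closure of the per-speaker witness.
theorem diagonalizeKG_iff_exists_diagonalize {Expr Speaker W : Type}
    (interp : Expr → W → Speaker → W → Prop) (q : Expr) (w : W) :
    diagonalizeKG interp q w ↔ ∃ s, diagonalize interp s q w :=
  Iff.rfl
```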
K-G's footnote 22 well-definedness condition.
For f* to be well-defined as a function (rather than a relation), no two
speakers may agree on extensions of all expressions at the world of
context wc while disagreeing on extensions at some other world.
Paper p.26 footnote: "I assume here that there are no two speakers or
linguistic communities which assign the same extensions to all expressions
in w_c but assign different extensions to some expressions in other
worlds."
This is a global structural property of interp: it's about how speakers
relate across worlds. Without it, the existential in diagonalizeKG
overgenerates (any two speakers can act as witnesses for incompatible
diagonal contents at the same world). K-G accepts this assumption to keep
the semantics deterministic; it is independently violable.
Equations
- interp.fn22Wellformed wc = ∀ (s s' : Speaker), (∀ (q : Expr) (w : W), interp q wc s w ↔ interp q wc s' w) → ∀ (q : Expr) (w : W), interp q w s w ↔ interp q w s' w
Instances For
Under fn22-wellformedness, speakers who agree on extensions at wc
across the entire vocabulary agree on diagonal extensions everywhere.
This is the substantive use of fn22Wellformed: it lifts wc-extensional
agreement to global agreement, which is what makes the existential in
diagonalizeKG deterministic.
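The lifting property described above follows in one line from the definition. A self-contained sketch, with fn22Wellformed as a plain definition (the module attaches it to interp, but the content is the same):

```lean
-- No two speakers agree on all extensions at wc while disagreeing elsewhere.
def fn22Wellformed {Expr Speaker W : Type}
    (interp : Expr → W → Speaker → W → Prop) (wc : W) : Prop :=
  ∀ s s' : Speaker,
    (∀ (q : Expr) (w : W), interp q wc s w ↔ interp q wc s' w) →
    ∀ (q : Expr) (w : W), interp q w s w ↔ interp q w s' w

-- wc-extensional agreement lifts to diagonal agreement everywhere,
-- which is what keeps the existential in diagonalizeKG deterministic.
theorem diag_agreement_of_fn22 {Expr Speaker W : Type}
    (interp : Expr → W → Speaker → W → Prop) (wc : W)
    (h : fn22Wellformed interp wc) (s s' : Speaker)
    (hwc : ∀ (q : Expr) (w : W), interp q wc s w ↔ interp q wc s' w)
    (q : Expr) (w : W) : interp q w s w ↔ interp q w s' w :=
  h s s' hwc q w
```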
An appropriateness standard: given a speaker and expression at a world, whether it is or would be appropriate for that speaker to use that expression.
This is the semantic content of the appropriateness modal □ in Kirk-Giannini's system. The modal quantifies over an accessibility relation, but for finite models we represent the result directly.
Equations
- Semantics.Quotation.AppropStandard Speaker Expr W = (Speaker → Expr → W → Prop)
Instances For
The appropriateness operator 𝔸: replaces the peripheral content of a mixed-quoted expression with appropriateness content.
In the paper's compositional chain, 𝔸 operates on 𝕄's output: 𝕄(q) → 𝔸 → ⇑ → ¬. The at-issue content from 𝕄 passes through unchanged; only the peripheral dimension is replaced with the proposition that the verbatim use of the expression is or would be appropriate. This is the key ingredient for metalinguistic negation: when ⇑ shunts this appropriateness content to at-issue and negation scopes over the result, we get ¬(p ∧ appropriate-to-say-q).
Equations
- Semantics.Quotation.applyApprop approp sx q p = { atIssue := p.atIssue, ci := approp sx q }
Instances For
𝔸 preserves at-issue content: it only replaces the peripheral dimension.
𝔸 replaces the peripheral dimension with appropriateness content.
Metalinguistic negation truth conditions.
Negating a shunted, appropriateness-enhanced mixed quotation yields:
¬(at-issue-meaning ∧ appropriate-to-use-expression).
This is the core prediction for metalinguistic negation: "I didn't manage to trap two MONGEESE" is true iff it's not the case that (I managed to trap two mongooses AND it's appropriate to call them 'mongeese'). Since the second conjunct is false (it's not appropriate), the negation is true even though I did manage to trap two mongooses.
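The propositional core of this prediction is just that a false conjunct makes the negated conjunction true. A minimal sketch with hypothetical proposition names:

```lean
-- trapped: I managed to trap two mongooses (true).
-- appropriate: it is appropriate to call them 'mongeese' (false).
-- The metalinguistic negation ¬(trapped ∧ appropriate) then holds,
-- even though the at-issue conjunct is true.
example (trapped appropriate : Prop)
    (_htrapped : trapped) (hinapt : ¬ appropriate) :
    ¬ (trapped ∧ appropriate) :=
  fun h => hinapt h.2
```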
The affirmed conjunct in metalinguistic negation.
In "I didn't trap two MONGEESE; I trapped two MONGOOSES", the second clause entails the at-issue content of the first. So the negation is understood as targeting the appropriateness conjunct: it's not appropriate to use 'mongeese'.
Pure quotation composes with 𝕄: the expression is first purely quoted (stripping its original CI content), then 𝕄 re-introduces peripheral content attributing the utterance to the speaker.
This explains why CI items (expressives, slurs, NRRCs) don't project out of indirect speech reports: the material is first pure-quoted (stripping CIs) before being mixed-quoted (adding speaker attribution).
Two-Layer Peripheral Content #
The flat TwoDimProp model has a single peripheral dimension, which
forces 𝔸 to replace the R-content with □-content. This breaks
Writer monotonicity: the log is overwritten rather than appended.
The non-monotonicity is an artifact of collapsing two genuinely distinct peripheral layers into one field:
- R-peripheral: utterance attribution R(s,u,q). Always projects to the discourse root. Never shunted. Never targeted by negation. Resolved as a discourse anaphor.
- □-peripheral: appropriateness □(s,q). Can be shunted by ⇑ into the at-issue dimension. Targetable by negation after shunting.
In the two-layer model, 𝕄 writes to the R-layer and 𝔸 writes to the □-layer. No replacement: both layers are independently append-only. The key structural results:
- Layer preservation: each operator preserves the layer it doesn't target (R persists through 𝔸, ⇑, ¬)
- Shunting conservation: ⇑ is information-conservative; total content across all layers is invariant under shunting
- Flat agreement: the flat TwoDimProp model is a projection of the two-layer model that agrees on at-issue content
- Per-layer monotonicity: each layer satisfies Writer-style append-only behavior
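A self-contained sketch of the two-layer carrier and the selectivity of shunting. Names mirror MQProp and its operators as documented above, but the definitions here are illustrative stand-ins:

```lean
structure MQProp (W : Type) where
  atIssue       : W → Prop
  rContent      : W → Prop  -- R-layer: utterance attribution
  appropContent : W → Prop  -- □-layer: appropriateness

-- Selective shunting: only the □-layer is folded into at-issue;
-- the R-layer is untouched.
def MQProp.shunt {W : Type} (p : MQProp W) : MQProp W :=
  { p with
    atIssue := fun w => p.atIssue w ∧ p.appropContent w,
    appropContent := fun _ => True }

-- R-layer preservation under shunting is definitional.
example {W : Type} (p : MQProp W) : p.shunt.rContent = p.rContent := rfl
```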
𝕄 on the two-layer type: at-issue = ⟨*⟩(q), R-layer = R(s,u,q), □-layer trivial (no appropriateness content yet).
Equations
Instances For
𝔸 on the two-layer type: writes to the □-layer only. R-content and at-issue are preserved; no log replacement.
Equations
- Semantics.Quotation.MQProp.applyApprop approp sx q p = { atIssue := p.atIssue, rContent := p.rContent, appropContent := fun (w : W) => p.appropContent w ∧ approp sx q w }
Instances For
⇑ on the two-layer type: conjoins □-content into at-issue. R-content is preserved; shunting is selective.
Equations
Instances For
Negation on the two-layer type: negates at-issue only. Both peripheral layers are preserved.
Equations
Instances For
𝔸 preserves R-content.
𝔸 preserves at-issue content.
𝔸 appends appropriateness to □-content (no replacement).
⇑ conjoins □-content into at-issue.
⇑ trivializes □-content (the layer is "drained").
¬ preserves □-content.
¬ preserves both peripheral dimensions (combined ergonomic lemma
bundling neg_preserves_rContent and neg_preserves_appropContent).
𝕄 leaves the □-layer trivial: appropriateness has not been written yet.
R-content persists through the full metalinguistic negation chain 𝕄 → 𝔸 → ⇑ → ¬. The utterance attribution is never lost.
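The preservation claim can be verified definitionally in a self-contained sketch. The layered operators below are local stand-ins (applyApprop here takes the appropriateness proposition directly, an illustrative simplification):

```lean
structure MQProp (W : Type) where
  atIssue       : W → Prop
  rContent      : W → Prop
  appropContent : W → Prop

def applyApprop {W : Type} (a : W → Prop) (p : MQProp W) : MQProp W :=
  { p with appropContent := fun w => p.appropContent w ∧ a w }

def shunt {W : Type} (p : MQProp W) : MQProp W :=
  { p with
    atIssue := fun w => p.atIssue w ∧ p.appropContent w,
    appropContent := fun _ => True }

def neg {W : Type} (p : MQProp W) : MQProp W :=
  { p with atIssue := fun w => ¬ p.atIssue w }

-- Every step after 𝕄 leaves the R-layer exactly as it found it.
theorem chain_preserves_rContent {W : Type} (a : W → Prop) (p : MQProp W) :
    (neg (shunt (applyApprop a p))).rContent = p.rContent :=
  rfl
```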
Metalinguistic negation truth conditions in the layered model.
The MQProp-side counterpart of the flat-model
Semantics.Quotation.metalinguistic_neg_truth_conditions
(line 219 above). At-issue content of the full chain is
¬(at-issue ∧ appropriate), identical to the flat model, but
the layered model ALSO retains the R-attribution
(per full_chain_preserves_rContent), which the flat model
discards.
Shunting conservation. The total information content, the conjunction of all three layers, is invariant under ⇑.
Shunting doesn't destroy □-content; it relocates it from the □-layer to the at-issue layer. The □-layer becomes trivial, but the information is preserved in the at-issue conjunction. No content is created or destroyed; it is only moved.
This is the crown theorem of the two-layer analysis: it shows that the apparent non-monotonicity of the flat model is an illusion. When the layers are properly separated, every operation preserves total information (negation inverts at-issue, but that's intentional semantic content, not information loss).
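At the propositional level the conservation claim is a rebracketing. A minimal sketch, where a, r, c stand for the at-issue, R-layer, and □-layer values at a fixed world:

```lean
-- Before shunting the three layers carry (a, r, c); after, (a ∧ c, r, True).
-- Total content, the conjunction of all layers, is unchanged.
example (a r c : Prop) : ((a ∧ c) ∧ r ∧ True) ↔ (a ∧ r ∧ c) :=
  ⟨fun ⟨⟨ha, hc⟩, hr, _⟩ => ⟨ha, hr, hc⟩,
   fun ⟨ha, hr, hc⟩ => ⟨⟨ha, hc⟩, hr, True.intro⟩⟩
```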
Project the two-layer model to the flat TwoDimProp by discarding
R-content and using □-content as the CI dimension.
This projection is exact after 𝔸: the flat model's "replaced" CI is the □-layer of the two-layer model. The R-content, discarded here, is the information that the flat model loses.
Equations
- p.toFlat = { atIssue := p.atIssue, ci := p.appropContent }
Instances For
The flat projection uses □-content as the CI dimension (R-content discarded).
The full chain on the two-layer model agrees with the flat model on at-issue content. The two models diverge only in what happens to R-content: the layered model preserves it, the flat model discards it.
What the flat model loses: R-content is present in the layered model but absent in the flat projection.
After metalinguistic negation ("I didn't trap two MONGEESE"), the layered model records that the utterance 'mongeese' was produced (R is true). The flat model has no trace of this: the R-content was overwritten by □-content when 𝔸 was applied.
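A self-contained sketch of the projection and of the fact that it retains no trace of R (local stand-ins for both carriers):

```lean
structure MQProp (W : Type) where
  atIssue       : W → Prop
  rContent      : W → Prop
  appropContent : W → Prop

structure TwoDimProp (W : Type) where
  atIssue : W → Prop
  ci      : W → Prop

-- Project down: keep at-issue, use the □-layer as the flat ci,
-- and discard the R-layer entirely.
def MQProp.toFlat {W : Type} (p : MQProp W) : TwoDimProp W :=
  { atIssue := p.atIssue, ci := p.appropContent }

-- The flat ci is the □-layer; nothing in the projection mentions rContent.
example {W : Type} (p : MQProp W) : p.toFlat.ci = p.appropContent := rfl
```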