Ginzburg (2012): The Interactive Stance #
@cite{ginzburg-2012}
Canonical formalization of the KOS framework from The Interactive Stance: Meaning for Conversation (OUP 2012).
This study consumes the KOS substrate at Theories/Dialogue/KOS/,
which was rebuilt to faithfully match the book's Ch. 6 final shape
(ex. 43 p. 175): DGB takes a Cont parameter, pending stores
LocProp Cont, qud stores InfoStruc QContent Cont, GenreType
carries qnud + anticipatedMoves, Grounding.lean provides the
multi-stage CCURs pipeline (§6.6–6.7), SelfRepair.lean provides the
§8.2 MaxPending operations, NSUTaxonomy.lean provides the 16-class
Sluice-split-faithful Table 7.4 taxonomy.
Sections #
- §1. The Turn-Taking Puzzle (Ch. 2) — KOS's headline argument
- §2. KOS architecture overview (Ch. 4)
- §3. Inquiry cycle, TTR-typed (§4.4.5, via Austinian.AustinianTIS)
- §4. Grounding asymmetry: per-DGB stance (Ch. 4)
- §5. Genre as TTR record (§4.6)
- §6. Non-sentential utterance resolution (Ch. 5)
- §7. NSU taxonomy with Sluice split (Ch. 7, Tables 7.3 + 7.4)
- §8. Clarification request taxonomy (§6.2.1)
- §9. Grounding via the §6.6–6.7 CCURs pipeline
- §10. Self-repair via MaxPending (§8.2)
- §11. End-to-end pipeline: DialogueSign → LocProp → CCURs → DGB
- §12. Cross-framework contrasts vs pre-2012 siblings
Chronology #
@cite{ginzburg-2012} > all references in §12. Cross-framework contrasts with later work (@cite{krifka-2015} commitment-spaces; @cite{anderson-2021} distributional CG) live inside those studies per CLAUDE.md's "no bridge files" + chronological-dependency rule.
What this file does NOT cover #
- Multilogue (Ch. 8 §8.1) — separate study to follow
- Ch. 9 "Interactive Stance" meta-theory — substantive philosophical content
- Quantification and anaphora in dialogue (§8.5)
- The book's substantial discussion of crosslinguistic data (Hebrew sluicing, German short answers, Bielefeld task corpus)
These are separable substudies; the present file covers the framework core that downstream work consumes.
The Turn-Taking Puzzle (Ch. 2 ex. 22–23 p. 23) #
@cite{ginzburg-2012} Ch. 2 sets up the book's headline argument: Equal Access to Context fails (ex. 5 p. 13, ex. 20b p. 21). Two participants exposed to the same utterance do not have access to the same contextual resources for interpreting subsequent fragments — because their roles in the turn structure differ.
The puzzle (ex. 22–23 p. 23, the Why-parakeet example): A says "I own a parakeet"; B says "Why?". The interpretation of B's "Why?" depends crucially on who is keeping the turn:
- If A keeps the turn (B's "Why?" is a back-channel under A's continuing contribution): "Why?" means "Why own a parakeet?" — about A's content
- If B claims the turn: "Why?" can mean "Why are you telling me this?" — about A's act of asserting
Same fragment, same antecedent utterance, different readings — because turn structure is part of the context. This refutes Equal Access to Context: B (the responder) and any third overhearer would have access to different resolution-licensing contexts for the same fragment.
Why this matters architecturally: the TTP forces the grammar to have per-participant access to turn structure (who-spoke-last, who-holds-the-floor). Standard pragmatic-enrichment accounts treat context as a single shared object — exactly what the TTP refutes. KOS's per-participant DGB is the architectural response: each participant's DGB tracks the turn from their own perspective. The QUD-determined-NSU claim of Ch. 5 (formalized in §6 below) is then a consequence of this per-DGB access, not the central TTP claim itself.
A weak structural witness for the TTP: the same fragment string ("Why?") paired with the SAME antecedent utterance ("I own a parakeet") yields TWO distinct resolutions depending on turn-keeping. This is weaker than Ginzburg's full argument (which requires the per-participant DGB substrate), but is enough to refute "fragment + immediate antecedent suffice for resolution" — the missing ingredient is turn structure.
The deeper refutation of Equal Access to Context lives in §4 below
(grounding_asymmetry), which exhibits two participants' DGBs
diverging after the same dialogue trace.
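The weak structural witness can be stated in a few lines of self-contained Lean. (TurnKeeper and resolveWhy are illustrative names local to this sketch, not the substrate's API.)

```lean
/-- Who holds the turn after A's "I own a parakeet". -/
inductive TurnKeeper where
  | initiator  -- A keeps the turn; B's "Why?" is a back-channel
  | responder  -- B claims the turn

/-- Resolution of the same fragment "Why?" varies with turn-keeping. -/
def resolveWhy : TurnKeeper → String
  | .initiator => "Why own a parakeet?"          -- about A's content
  | .responder => "Why are you telling me this?" -- about A's act of asserting

/-- Same fragment, same antecedent utterance, two distinct resolutions. -/
example : resolveWhy .initiator ≠ resolveWhy .responder := by decide
```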
KOS Architecture Overview #
@cite{ginzburg-2012} Ch. 4 introduces the central data structures:
- TIS (Total Information State, ex. 93 p. 107) = public DGB + private state
- DGB (Dialogue Gameboard, ex. 100/113) — each participant has their own:
  - facts : List Fact — accumulated mutual commitments
  - moves : List IllocMove — history of speech acts
  - pending : List LocProp — ungrounded utterances awaiting integration
  - qud : List InfoStruc — open questions paired with FECs
- Conversational rules (Ch. 4):
  ask, assertRule/assertWithQUD, accept, qspec, check, confirm, qcoord, greet — each of type TIS → TIS
The substrate provides all of this in Theories/Dialogue/KOS/. This
section just imports the conventions; subsequent sections exercise them.
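The four DGB fields can be read off as a parametric Lean record. This is a sketch shaped after ex. 100/113 with the component types left abstract; the substrate's concrete definition in Theories/Dialogue/KOS/ may differ.

```lean
/-- Sketch of the DGB record shape (ex. 100/113). The four component
    types are abstract here; the substrate fixes them concretely. -/
structure DGBSketch (Fact IllocMove LocProp InfoStruc : Type) where
  facts   : List Fact       -- accumulated mutual commitments
  moves   : List IllocMove  -- history of speech acts
  pending : List LocProp    -- ungrounded utterances awaiting integration
  qud     : List InfoStruc  -- open questions paired with FECs
```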
TTR-Typed Inquiry Cycle (§4.4.5) #
@cite{ginzburg-2012} Ch. 4 §4.4.5 walks through the canonical
Ask → Assert → Accept dialogue. We exercise the TTR-typed substrate
from KOS/Austinian.lean, which instantiates TIS with
BCheckableAustinian propositions and TTRQuestionB questions over
a Weather situation type.
This is the answer to "are TTR-typed examples reachable?" — yes, the substrate has been built to support them.
KOS's inquiry cycle works at TTR-typed content level: asserting "it is raining" resolves "is it raining?".
The same inquiry cycle in string-typed form (legacy worked example).
Grounding Asymmetry #
@cite{ginzburg-2012} Ch. 4: each participant maintains their own DGB. After A asserts p, A's DGB has p in FACTS. B's DGB does NOT have p in FACTS until B explicitly accepts. This models the difference between assertion (one-sided) and mutual acceptance (synchronization).
This is KOS's most architecturally distinctive claim: there is no single "common ground"; there are coupled-but-distinct private commitment records, synchronized only via Accept. The Stalnaker contrast (§12) makes this divergence Lean-checkable.
Speaker's TIS after asserting "It's raining".
Addressee's TIS before accepting — empty FACTS.
After speaker asserts, only the speaker's DGB has the fact.
Addressee's TIS after explicit Accept — now the fact is in addressee's FACTS.
Acceptance synchronizes: addressee's FACTS gains the fact.
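The asymmetry is small enough to state in self-contained Lean. (MiniDGB, assertFact, and acceptFact are local stand-ins for the substrate's TIS operations.)

```lean
structure MiniDGB where
  facts : List String := []

/-- One-sided: assertion updates only the speaker's DGB. -/
def assertFact (p : String) (spkr : MiniDGB) : MiniDGB :=
  { spkr with facts := p :: spkr.facts }

/-- Accept synchronizes: the addressee's DGB gains the fact. -/
def acceptFact (p : String) (addr : MiniDGB) : MiniDGB :=
  { addr with facts := p :: addr.facts }

/-- After A asserts, speaker and addressee DGBs diverge. -/
example : (assertFact "rain" {}).facts ≠ ({} : MiniDGB).facts := by decide

/-- After B accepts, the two FACTS fields agree again. -/
example : (acceptFact "rain" {}).facts = (assertFact "rain" {}).facts := rfl
```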
Genre Constraints on Moves (§4.6) #
@cite{ginzburg-2012} §4.6: genres are TTR records (ex. 88 p. 104) that classify conversations. The relevance of a move depends on whether the resulting DGB outcome can be anticipated to be fulfilled by the genre's constraints (eq. 90 p. 105).
The substrate GenreType carries the qnud (anticipated questions) and
anticipatedMoves fields from ex. 88. We exercise the
qudConstraint-based variant here for a thin worked example.
A bakery-counter genre: only questions about ordering are "anticipated" by the genre's implicit script.
An unrestricted casual-chat genre.
`casualGenre = { name := "casual" }`
Bakery genre rejects weather small-talk; casual genre accepts it.
The qnud-based variant agrees with the qudConstraint variant on the bakery example: weather questions aren't in the anticipated set.
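A minimal qnud-style sketch of the anticipation check (local names; the substrate's GenreType carries richer fields from ex. 88):

```lean
structure MiniGenre where
  name : String
  qnud : List String := []   -- anticipated questions; [] = unrestricted

/-- A question is relevant if the genre is unrestricted or anticipates it. -/
def anticipates (g : MiniGenre) (q : String) : Bool :=
  g.qnud.isEmpty || g.qnud.contains q

def bakerySketch : MiniGenre :=
  { name := "bakery", qnud := ["what would you like to order?"] }

def casualSketch : MiniGenre := { name := "casual" }

-- Bakery rejects weather small-talk; casual accepts it.
example : anticipates bakerySketch "is it raining?" = false := by decide
example : anticipates casualSketch "is it raining?" = true  := by decide
```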
Non-Sentential Utterance Resolution #
@cite{ginzburg-2012} Ch. 5: bare fragments ("Paris.") are resolved via the QUD. The InfoStruc shape of QUD entries (a question + its focus-establishing constituents) gives the resolution mechanism: a fragment fills the FEC slot of MaxQUD.
We use Phenomena.Ellipsis.FragmentAnswers.FragmentDatum directly
(theory-neutral data) rather than re-stipulating a parallel NSUDatum
type — they encode the same information.
All basic fragment answers have non-empty interpretations (structural well-formedness).
NSU Classification #
@cite{ginzburg-2012} Ch. 7 §7.2 (Tables 7.3–7.4, pp. 221–222) — see @cite{fernandez-2006} for the BNC subcorpus study.
The 16-class taxonomy + 4 functional groupings live in the substrate
(KOS/NSUTaxonomy.lean) — they are framework infrastructure usable by
any KOS-aware study. This section just reuses them. The freqTable
single-source-of-truth + frequency_coherent drift sentry replace the
old aggregate-count theorems.
Substantive correction from earlier formaliser version: the original Ginzburg2012.lean lumped all 24 sluices under "metacommunicative" — a formaliser-imposed cut that didn't match Table 7.4's actual partition (13 Reprise Sluice → metacommunicative, 11 Direct Sluice → extension). The substrate now reflects Table 7.4's split faithfully.
CR Form & Reading Taxonomy #
@cite{ginzburg-2012} Ch. 6 §6.2.1 (pp. 148–149): 8 CR forms × 4 readings. Form variation is morphological/intonational; reading variation is in what the CR is asking about (clausal confirmation, intended content, phonetic repetition, corrective alternative).
Substrate has RFReading (4 readings) at KOS/RepriseContent.lean.
This file declares CRForm locally; both could move to substrate when a
second consumer materializes.
The 8 CR forms from §6.2.1 (p. 148 ex. 1).
- wot : CRForm
"Wot?" / "What?" — most general CR
- literalReprise : CRForm
Literal reprise: exact echo with rising intonation ("Bo?")
- whSubstituted : CRForm
Wh-substituted reprise: echo with wh-word ("Bo did WHAT?")
- repriseSluice : CRForm
Reprise sluice: bare wh-word after antecedent ("Who?")
- repriseFragment : CRForm
Reprise fragment: bare constituent echo ("Bo?")
- gap : CRForm
Gap reprise ("Did __ leave?")
- fillerCR : CRForm
Filler reprise ("Huh?")
- explicit : CRForm
Explicit metalinguistic CR ("What do you mean 'finagle'?")
All 8 CR forms enumerated.
The 4 CR readings. We re-export the substrate's RFReading from
KOS/RepriseContent.lean rather than re-stipulating a parallel
CRReading enum.
Grounding Protocol via CCURs Pipeline #
@cite{ginzburg-2012} §6.6–6.7: the integration protocol for incoming LocProps:
- Pending Update (ex. 45 p. 176): the LocProp enters Pending
- Contextual Instantiation (§6.5; Ginzburg's term is "contextual parameter assignments"): try to discharge dgb-params by binding indices to witnesses from addressee beliefs
- CCURs (p. 167 footnote 30 + §6.6 ex. 73–86 pp. 192–198): if instantiation fails, apply a Clarification Context Update Rule — Parameter Identification (the default), Confirm, or Repeat — pushing a CR question on QUD
The substrate KOS/Grounding.lean::integrateLocPropCCUR implements
this pipeline. This section exercises it on a worked example.
"Did Jo leave?" as a LocProp with one cparam for "Jo".
A belief base where the addressee knows who Jo is.
`knowsJo = [("jo_ref", "Jo")]`
An empty belief base — the addressee has no idea who Jo is.
CR question constructor for unresolved cparams.
`crQuestion p = "Who do you mean by '" ++ p.index ++ "'?"`
With knowing-Jo beliefs, the LocProp grounds via CCURs (no CR needed).
With empty beliefs, the LocProp triggers Parameter Identification CR.
The CCUR pipeline strictly extends the one-shot stub: when cparams are empty (trivial case), both yield the same grounded result.
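The pipeline's control flow can be sketched self-containedly. (MiniLocProp, Outcome, and integrateSketch are local names; the substrate's integrateLocPropCCUR is richer, also covering the Confirm and Repeat CCURs.)

```lean
structure MiniLocProp where
  cont    : String
  cparams : List String := []   -- dgb-params awaiting witnesses

inductive Outcome where
  | grounded (fact : String)      -- all cparams instantiated
  | crified  (question : String)  -- CCUR fired: CR pushed on QUD

/-- Try to instantiate every cparam from the addressee's beliefs; on the
    first failure, fire Parameter Identification (the default CCUR). -/
def integrateSketch (beliefs : List (String × String)) (lp : MiniLocProp) : Outcome :=
  match lp.cparams.find? (fun p => (beliefs.lookup p).isNone) with
  | none   => .grounded lp.cont
  | some p => .crified s!"Who do you mean by '{p}'?"

-- Knowing who Jo is, "Did Jo leave?" grounds without a CR.
example :
    integrateSketch [("jo_ref", "Jo")] { cont := "leave(jo)", cparams := ["jo_ref"] }
      = .grounded "leave(jo)" := rfl
```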
Self-Repair via MaxPending #
@cite{ginzburg-2012} §8.2 (pp. 282–290) "Unifying Self- and Other-Correction".
Per §6.3 footnote 31 p. 168 + §8.2: MaxPending is the head of Pending,
not a separate field. The substrate KOS/SelfRepair.lean provides:
- pushMaxPending lp — start a new in-construction LocProp at the head of Pending
- replaceMaxPending lp' — backwards-looking appropriateness repair (Ginzburg ex. 31 p. 287): swap the head of Pending with a corrected form
- clearMaxPending — drop the head (abandon current attempt)
- maxPending accessor — pending.head?
This section exercises a self-repair trace: speaker says "I saw the —" (an in-construction LocProp), realizes the wrong word, replaces with "I saw the manager" (the corrected form).
The mid-utterance LocProp — speaker has begun "I saw the".
`midRepairLP = { phon := "I saw the", cat := "S", cont := "see(spkr, ?)" }`
The corrected LocProp once the speaker chooses "manager".
`correctedManagerLP = { phon := "I saw the manager", cat := "S", cont := "see(spkr, manager)" }`
Initial TIS, no repair in flight.
Push the in-construction LocProp onto Pending — it becomes MaxPending.
Apply backwards-looking appropriateness repair (ex. 31 p. 287): swap the in-flight MaxPending with the corrected LocProp.
After repair, MaxPending IS the corrected form.
After repair, the original midRepair LocProp is no longer at the head.
Substrate witness: the abandonment path drops MaxPending.
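The three operations have one-line list implementations; this sketch replays the manager trace with local definitions. (The substrate's versions in KOS/SelfRepair.lean operate on full TIS values; the empty-Pending behavior below is this sketch's choice, not the book's.)

```lean
def pushMaxPending (lp : String) (pending : List String) : List String :=
  lp :: pending

/-- Backwards-looking appropriateness repair: swap the head of Pending. -/
def replaceMaxPending (lp' : String) : List String → List String
  | []      => [lp']       -- degenerate case: nothing in flight
  | _ :: tl => lp' :: tl

/-- Abandon the current attempt: drop MaxPending. -/
def clearMaxPending : List String → List String
  | []      => []
  | _ :: tl => tl

/-- Push "I saw the", repair to "I saw the manager": the corrected form
    is now MaxPending and the in-construction form is gone. -/
example :
    replaceMaxPending "I saw the manager" (pushMaxPending "I saw the" [])
      = ["I saw the manager"] := rfl
```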
End-to-End: DialogueSign → LocProp → CCURs → DGB #
The full @cite{ginzburg-2012}-architecture pipeline:
- Lexical entry as DialogueSign Cont (Ch. 5)
- Project to LocProp Cont via toLocProp (Ch. 6)
- Integration via integrateLocPropCCUR (§6.6–6.7)
- Pipeline outcome integrates with DGB updates
We trace this for the worked example "Jo arrived" with two belief configurations (addressee knows / does not know who Jo is).
The DialogueSign for "Jo arrived" (composed conceptually from
Grammar.jo's dgb-params and a verb sign). For brevity we just construct
the composed LocProp directly.
With knowing-Jo beliefs: the integration pipeline grounds.
Without belief: the pipeline CRifies with the standard "Who do you mean by 'jo_ref'?" CR.
The exhaustivity claim: integration always either grounds or CRifies, never silently skips.
The Grammar.jo lexical entry's toLocProp itself has unresolved
cparams (it's a referential proper name).
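The exhaustivity claim is structural: with a two-constructor outcome type, a case split is the whole proof. (OutcomeSketch is a local stand-in for the pipeline's result type.)

```lean
inductive OutcomeSketch where
  | grounded (fact : String)
  | crified  (question : String)

/-- Every integration outcome grounds or CRifies — no third path. -/
theorem outcome_exhaustive (o : OutcomeSketch) :
    (∃ f, o = .grounded f) ∨ (∃ q, o = .crified q) := by
  cases o with
  | grounded f => exact Or.inl ⟨f, rfl⟩
  | crified q  => exact Or.inr ⟨q, rfl⟩
```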
Architectural Contrasts vs Pre-2012 Frameworks #
Per the project's chronological-dependency rule, this study engages pre-2012 siblings here:
- @cite{stalnaker-1978}/@cite{stalnaker-2002} — single shared CG
- @cite{farkas-bruce-2010} — five-way dcS/dcL/cg/table/ps decomposition
- @cite{roberts-1996} — partition-based QUD-stack
- @cite{purver-ginzburg-2004} — q-params/dgb-params split
- @cite{ginzburg-sag-2000} — HPSG grammar foundation
Cross-framework contrasts with post-2012 siblings (Krifka 2015, Anderson 2021) live inside those studies, not here.
The contrasts below make the architectural disagreements Lean-checkable.
A simple two-world scenario for HasContextSet contrasts.
"It's raining" as a Set of worlds (only the rainy world).
vs Stalnaker (single shared CG) #
@cite{stalnaker-1978}: the common ground is one shared object. After A asserts p, the (single) CG is updated with p, eliminating the ¬p worlds.
@cite{ginzburg-2012}: each participant has their own DGB. After A asserts p, A's DGB has p in FACTS; B's does not. There is NO shared CG — only coupled DGBs synchronized via Accept.
The contrast as a Lean theorem: KOS predicts that two participants' DGBs (after A asserts p, before B accepts) project to different context sets — Stalnaker's framework cannot represent this divergence because it has only one CG.
Speaker DGB after asserting "raining".
Addressee DGB before accepting (still empty).
KOS-vs-Stalnaker architectural contrast: KOS's two DGBs project to
different ContextSets after one-sided assertion. Stalnaker's
framework has a single CG that cannot represent this divergence —
the contrast is structural, not a matter of degree.
The inequality holds at RainW.sunny: the speaker's CS at sunny
requires isRaining sunny = True (i.e. sunny = rainy), which is
False; the addressee's CS at sunny is vacuously True (empty universal
over an empty facts list).
vs Farkas-Bruce 2010 (five-way decomposition) #
@cite{farkas-bruce-2010}: discourse state has FIVE components (dcS, dcL, cg, table, ps). Assertion adds to speaker's dcS (private commitment), pushes an issue on the table; only acceptance moves content to the shared CG.
@cite{ginzburg-2012}: per-participant DGBs (no separate dcS/dcL/cg distinction inside one structure). Assertion goes directly to the speaker's own DGB.facts.
The architectures are both per-participant, but F&B treats it as three slates inside one "current discourse state" while KOS treats it as one slate per participant.
The same dialogue trace ("A asserts p") in both frameworks: F&B puts p in dcS (one of three slates within the shared discourse state); KOS puts p in A's DGB.facts (one of two coupled DGBs). The architectural shape differs even though predictions about "is p committed by A?" agree.
The deeper architectural disagreement: F&B can model retraction
(content moves dcS → cg → dcS, e.g. "I take that back"); KOS's facts
field is monotone (operations only add, never remove). The KOS API
literally has no removeFact operation — the type signature of every
DGB-update primitive in KOS/Basic.lean preserves the inclusion
dgb.facts ⊆ (op dgb).facts.
Witness: addFact p increases the facts list by one element.
F&B explicitly supports the inverse: content moved to cg via addToCG can later be re-introduced to dcS ("I take that back"). The substrate-level disagreement is KOS's type-level commitment to monotonicity.
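The monotonicity claim follows from the shape of the update alone; a self-contained rendering (FactsDGB and addFactSketch are local names mirroring the substrate's DGB and addFact):

```lean
structure FactsDGB where
  facts : List String

def addFactSketch (p : String) (d : FactsDGB) : FactsDGB :=
  { d with facts := p :: d.facts }

/-- Inclusion is preserved: nothing is ever removed from facts. -/
theorem addFact_monotone (p : String) (d : FactsDGB) :
    ∀ q, q ∈ d.facts → q ∈ (addFactSketch p d).facts :=
  fun _ hq => List.mem_cons_of_mem _ hq

/-- And the list grows by exactly one element. -/
theorem addFact_length (p : String) (d : FactsDGB) :
    (addFactSketch p d).facts.length = d.facts.length + 1 := rfl
```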
vs Roberts 1996/2012 (partition-stack QUD) #
Both frameworks use a stack of questions. Roberts treats QUD entries
as partitions of worlds (Hamblin/Groenendijk-Stokhof style); KOS
treats QUD entries as InfoStrucs carrying the question + FECs.
Structural agreement: the push/pop semantics agree. We use the partition
substrate from Core/Question/Partition/QUD.lean to demonstrate.
Both Roberts QUD-stack semantics and KOS InfoStruc-stack semantics push and pop in the same way: the topmost question is the most recent.
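The shared push/pop discipline, abstracted over the question type (local names; the partition substrate supplies the concrete Q):

```lean
def pushQUD {Q : Type} (q : Q) (qud : List Q) : List Q := q :: qud

def maxQUD {Q : Type} (qud : List Q) : Option Q := qud.head?

def popQUD {Q : Type} (qud : List Q) : List Q := qud.tail

/-- The topmost question is always the most recent push... -/
theorem maxQUD_push {Q : Type} (q : Q) (qud : List Q) :
    maxQUD (pushQUD q qud) = some q := rfl

/-- ...and pop undoes push. -/
theorem popQUD_push {Q : Type} (q : Q) (qud : List Q) :
    popQUD (pushQUD q qud) = qud := rfl
```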
vs Purver-Ginzburg 2004 (q-params / dgb-params split) #
@cite{purver-ginzburg-2004} ARGUE for the q-params / dgb-params split that @cite{ginzburg-2012} adopts (§8.5.1 ex. 101–103 p. 325–326): only dgb-params block grounding (trigger CRification); q-params travel with the sign but don't block, enabling the Reprise Content Hypothesis.
Our substrate LocProp carries this split as cparams (dgb-params)
and qcparams (q-params). The grounding pipeline only checks cparams.
The Purver-Ginzburg structural prediction (substrate-instantiated): a LocProp with q-params but no dgb-params still grounds.
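The type-level prerequisite can be sketched directly. (SplitLP and groundsSketch are local names; the substrate's LocProp carries cparams/qcparams with exactly these roles.)

```lean
structure SplitLP where
  cont     : String
  cparams  : List String := []   -- dgb-params: checked at grounding
  qcparams : List String := []   -- q-params: travel with the sign, never checked

/-- Grounding only inspects cparams. -/
def groundsSketch (beliefs : List (String × String)) (lp : SplitLP) : Bool :=
  lp.cparams.all (fun p => (beliefs.lookup p).isSome)

/-- A LocProp with q-params but no dgb-params grounds even with no beliefs. -/
example : groundsSketch [] { cont := "p", qcparams := ["q1"] } = true := rfl
```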
vs Ginzburg-Sag 2000 (HPSG grammar foundation) #
@cite{ginzburg-2012} Ch. 5 §5.1 explicitly builds on @cite{ginzburg-sag-2000}'s
HPSG-formulated grammar. Our Grammar.lean integrates with HPSG via
DialogueSign.toSynsem (project to plain HPSG sign) and
DialogueSign.toLocProp (project to KOS-shaped LocProp).
This is structural agreement, not contrast — KOS's grammar is an extension of GS2000's. The HPSG foundations supplement what KOS needs; no architectural disagreement.
KOS's DialogueSign faithfully extends @cite{ginzburg-sag-2000}'s
HPSG Synsem: the projection toSynsem preserves head-features and
category, losing only dialogue features (dgbParams, qParams, questDom).
This is a structural-retract witness: KOS can be "forgotten down" to
plain HPSG.
vs Purver-Ginzburg 2004 — full RCH machinery #
The simple qparams_dont_block_grounding theorem above shows the
type-level prerequisite. The substantive Purver-Ginzburg claim is the
Reprise Content Hypothesis (RCH): the q-params record on a LocProp
licenses the queryable types observable in fragment-reprise data.
The substrate KOS/RepriseContent.lean provides the WeakRCH/StrongRCH
predicates. The PurverGinzburg2004 study file proves the q-params
predictor satisfies them. We re-export the structural connection here:
the LocProp design choice in KOS-2012 is what enables RCH satisfaction.