Halpert & Hammerly 2026: Reconciling Animacy and Noun Class in Bantu #
@cite{halpert-hammerly-2026}
Halpert, Claire & Hammerly, Christopher. 2026. Reconciling animacy and noun class in Bantu. Glossa: a journal of general linguistics 11(1). 1--25.
Core claims #
Core Noun Class Hypothesis (19): All Bantu nouns are underlyingly specified for one of three core noun classes based on features [±Animate] and [±Human] from @cite{hammerly-2023}'s containment hierarchy: HUMAN [+Anim, +Hum] ≈ cl 1/2, NON-HUMAN ANIMATE [+Anim, −Hum] ≈ cl 9/10, INANIMATE [−Anim, −Hum] ≈ cl 7/8.
Containment unification: Person features [±Author, ±Participant] and animacy features [±Human, ±Animate] are part of the same containment hierarchy (3)–(4): [Auth] ⊂ [Part] ⊂ [Hum] ⊂ [Anim].
Final vowels (22): Core noun class is morphophonologically encoded by the nominalizing final vowel: n[+Anim, +Hum] ↔ -i, n[_, −Hum] ↔ -o.
nP stacking (26): A secondary n wraps the core nP, creating mismatches between the class prefix (secondary) and the core noun class (accessible to agreement).
Probe articulation (29): Cross-Bantu variation in animacy effects follows from probe sensitivity — flat φ-probes target n_secondary (Zulu), while probes relativized to [+Animate] search past n_secondary to n_core (Swahili).
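As a minimal sketch (illustrative names, not the repo's actual definitions), the Core Noun Class Hypothesis amounts to a partial map from [±Animate, ±Human] to the three core classes, with the [−Anim, +Hum] cell excluded by containment:

```lean
-- Sketch of (19): the three licit [±Animate, ±Human] combinations
-- and their core classes; names here are illustrative.
inductive CoreClass
  | human      -- [+Anim, +Hum] ≈ cl 1/2
  | animate    -- [+Anim, −Hum] ≈ cl 9/10
  | inanimate  -- [−Anim, −Hum] ≈ cl 7/8

def toCore : Bool → Bool → Option CoreClass
  | true,  true  => some .human
  | true,  false => some .animate
  | false, false => some .inanimate
  | false, true  => none  -- *[−Anim, +Hum] ruled out by containment

example : toCore false true = none := rfl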
Empirical phenomena #
- Alternative agreement / anti-agreement in Lubukusu (8)–(9)
- Animacy override in Chiyao (11)–(12), Swahili (13)
- Object doubling conditioned by animacy in Nyaturu (14)
- Agreement convergence with conjoined nouns in Xhosa (18)
Relationship to Carstens 2026 #
@cite{halpert-hammerly-2026} and @cite{carstens-2026} converge on nP
stacking and the three-way core noun class split, but differ on the
theoretical grounding: H&H derive core classes from @cite{hammerly-2023}'s
containment-type features [±Animate, ±Human] and propose final vowels
as their morphophonological locus. Carstens rejects a grammaticalized
animacy hierarchy (fn. 12). The `AnimacyFeatures` type in `Bantu/Params.lean`
formalizes H&H's feature system; `SemanticCore` (shared with Carstens)
is derived from it via `AnimacyFeatures.toCoreClass`.
Integration #
- `AnimacyFeatures` instantiates `Features.PhiFeatures` (same `PrivativePair` as person features), connecting person and animacy containment
- `AnimacyFeatures.toAnimacyLevel` bridges to the `Features.Prominence` hierarchy used throughout the codebase
- `impossible_human_inanimate_without_animal` proves the conflation impossibility predicted by containment
- `AnimacyFeatures.toGenderFeature` bridges to @cite{kramer-2015}'s `GenderFeature` type on the n-head
- Cross-references: `Carstens2026.lean` (convergence data), `Kramer2020.lean` (n-head typology)
The full prominence hierarchy from @cite{hammerly-2023} (3): [Auth] ⊂ [Part] ⊂ [Hum] ⊂ [Anim] ⊂ [Agent] ⊂ [Indiv] ⊂ φ.
This structure encodes the four innermost features. Person features [±Author, ±Participant] are nested inside animacy features [±Human, ±Animate].
- `isAnimate : Bool`
- `isHuman : Bool`
- `hasParticipant : Bool`
- `hasAuthor : Bool`
Well-formedness enforces the full containment chain: [+Author] → [+Participant] → [+Human] → [+Animate].
```lean
pf.wellFormed =
  ((!pf.hasAuthor || pf.hasParticipant) &&
   (!pf.hasParticipant || pf.isHuman) &&
   (!pf.isHuman || pf.isAnimate))
```
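The chain can be exercised on concrete feature bundles. A self-contained sketch (the structure name `PF` is illustrative):

```lean
structure PF where
  isAnimate isHuman hasParticipant hasAuthor : Bool

def PF.wellFormed (pf : PF) : Bool :=
  (!pf.hasAuthor || pf.hasParticipant) &&
  (!pf.hasParticipant || pf.isHuman) &&
  (!pf.isHuman || pf.isAnimate)

-- First person [+Auth, +Part, +Hum, +Anim] satisfies the chain:
example : (PF.mk true true true true).wellFormed = true := by decide
-- *[+Hum, −Anim] is exactly the conflation the chain rules out:
example : (PF.mk false true false false).wellFormed = false := by decide
```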
Extract the person projection.
```lean
pf.personFeatures = { hasParticipant := pf.hasParticipant, hasAuthor := pf.hasAuthor }
```
Extract the animacy projection.
```lean
pf.animacyFeatures = { isAnimate := pf.isAnimate, isHuman := pf.isHuman }
```
The containment chain predicts: speech-act participants are human. [+Participant] → [+Human] (@cite{halpert-hammerly-2026} (3)–(4)).
The containment chain predicts: speech-act participants are animate. [+Participant] → [+Human] → [+Animate].
First person is necessarily human and animate.
Person and animacy features share the PrivativePair structure.
Both Features.Person.Features and AnimacyFeatures are PhiFeatures
instances — the same three-cell, no-four-way-distinction architecture.
This is not a coincidence: they are fragments of the same containment
hierarchy (@cite{hammerly-2023}).
The Core Noun Class Hypothesis (19): the three well-formed feature combinations map exactly to the three core classes.
The feature decomposition of Xhosa's three interpretable genders matches H&H's core noun class features.
Xhosa's uninterpretable genders have no core feature decomposition.
The three core noun classes match the three animacy levels used throughout the codebase (bridging H&H's features to differential argument marking, Corbett/Smith-Stark scales, etc.).
Lubukusu alternative agreement (8)–(9): local persons and class 1 nouns all trigger the same AA morpheme o- under A-bar extraction, while other classes (e.g. class 7) retain their standard SM.
This follows from containment: local persons have [+Participant], which entails [+Human] via the prominence hierarchy. Class 1 nouns have [+Human] directly. A probe targeting [+Human] treats them identically. Class 7 lacks [+Human] and so uses a different (unchanged) marker.
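A minimal sketch of this probe logic (self-contained; `Trigger` and `extractionSM` are illustrative names):

```lean
inductive SM | a | o | sy

structure Trigger where
  isHuman : Bool  -- [+Hum], whether direct (cl 1) or entailed by [+Part]
  plainSM : SM    -- the declarative subject marker

-- Under A-bar extraction, a [+Human]-relativized probe spells out o-;
-- anything lacking [+Human] keeps its plain SM.
def extractionSM (t : Trigger) : SM :=
  match t.isHuman with
  | true  => .o
  | false => t.plainSM

example : extractionSM ⟨true, .a⟩ = .o := rfl    -- class 1 / local person
example : extractionSM ⟨false, .sy⟩ = .sy := rfl -- class 7: unchanged
```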
The `LubukusuSM` type enumerates the subject-marker forms:

- `a : LubukusuSM`
- `o : LubukusuSM`
- `sy : LubukusuSM`
Subject marker pair: declarative context vs A-bar extraction.
- `declarative : LubukusuSM`
- `extraction : LubukusuSM`
```lean
lubukusu_class1       = { declarative := LubukusuSM.a,  extraction := LubukusuSM.o  }
lubukusu_local_person = { declarative := LubukusuSM.a,  extraction := LubukusuSM.o  }
lubukusu_class7       = { declarative := LubukusuSM.sy, extraction := LubukusuSM.sy }
```
Class 1 and local persons share the AA pattern.
Class 7 does NOT show alternative agreement (lacks [+Human]).
Derivation: AA collapses class 1 and local persons BECAUSE both
project [+Human] via PhiFeatures. Local persons have [+Participant],
which entails [+Human] (person features are a subset of the animacy
hierarchy). Class 1 has [+Human] directly. Both yield the same
PrivativePair outer value.
This theorem shows the structural basis: well-formed participants must have [+Human], the same outer feature as class 1.
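The entailment itself is small enough to verify exhaustively. A self-contained restatement (illustrative names):

```lean
-- One link of the chain: [+Participant] → [+Human].
def chainOK (isHuman hasParticipant : Bool) : Bool :=
  !hasParticipant || isHuman

-- Exhaustive check over all Bool combinations:
theorem participant_implies_human :
    ∀ isHuman hasParticipant : Bool,
      chainOK isHuman hasParticipant = true →
      hasParticipant = true → isHuman = true := by
  decide
```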
Probe sensitivity determines whether agreement tracks n_core or n_secondary (@cite{halpert-hammerly-2026} (29)).
- flat: the probe targets the closest φ-features (n_secondary). Agreement always reflects the visible noun class.
- relativized: the probe bears [+Animate] and searches past n_secondary to find [+Animate] on n_core. Agreement reflects core noun class for animate nouns.
This is parameterized per grammatical function: a language may have a flat subject probe but a relativized object probe (Nyaturu).
- `flat : ProbeSensitivity`
- `relativized : ProbeSensitivity`
Which agreement class surfaces for a given probe and nP stack.
Flat probes always return the visible class; relativized probes
return the core class when `hasAnimateCore` is true.

```lean
agreementClass .flat        stack = stack.visibleClass
agreementClass .relativized stack =
  if stack.hasAnimateCore then stack.coreClass else stack.visibleClass
```
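Usage can be sketched with a self-contained mirror of the definition (`Probe`, `Stack`, and `agree` are illustrative names; classes are bare `Nat`s here):

```lean
inductive Probe | flat | relativized

structure Stack where
  visibleClass   : Nat   -- class of n_secondary
  coreClass      : Nat   -- class of n_core
  hasAnimateCore : Bool  -- [+Animate] on n_core

def agree : Probe → Stack → Nat
  | .flat, s => s.visibleClass
  | .relativized, s =>
      if s.hasAnimateCore then s.coreClass else s.visibleClass

-- Swahili-style override: [human] noun in class 7 → class 1 agreement
example : agree .relativized ⟨7, 1, true⟩ = 1 := rfl
-- Zulu-style flat probe: the same stack keeps class 7
example : agree .flat ⟨7, 1, true⟩ = 7 := rfl
-- Inanimate: the relativized probe falls back to the visible class
example : agree .relativized ⟨7, 7, false⟩ = 7 := rfl
```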
Zulu (flat probe): a [human] noun in class 3 gets class 3 agreement.
Swahili (relativized probe): a [human] noun in class 7 gets class 1
agreement (animacy override). This is NOT separately stipulated;
it FOLLOWS from `agreementClass .relativized` + `hasAnimateCore`.
Swahili (relativized probe): an [animal] noun in class 7 ALSO gets core class agreement — animal override (GENERIC ANIMATE).
Inanimate nouns are unaffected by relativized probes — probe finds no [+Animate] and falls back to visible class.
Uninterpretable genders are always tracked by visible class regardless of probe type.
Animacy override is a consequence of probe articulation, not a separate parameter. When the core is [+Animate], a relativized probe returns the core class. When it isn't, the probe falls back to the visible class.
The default plural agreement class for conjoined singulars is
determined by the core noun class, not the visible class.
Human conjuncts → SM2, non-human conjuncts → SM8/10.
This follows from `SemanticCore.defaultPluralClass`.
Xhosa convergence (Table 18): human nouns in ANY class converge to SM2 because all human nouns share core noun class [+Animate, +Human].
Xhosa convergence (Table 18): inanimate nouns in any class converge to SM8 because they share core noun class [−Animate, −Human].
Table 18 key insight: Classes 1, 7, 9 show EXPECTED agreement
(their own plural class), not convergence. This is because these
are the canonical classes for the three core noun classes — their
visible class IS the core class (isCanonical = true).
Classes 3, 5 are non-canonical for all cores, so nouns in those classes show convergence to the core default instead.
Non-canonical nouns converge to core default, not to their visible class's plural. E.g. a [human] noun in class 3 converges to SM2 (core default for human), not SM4 (plural of class 3).
The SM8/SM10 syncretism in Xhosa (both use zi-) follows from the fact that classes 8 and 10 share [−Human] — they differ only in [±Animate], which Xhosa's convergence is insensitive to in this context.
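The convergence pattern can be sketched as a map from core class to default plural SM (self-contained; `Core` and `defaultPluralSM` are illustrative stand-ins for `SemanticCore.defaultPluralClass`):

```lean
inductive Core | human | animate | inanimate

def defaultPluralSM : Core → Nat
  | .human     => 2   -- SM2
  | .animate   => 10  -- SM10
  | .inanimate => 8   -- SM8 (zi-, syncretic with SM10 in Xhosa)

-- A [human] noun in non-canonical class 3 converges to SM2,
-- the core default, not SM4 (the plural of its visible class):
example : defaultPluralSM .human = 2 := rfl
```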
Object doubling in Nyaturu (14): animate objects allow (or require)
doubling with an object marker on the verb; inanimate objects
disallow it. This uses the SAME predicate as animacy override,
`hasAnimateCore`, applied to the object probe, confirming that
both phenomena are instances of [+Animate]-relativized probing.

```lean
nyaturu_om_allowed stack = stack.hasAnimateCore
```
Bridge from H&H's animacy features to @cite{kramer-2015}'s
`GenderFeature` on the categorizing head n.
H&H's `AnimacyFeatures` encode core noun class via [±Animate, ±Human];
Kramer's framework uses `GenderDimension.anim` with interpretability.
The bridge maps:
- [+Animate] (human or animal) → i[+ANIM] (interpretable animate)
- [−Animate] (inanimate) → i[−ANIM] (interpretable inanimate)
Both systems agree that the n-head bears gender features and that interpretability distinguishes natural from arbitrary gender.
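A sketch of the bridge (the `Interp` and `GenderFeature` shapes here are illustrative simplifications of Kramer's types):

```lean
inductive Interp | i | u  -- interpretable vs uninterpretable

structure GenderFeature where
  interp    : Interp
  animValue : Bool  -- the [±ANIM] value

-- Core noun class always yields interpretable gender on n:
def toGender (isAnimate : Bool) : GenderFeature :=
  { interp := .i, animValue := isAnimate }

example : (toGender true).interp = .i := rfl   -- [+Anim] → i[+ANIM]
example : (toGender false).interp = .i := rfl  -- [−Anim] → i[−ANIM]
```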
All core noun class features yield interpretable gender on n. This matches @cite{kramer-2015}'s prediction that natural gender is always interpretable, and connects H&H's claim that core noun class lives on n to Kramer's n-head architecture.
H&H's `GenderStatus.uninterpretable` corresponds to Kramer's
arbitrary (u-feature) gender: both encode nouns whose gender
is invisible at LF.