Social Utility: Fehr-Schmidt Inequity Aversion #
@cite{fehr-schmidt-1999} @cite{houlihan-kleiman-weiner-hewitt-tenenbaum-saxe-2023}
@cite{fehr-schmidt-1999} model agents who care about fairness, not just material payoff. An agent's utility combines their own payoff with penalties for the differences between their payoff and others' payoffs:
U_i = v_i − α · max(0, v_j − v_i) − β · max(0, v_i − v_j)
where α ≥ 0 captures disadvantageous inequity aversion (DIA — disliking getting less than others) and β captures advantageous inequity aversion (AIA — disliking getting more than others).
@cite{houlihan-kleiman-weiner-hewitt-tenenbaum-saxe-2023} use this as the base utility in their inverse planning model of emotion prediction. Observers infer agents' α and β weights from observed choices, then use those inferred social preferences to compute emotion appraisals.
Connection to RSA #
In politeness models (@cite{yoon-etal-2020}), the "social utility" term is a primitive kindness weight φ. Fehr-Schmidt decomposes social utility into two structurally distinct components (AIA, DIA), each with its own behavioral signature. This decomposition predicts which emotions arise: DIA-weighted appraisals drive envy; AIA-weighted appraisals drive guilt.
Core Utility Function #
Fehr-Schmidt inequity aversion utility.
U = v_self − α · max(0, v_other − v_self) − β · max(0, v_self − v_other)
- α (DIA): penalty for disadvantageous inequality (I got less)
- β (AIA): penalty for advantageous inequality (I got more)
Typically 0 ≤ β ≤ α: people dislike being behind more than being ahead.
Equations
- Core.fehrSchmidt vSelf vOther α β = vSelf - α * max 0 (vOther - vSelf) - β * max 0 (vSelf - vOther)
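As a concrete check of the equation above, a minimal standalone sketch over Float (the payoff type in the actual source may differ; `fehrSchmidt` here mirrors `Core.fehrSchmidt`):

```lean
-- Minimal Float sketch of the Fehr-Schmidt utility defined above.
def fehrSchmidt (vSelf vOther α β : Float) : Float :=
  vSelf - α * max 0 (vOther - vSelf) - β * max 0 (vSelf - vOther)

-- Behind by 3 with α = 1, β = 1/2:  U = 5 − 1·3 − 0 = 2
#eval fehrSchmidt 5 8 1 0.5
-- Ahead by 3 with the same weights:  U = 8 − 0 − (1/2)·3 = 6.5
#eval fehrSchmidt 8 5 1 0.5
```

Being behind is penalized at rate α, being ahead at rate β, so the same 3-point gap costs the agent more when it is disadvantageous.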
Disadvantageous inequality: how much worse off I am than the other.
Equations
- Core.disadvantageousInequality vSelf vOther = max 0 (vOther - vSelf)
Advantageous inequality: how much better off I am than the other.
Equations
- Core.advantageousInequality vSelf vOther = max 0 (vSelf - vOther)
Fehr-Schmidt utility decomposes into three additive terms: the agent's own payoff, the α-weighted disadvantageous-inequality penalty, and the β-weighted advantageous-inequality penalty.
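The decomposition is a definitional equality. A standalone Int sketch (names mirror the definitions above; the source's actual payoff type may differ):

```lean
-- Int sketches of the two inequality measures and the utility.
def disadvantageousInequality (vSelf vOther : Int) : Int := max 0 (vOther - vSelf)
def advantageousInequality (vSelf vOther : Int) : Int := max 0 (vSelf - vOther)

def fehrSchmidt (vSelf vOther α β : Int) : Int :=
  vSelf - α * max 0 (vOther - vSelf) - β * max 0 (vSelf - vOther)

-- Own payoff, minus the α-weighted DI term, minus the β-weighted AI term.
theorem fehrSchmidt_decomp (vSelf vOther α β : Int) :
    fehrSchmidt vSelf vOther α β
      = vSelf - α * disadvantageousInequality vSelf vOther
              - β * advantageousInequality vSelf vOther := rfl
```

Because each definition unfolds to the same expression, `rfl` suffices.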
Special Cases #
A purely selfish agent (α = β = 0) maximizes own payoff.
Equal payoffs produce no inequity penalty.
When payoffs are equal, DI = 0.
When payoffs are equal, AI = 0.
DI and AI are complementary: at most one is positive, and both are zero exactly when payoffs are equal.
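The complementarity claim can be stated as a lemma. A sketch assuming Mathlib (for ℚ, `le_total`, `max_eq_left`, and `linarith`), not the source's actual proof:

```lean
import Mathlib

def disadvantageousInequality (vSelf vOther : ℚ) : ℚ := max 0 (vOther - vSelf)
def advantageousInequality (vSelf vOther : ℚ) : ℚ := max 0 (vSelf - vOther)

-- At most one of DI, AI is nonzero; both vanish when payoffs are equal.
theorem di_or_ai_eq_zero (vSelf vOther : ℚ) :
    disadvantageousInequality vSelf vOther = 0 ∨
    advantageousInequality vSelf vOther = 0 := by
  rcases le_total vSelf vOther with h | h
  · right; exact max_eq_left (by linarith)  -- vSelf − vOther ≤ 0
  · left;  exact max_eq_left (by linarith)  -- vOther − vSelf ≤ 0
```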
Monotonicity #
Higher α increases DIA penalty (weakly).
Higher β increases AIA penalty (weakly).
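Both monotonicity claims have the same shape; the α case can be sketched as follows, assuming Mathlib (for ℚ, `le_max_left`, `mul_le_mul_of_nonneg_right`, and `linarith`):

```lean
import Mathlib

def fehrSchmidt (vSelf vOther α β : ℚ) : ℚ :=
  vSelf - α * max 0 (vOther - vSelf) - β * max 0 (vSelf - vOther)

-- Raising α weakly lowers utility: the DIA penalty weakly grows.
theorem fehrSchmidt_anti_alpha {α α' : ℚ} (vSelf vOther β : ℚ) (h : α ≤ α') :
    fehrSchmidt vSelf vOther α' β ≤ fehrSchmidt vSelf vOther α β := by
  have hm : (0 : ℚ) ≤ max 0 (vOther - vSelf) := le_max_left 0 _
  have hmul := mul_le_mul_of_nonneg_right h hm
  simp only [fehrSchmidt]
  linarith
```

The β case is symmetric, with `max 0 (vSelf - vOther)` in place of the DI term.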
Value Function #
@cite{houlihan-kleiman-weiner-hewitt-tenenbaum-saxe-2023} apply a value function ν to raw monetary payoffs to capture diminishing marginal utility. For their purposes, ν is a sign-adjusted logarithm. We keep the utility function generic over any monotone transform.
Composed Fehr-Schmidt: apply a value function to raw payoffs before computing inequity penalties.
Equations
- Core.fehrSchmidtV ν vSelf vOther α β = Core.fehrSchmidt (ν vSelf) (ν vOther) α β
When ν is the identity, fehrSchmidtV reduces to fehrSchmidt.
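The identity case is again definitional. A standalone Int sketch (mirroring `Core.fehrSchmidtV`; the source's payoff type may differ):

```lean
def fehrSchmidt (vSelf vOther α β : Int) : Int :=
  vSelf - α * max 0 (vOther - vSelf) - β * max 0 (vSelf - vOther)

-- Apply a value function ν to raw payoffs before the inequity penalties.
def fehrSchmidtV (ν : Int → Int) (vSelf vOther α β : Int) : Int :=
  fehrSchmidt (ν vSelf) (ν vOther) α β

-- With ν = id, the composed form reduces definitionally.
theorem fehrSchmidtV_id (vSelf vOther α β : Int) :
    fehrSchmidtV id vSelf vOther α β = fehrSchmidt vSelf vOther α β := rfl
```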