As Large Language Models (LLMs) are increasingly tasked with autonomous decision-making, understanding their behavior in strategic settings is crucial. We investigate the choices of various LLMs in the Ultimatum Game, a setting where human behavior notably deviates from theoretical rationality. We conduct experiments varying the stake size and the nature of the opponent (Human vs. AI) across both Proposer and Responder roles. Three key results emerge. First, LLM behavior is heterogeneous but predictable when conditioning on stake size and player types. Second, while some models approximate the rational benchmark and others mimic human social preferences, a distinct “altruistic” mode emerges where LLMs propose hyper-fair distributions (greater than 50%). Third, LLM Proposers forgo a large share of total payoff, and an even larger share when the Responder is human. These findings highlight the need for careful testing before deploying AI agents in economic settings.
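To fix ideas, the Ultimatum Game's payoff structure can be sketched in a few lines. This is an illustrative toy, not the paper's experimental code: the function name, the stake of 100, and the example offers are assumptions used only to contrast the rational benchmark with the "hyper-fair" proposals the abstract describes.

```python
def ultimatum_payoffs(stake, offer, accept):
    """Payoffs (proposer, responder) for one Ultimatum Game round.

    The Proposer offers `offer` out of `stake`; if the Responder
    accepts, the Proposer keeps the remainder. A rejection leaves
    both players with nothing.
    """
    if not accept:
        return (0, 0)
    return (stake - offer, offer)

# Rational (subgame-perfect) benchmark: the Responder accepts any
# positive offer, so the Proposer offers the smallest possible amount.
print(ultimatum_payoffs(100, 1, accept=True))    # (99, 1)

# "Hyper-fair" mode reported for some LLM Proposers: offering more
# than 50% of the stake (here, a hypothetical offer of 60).
print(ultimatum_payoffs(100, 60, accept=True))   # (40, 60)

# A rejection destroys the entire surplus for both players.
print(ultimatum_payoffs(100, 1, accept=False))   # (0, 0)
```

The rejection branch is what makes the game interesting: a Responder who rejects low offers sacrifices payoff to punish unfairness, which is exactly where human behavior deviates from the rational benchmark.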