As Large Language Models (LLMs) are increasingly tasked with autonomous decision-making, understanding their behavior in strategic settings is crucial. We investigate the choices of various LLMs in the Ultimatum Game, a setting where human behavior notably deviates from theoretical rationality. We conduct experiments varying the stake size and the nature of the opponent (Human vs. AI) across both the Proposer and Responder roles. Three key results emerge. First, LLM behavior is heterogeneous but predictable once we condition on stake size and player type. Second, while some models approximate the rational benchmark and others mimic human social preferences, a distinct “altruistic” mode emerges in which LLMs propose hyper-fair distributions, offering the Responder more than 50% of the stake. Third, LLM Proposers forgo a large share of the total payoff, and an even larger share when the Responder is human. These findings highlight the need for careful testing before deploying AI agents in economic settings.
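
For readers unfamiliar with the setup, the following is a minimal sketch of the Ultimatum Game and the rational benchmark the abstract refers to. It is illustrative only: the stake size and the Responder's acceptance threshold are hypothetical values, not parameters from the paper.

```python
# Illustrative sketch of the Ultimatum Game (not from the paper).
# A Proposer splits a stake; the Responder either accepts the split
# or rejects it, in which case both players receive nothing.

def responder_accepts(offer: float, min_acceptable: float) -> bool:
    """Responder accepts iff the offer meets their threshold."""
    return offer >= min_acceptable

def play_ultimatum(stake: float, offer: float, min_acceptable: float):
    """Return (proposer_payoff, responder_payoff) for one play."""
    if responder_accepts(offer, min_acceptable):
        return stake - offer, offer
    return 0.0, 0.0  # rejection destroys the entire stake

# Rational benchmark: a payoff-maximizing Responder accepts any
# positive offer, so the Proposer offers the smallest positive amount.
print(play_ultimatum(stake=100, offer=1, min_acceptable=0.01))  # (99.0, 1.0)

# Human-like social preferences: low offers are often rejected, which
# pushes Proposers toward fair splits. The paper's "altruistic" mode
# goes further, with hyper-fair offers above 50% of the stake.
print(play_ultimatum(stake=100, offer=1, min_acceptable=30))    # (0.0, 0.0)
print(play_ultimatum(stake=100, offer=60, min_acceptable=30))   # (40.0, 60.0)
```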
