[2604.08757] Cards Against LLMs: Benchmarking Humor Alignment in Large Language Models
Computer Science > Computation and Language
arXiv:2604.08757 (cs)
[Submitted on 9 Apr 2026]

Title: Cards Against LLMs: Benchmarking Humor Alignment in Large Language Models
Authors: Yousra Fettach, Guillaume Bied, Hannu Toivonen, Tijl De Bie

Abstract: Humor is one of the most culturally embedded and socially significant dimensions of human communication, yet it remains largely unexplored as a dimension of Large Language Model (LLM) alignment. In this study, five frontier language models play the same Cards Against Humanity (CAH) games as human players. The models select the funniest response from a slate of ten candidate cards across 9,894 rounds. While all models exceed the random baseline, alignment with human preference remains modest. More strikingly, models agree with each other substantially more often than they agree with humans. We show that this preference is partly explained by systematic position biases and content preferences, raising the question of whether LLM humor judgment reflects genuine preference or structural artifacts of inference and alignment.

Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI)
Cite as: arXiv:2604.08757 [cs.CL] (or arXiv:2604.08757v1 [cs.CL] for this version)
DOI: https://doi.org/10.48550/arXiv.2604.08757
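The evaluation setup in the abstract (per-round picks from a slate of ten candidate cards, scored against the human winner and against each other) can be sketched as follows. This is an illustrative reading of the setup, not the paper's code; the function names and toy data are assumptions.

```python
from itertools import combinations

def accuracy(picks, human_picks):
    """Fraction of rounds where a model's pick matches the human-chosen winner."""
    return sum(p == h for p, h in zip(picks, human_picks)) / len(human_picks)

def pairwise_agreement(model_picks):
    """Mean fraction of rounds where two models pick the same card,
    averaged over all model pairs."""
    rates = []
    for (_, pa), (_, pb) in combinations(model_picks.items(), 2):
        rates.append(sum(x == y for x, y in zip(pa, pb)) / len(pa))
    return sum(rates) / len(rates)

# Toy example: 5 rounds, cards indexed 0-9 within each slate of ten.
human = [3, 7, 1, 0, 5]
models = {
    "model_a": [3, 2, 1, 0, 4],
    "model_b": [3, 2, 1, 9, 4],
}
random_baseline = 1 / 10  # one winner among ten candidate cards

print(accuracy(models["model_a"], human))  # 0.6
print(pairwise_agreement(models))          # 0.8
```

On this toy data the two models agree with each other (0.8) more often than either agrees with the human picks, mirroring the paper's headline finding; the real study aggregates over 9,894 rounds and five models.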