[2603.23485] Failure of contextual invariance in gender inference with large language models
Computer Science > Computation and Language
arXiv:2603.23485 (cs) [Submitted on 24 Mar 2026]

Title: Failure of contextual invariance in gender inference with large language models
Authors: Sagar Kumar, Ariel Flint, Luca Maria Aiello, Andrea Baronchelli

Abstract: Standard evaluation practices assume that large language model (LLM) outputs are stable under contextually equivalent formulations of a task. Here, we test this assumption in the setting of gender inference. Using a controlled pronoun selection task, we introduce minimal, theoretically uninformative discourse context and find that this induces large, systematic shifts in model outputs. Correlations with cultural gender stereotypes, present in decontextualized settings, weaken or disappear once context is introduced, while theoretically irrelevant features, such as the gender of a pronoun for an unrelated referent, become the most informative predictors of model behaviour. A Contextuality-by-Default analysis reveals that, in 19-52% of cases across models, this dependence persists after accounting for all marginal effects of context on individual outputs and cannot be attributed to simple pronoun repetition. These findings show that LLM outputs violate contextual invariance even under near-identical syntactic formulations, with implications for ...
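For readers unfamiliar with the framework, the following is a minimal sketch of the standard Contextuality-by-Default criterion for cyclic systems of binary random variables (Kujala and Dzhafarov), one common way such an analysis is carried out; the abstract does not specify which formulation the paper actually uses. In that formulation, a system with variables R^c_q (content q measured in context c) arranged in a cycle of rank n is judged contextual iff

\[
s_{\mathrm{odd}}\!\big(\langle R^1_1 R^1_2\rangle,\ \langle R^2_2 R^2_3\rangle,\ \dots,\ \langle R^n_n R^n_1\rangle\big) \;>\; \Delta + n - 2,
\]

where

\[
\Delta = \sum_{q=1}^{n} \big|\langle R^{c}_q\rangle - \langle R^{c'}_q\rangle\big|,
\qquad
s_{\mathrm{odd}}(x_1,\dots,x_n) = \max\Big\{\sum_i \lambda_i x_i : \lambda_i \in \{-1,+1\},\ \prod_i \lambda_i = -1\Big\}.
\]

The \Delta term sums, over each content, the difference between its expectations in the two contexts that contain it; this is what "accounting for all marginal effects of context on individual outputs" refers to in a CbD analysis: direct influences of context on each variable's distribution are subtracted out before any residual context-dependence is declared contextual.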