[2603.24742] Trust as Monitoring: Evolutionary Dynamics of User Trust and AI Developer Behaviour
Computer Science > Artificial Intelligence
arXiv:2603.24742 (cs)
[Submitted on 25 Mar 2026]

Title: Trust as Monitoring: Evolutionary Dynamics of User Trust and AI Developer Behaviour

Authors: Adeela Bashir, Zhao Song, Ndidi Bianca Ogbo, Nataliya Balabanova, Martin Smit, Chin-wing Leung, Paolo Bova, Manuel Chica Serrano, Dhanushka Dissanayake, Manh Hong Duong, Elias Fernandez Domingos, Nikita Huber-Kralj, Marcus Krellner, Andrew Powell, Stefan Sarkadi, Fernando P. Santos, Zia Ush Shamszaman, Chaimaa Tarzi, Paolo Turrini, Grace Ibukunoluwa Ufeoshi, Victor A. Vargas-Perez, Alessandro Di Stefano, Simon T. Powers, The Anh Han

Abstract: AI safety is an increasingly urgent concern as the capabilities and adoption of AI systems grow. Existing evolutionary models of AI governance have primarily examined incentives for safe development and effective regulation, typically representing users' trust as a one-shot adoption choice rather than as a dynamic, evolving process shaped by repeated interactions. We instead model trust as reduced monitoring in a repeated, asymmetric interaction between users and AI developers, where checking AI behaviour is costly. Using evolutionary game theory, we study how user trust strategies and developer choices between safe (compliant) and unsafe (non-compliant) AI co-evolve un...
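To illustrate the kind of dynamics the abstract describes, here is a minimal two-population replicator-dynamics sketch of a trust-as-monitoring game: users choose between Trusting (no checking) and Monitoring (costly checking), developers between Safe and Unsafe AI. All payoff parameters and the payoff structure below are illustrative assumptions for this sketch, not values or equations taken from the paper.

```python
# Two-population replicator dynamics for a hypothetical trust-as-monitoring
# game. Parameters b, c, h, s, f are illustrative assumptions.

def step(x, y, b=4.0, c=1.0, h=3.0, s=1.0, f=2.0, dt=0.01):
    """One Euler step of two-population replicator dynamics.

    x: fraction of users who Monitor (rather than Trust).
    y: fraction of developers who build Safe (compliant) AI.
    b: user benefit from safe AI; c: cost of monitoring; h: harm from
    undetected unsafe AI; s: developer cost of compliance; f: fine an
    unsafe developer pays when caught by a monitoring user.
    """
    # Payoff advantage of Monitor over Trust for users: monitoring avoids
    # the harm h inflicted by the (1 - y) unsafe developers, but costs c.
    du = (1.0 - y) * h - c
    # Payoff advantage of Safe over Unsafe for developers: being unsafe
    # risks the fine f against the x monitoring users, but saves the cost s.
    dd = x * f - s
    x = x + dt * x * (1.0 - x) * du
    y = y + dt * y * (1.0 - y) * dd
    return x, y

# Start with mostly monitoring users facing mostly unsafe developers,
# then integrate the coupled dynamics forward in time.
x, y = 0.9, 0.1
for _ in range(10_000):
    x, y = step(x, y)

print(f"monitoring users: {x:.3f}, safe developers: {y:.3f}")
```

With these payoffs the interior rest point sits at x* = s/f, y* = 1 - c/h, and trajectories cycle around it: monitoring rises when unsafe developers are common, compliance rises when monitoring is common, after which monitoring (and then compliance) decays again. This matching-pennies-like structure is one plausible reading of trust co-evolving with developer behaviour, not the paper's actual model.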