[2603.01024] SimAB: Simulating A/B Tests with Persona-Conditioned AI Agents for Rapid Design Evaluation
Computer Science > Human-Computer Interaction
arXiv:2603.01024 (cs) [Submitted on 1 Mar 2026]

Title: SimAB: Simulating A/B Tests with Persona-Conditioned AI Agents for Rapid Design Evaluation
Authors: Tim Rieder, Marian Schneider, Mario Truss, Vitaly Tsaplin, Alina Rublea, Sinem Dere, Francisco Chicharro Sanz, Tobias Reiss, Mustafa Doga Dogan

Abstract: A/B testing is a standard method for validating design decisions, yet its reliance on real user traffic limits iteration speed and makes certain experiments impractical. We present SimAB, a system that reframes A/B testing as a fast, privacy-preserving simulation using persona-conditioned AI agents. Given design screenshots and a conversion goal, SimAB generates user personas, deploys them as agents that state their preferences, aggregates the results, and synthesizes rationales. Through a formative study with experimentation practitioners, we identified scenarios where traffic constraints hinder testing, including low-traffic pages, multi-variant comparisons, micro-optimizations, and privacy-sensitive contexts. Our design emphasizes speed, early feedback, actionable rationales, and audience specification. We evaluate SimAB against 47 historical A/B tests with known outcomes, achieving 67% overall accuracy, increasing to 83% for high-confidence cases...
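The paper does not publish an implementation, but the aggregation step it describes (each persona-conditioned agent states a preference, and votes are combined into a simulated test outcome with a confidence level) can be sketched as follows. All names here (`aggregate_votes`, the 0.7 high-confidence threshold, the example votes) are illustrative assumptions, not the authors' method.

```python
from collections import Counter

def aggregate_votes(votes, high_conf_threshold=0.7):
    """Aggregate per-agent variant preferences into a simulated A/B outcome.

    votes: list of variant labels, one vote per persona-conditioned agent.
    Returns (winning_variant, vote_share, is_high_confidence), where the
    high-confidence flag is a hypothetical stand-in for the paper's notion
    of "high-confidence cases".
    """
    counts = Counter(votes)
    winner, n = counts.most_common(1)[0]
    share = n / len(votes)
    return winner, share, share >= high_conf_threshold

# Example: 10 simulated agents vote between variants "A" and "B".
votes = ["B", "B", "A", "B", "B", "A", "B", "B", "B", "A"]
print(aggregate_votes(votes))  # -> ('B', 0.7, True)
```

In a full pipeline, each vote would come from an LLM call conditioned on a generated persona and the two design screenshots; the rationale synthesis described in the abstract would then summarize the agents' stated reasons alongside this tally.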