[2603.03915] Rethinking Role-Playing Evaluation: Anonymous Benchmarking and a Systematic Study of Personality Effects
Computer Science > Computation and Language
arXiv:2603.03915 (cs)
[Submitted on 4 Mar 2026]

Title: Rethinking Role-Playing Evaluation: Anonymous Benchmarking and a Systematic Study of Personality Effects
Authors: Ji-Lun Peng, Yun-Nung Chen

Abstract: Large language models (LLMs) have demonstrated significant potential in developing Role-Playing Agents (RPAs). However, current research primarily evaluates RPAs using famous fictional characters, allowing models to rely on memorized knowledge associated with character names. This dependency creates a bias that limits the generalization of RPAs to unseen personas. To address this issue, we propose an anonymous evaluation method. Experiments across multiple benchmarks reveal that anonymization significantly degrades role-playing performance, confirming that name exposure carries implicit information. Furthermore, we investigate personality augmentation to enhance role fidelity under the anonymous setting. We systematically compare the efficacy of personality traits derived from human annotations versus those self-generated by the model. Our results demonstrate that incorporating personality information consistently improves RPA performance. Crucially, self-generated personalities achieve performance comparable to human-annotated ones. This work establ...
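To make the anonymization idea concrete, here is a minimal sketch of the kind of preprocessing the abstract describes: stripping a character's name (and any known aliases) from the persona profile before prompting, so the model cannot fall back on knowledge memorized under that name. The function name, persona text, placeholder token, and alias handling are illustrative assumptions, not the authors' released code.

```python
import re

def anonymize_persona(profile: str, name: str, aliases: tuple[str, ...] = (),
                      placeholder: str = "Character A") -> str:
    """Replace every mention of a character's name (and known aliases)
    in a persona profile with a neutral placeholder. Hypothetical helper
    illustrating the paper's anonymous-evaluation setup."""
    for alias in (name, *aliases):
        # Word-boundary match avoids clobbering substrings inside other words.
        profile = re.sub(rf"\b{re.escape(alias)}\b", placeholder, profile)
    return profile

profile = ("Hermione Granger is a diligent student who values logic and "
           "loyalty. Hermione often corrects her friends' mistakes.")
print(anonymize_persona(profile, "Hermione Granger", aliases=("Hermione",)))
# -> "Character A is a diligent student ... Character A often corrects ..."
```

Under this setup, any performance drop relative to the named version can be attributed to the implicit information carried by the name itself, which is the comparison the abstract reports.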