[2603.01973] CharacterFlywheel: Scaling Iterative Improvement of Engaging and Steerable LLMs in Production
Computer Science > Computation and Language
arXiv:2603.01973 (cs) [Submitted on 2 Mar 2026]

Title: CharacterFlywheel: Scaling Iterative Improvement of Engaging and Steerable LLMs in Production

Authors: Yixin Nie, Lin Guan, Zhongyao Ma, Anchit Gupta, Yipin Zhou, Xiao Li, Zhengping Zhou, Raymond Zeng, Gelin Zhou, Shigan Chu, Ajay Thampi, Wancen Mu, Nathan Shuster, Ketong Wang, Lin Chen, Jason Brewer, Derek Hao Hu, Alexander McCauley, Jason Weston, Sem Park, Na Zhang, Kevin Tang

Abstract: This report presents CharacterFlywheel, an iterative flywheel process for improving large language models (LLMs) in production social chat applications across Instagram, WhatsApp, and Messenger. Starting from LLaMA 3.1, we refined models across 15 generations using data from both internal and external real-user traffic. Through continuous deployments from July 2024 to April 2025, we conducted controlled 7-day A/B tests showing consistent engagement improvements: 7 of 8 newly deployed models demonstrated positive lift over the baseline, with the strongest performers achieving up to 8.8% improvement in engagement breadth and 19.4% in engagement depth. We also observed substantial gains in steerability, with instruction following increasing from 59.2% to 84.8% and instruction violations decreasing from 26.6% to ...
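As a minimal sketch of how lift figures like the 8.8% and 19.4% above are conventionally computed in an A/B test (the abstract does not specify the authors' actual metric pipeline; the function name and the sample numbers below are hypothetical), relative lift is the treatment-arm metric divided by the control-arm metric, minus one:

```python
def relative_lift(treatment_mean: float, control_mean: float) -> float:
    """Relative lift of the treatment arm over control, as a fraction.

    A positive value means the treatment outperformed the baseline;
    0.0 means no change.
    """
    return treatment_mean / control_mean - 1.0

# Hypothetical example: if the control arm averages 10.0 units of
# engagement depth and the treatment arm averages 11.94, the lift is 19.4%.
print(f"{relative_lift(11.94, 10.0):.1%}")  # → 19.4%
```

In practice such a lift would be read off a controlled experiment (here, 7-day A/B tests) together with a significance test, since per-user engagement metrics are noisy.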