[2603.23069] AuthorMix: Modular Authorship Style Transfer via Layer-wise Adapter Mixing
Computer Science > Computation and Language

arXiv:2603.23069 (cs)

[Submitted on 24 Mar 2026]

Title: AuthorMix: Modular Authorship Style Transfer via Layer-wise Adapter Mixing
Authors: Sarubi Thillainathan, Ji-Ung Lee, Michael Sullivan, Alexander Koller

Abstract: The task of authorship style transfer involves rewriting text in the style of a target author while preserving the meaning of the original text. Existing style transfer methods train a single model on large corpora to model all target styles at once: this high-cost approach offers limited flexibility for target-specific adaptation and often sacrifices meaning preservation for style transfer. In this paper, we propose AuthorMix: a lightweight, modular, and interpretable style transfer framework. We train individual, style-specific LoRA adapters on a small set of high-resource authors, allowing the rapid training of specialized adaptation models for each new target via learned, layer-wise adapter mixing, using only a handful of target-style training examples. AuthorMix outperforms existing SoTA style-transfer baselines -- as well as GPT-5.1 -- for low-resource targets, achieving the highest overall score and substantially improving meaning preservation.

Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI)
Cite as: arXiv:2603.23069 [cs.CL]
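To make the mixing mechanism concrete, here is a minimal PyTorch sketch of layer-wise LoRA adapter mixing as the abstract describes it. This is our own illustration, not the authors' released code: the class names (LoRAAdapter, MixedLoRALinear), the rank and scaling defaults, and the softmax-normalized convex combination are all assumptions made for the example.

```python
import torch
import torch.nn as nn

class LoRAAdapter(nn.Module):
    """One low-rank adapter (delta_W = B @ A), pre-trained on a single
    high-resource source author. Hypothetical sketch, not the paper's code."""
    def __init__(self, in_features: int, out_features: int,
                 rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.A = nn.Linear(in_features, rank, bias=False)
        self.B = nn.Linear(rank, out_features, bias=False)
        nn.init.zeros_(self.B.weight)  # standard LoRA init: adapter starts as a no-op
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.B(self.A(x)) * self.scale


class MixedLoRALinear(nn.Module):
    """A frozen base linear layer plus K frozen author-specific LoRA adapters,
    combined by per-layer mixing weights learned from a few target examples."""
    def __init__(self, base: nn.Linear, adapters: list[LoRAAdapter]):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)   # backbone stays frozen
        self.adapters = nn.ModuleList(adapters)
        for p in self.adapters.parameters():
            p.requires_grad_(False)   # source-author adapters stay frozen too
        # The only trainable parameters for a new target author:
        # one mixing logit per adapter, separately at every wrapped layer.
        self.mix_logits = nn.Parameter(torch.zeros(len(adapters)))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weights = torch.softmax(self.mix_logits, dim=0)  # convex combination
        out = self.base(x)
        for w, adapter in zip(weights, self.adapters):
            out = out + w * adapter(x)
        return out


# Usage: wrap, e.g., each attention projection of a transformer, then fit only
# the mixing logits on a handful of target-style examples.
layer = MixedLoRALinear(
    base=nn.Linear(768, 768),
    adapters=[LoRAAdapter(768, 768) for _ in range(4)],  # 4 source authors
)
y = layer(torch.randn(2, 10, 768))  # (batch, seq_len, hidden)
```

Under these assumptions, adapting to a new target touches only L x K scalars for L wrapped layers and K source adapters, which is consistent with the abstract's claim that a handful of target-style examples suffices.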