[2509.17183] LifeAlign: Lifelong Alignment for Large Language Models with Memory-Augmented Focalized Preference Optimization
Computer Science > Computation and Language
arXiv:2509.17183 (cs)
[Submitted on 21 Sep 2025 (v1), last revised 7 Apr 2026 (this version, v2)]

Title: LifeAlign: Lifelong Alignment for Large Language Models with Memory-Augmented Focalized Preference Optimization
Authors: Junsong Li, Jie Zhou, Bihao Zhan, Yutao Yang, Qianjun Pan, Shilian Chen, Tianyu Huai, Xin Li, Qin Chen, Liang He

Abstract: Alignment plays a crucial role in adapting Large Language Models (LLMs) to human preferences on a specific task or domain. Traditional alignment methods suffer from catastrophic forgetting: models lose previously acquired knowledge when adapting to new preferences or domains. We introduce LifeAlign, a novel framework for lifelong alignment that enables LLMs to maintain consistent human preference alignment across sequential learning tasks without forgetting previously learned knowledge. Our approach consists of two key innovations. First, we propose a focalized preference optimization strategy that aligns LLMs with new preferences while preventing the erosion of knowledge acquired from previous tasks. Second, we develop a short-to-long memory consolidation mechanism that merges denoised short-term preference representations into stable long-term memory using intrinsic dimensional...
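The abstract does not spell out the loss, so the following is only a minimal sketch of what a "focalized" preference objective with a forgetting penalty could look like, not the paper's actual method. It assumes a DPO-style pairwise loss; the focal weight `(1 - p)^gamma`, the `prev_task_logps` anchor term, and the hyperparameters `beta`, `gamma`, `lam` are all hypothetical choices made for illustration.

```python
import torch
import torch.nn.functional as F

def focalized_dpo_loss(policy_chosen_logps, policy_rejected_logps,
                       ref_chosen_logps, ref_rejected_logps,
                       prev_task_logps=None, beta=0.1, gamma=2.0, lam=0.1):
    """Illustrative DPO-style loss: a focal weight down-weights preference
    pairs the model already satisfies, and an optional anchor to stored
    previous-task log-probabilities limits drift (a stand-in for the
    paper's memory consolidation, not its actual mechanism)."""
    # Standard DPO margin between chosen and rejected responses.
    logits = beta * ((policy_chosen_logps - ref_chosen_logps)
                     - (policy_rejected_logps - ref_rejected_logps))
    p = torch.sigmoid(logits)             # how well each pair is already aligned
    focal_weight = (1.0 - p) ** gamma     # focus gradient on poorly aligned pairs
    pref_loss = -(focal_weight * F.logsigmoid(logits)).mean()

    # Optional penalty tying current behaviour to previous-task behaviour.
    if prev_task_logps is not None:
        retention_loss = F.mse_loss(policy_chosen_logps, prev_task_logps)
        return pref_loss + lam * retention_loss
    return pref_loss


# Toy usage with random log-probabilities for a batch of 4 preference pairs.
torch.manual_seed(0)
pol_c, pol_r = torch.randn(4), torch.randn(4)
ref_c, ref_r = torch.randn(4), torch.randn(4)
prev = torch.randn(4)
print(focalized_dpo_loss(pol_c, pol_r, ref_c, ref_r, prev_task_logps=prev))
```

In this sketch the focal weighting concentrates updates where the new preference is violated, while the retention term keeps outputs close to what was learned on earlier tasks; how LifeAlign actually realizes these two ideas is described in the paper itself.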