[2603.22335] Causal Direct Preference Optimization for Distributionally Robust Generative Recommendation
Computer Science > Information Retrieval

arXiv:2603.22335 (cs) [Submitted on 21 Mar 2026]

Title: Causal Direct Preference Optimization for Distributionally Robust Generative Recommendation

Authors: Chu Zhao, Enneng Yang, Jianzhe Zhao, Guibing Guo

Abstract: Direct Preference Optimization (DPO) guides large language models (LLMs) to generate recommendations aligned with user historical behavior distributions by minimizing a preference alignment loss. However, our systematic empirical study and theoretical analysis reveal that DPO tends to amplify spurious correlations caused by environmental confounders during the alignment process, significantly undermining the generalization capability of LLM-based generative recommendation methods in out-of-distribution (OOD) scenarios. To mitigate this issue, we propose CausalDPO, an extension of DPO that incorporates a causal invariance learning mechanism. This method introduces a backdoor adjustment strategy during the preference alignment phase to eliminate interference from environmental confounders, explicitly models the latent environmental distribution using a soft clustering approach, and enhances robust consistency across diverse environments through invariance constraints. Theoretical analysis demonstrates that CausalDPO can effectively capture use...
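The paper's full formulation is not given on this abstract page, but the ingredients it names can be sketched. Below is a minimal, hypothetical Python illustration: the standard DPO pairwise loss (Rafailov et al., 2023), plus an invariance penalty that weights per-pair losses by soft environment assignments and penalizes the variance of the per-environment expected loss. The function names, the variance-based penalty, and the weighting scheme are assumptions for illustration, not the authors' actual CausalDPO objective.

```python
import math

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Standard DPO loss for one preference pair:
    -log sigmoid(beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l)))."""
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

def invariant_dpo_loss(pairs, env_weights, beta=0.1, lam=1.0):
    """Hypothetical sketch of an environment-invariant DPO objective.

    pairs:       list of (logp_w, logp_l, ref_logp_w, ref_logp_l) tuples
    env_weights: env_weights[i][e] = soft assignment of pair i to
                 latent environment e (e.g. from a soft clustering step)
    lam:         strength of the invariance penalty (assumed form)
    """
    per_pair = [dpo_loss(*p, beta=beta) for p in pairs]
    n_env = len(env_weights[0])

    # Expected DPO loss under each soft environment.
    env_losses = []
    for e in range(n_env):
        w = [env_weights[i][e] for i in range(len(pairs))]
        total_w = sum(w) or 1.0
        env_losses.append(sum(wi * li for wi, li in zip(w, per_pair)) / total_w)

    # Invariance penalty: variance of the loss across environments,
    # pushing the model toward consistent behavior in every environment.
    mean_env = sum(env_losses) / n_env
    variance = sum((l - mean_env) ** 2 for l in env_losses) / n_env

    base = sum(per_pair) / len(per_pair)
    return base + lam * variance
```

With a zero preference margin the DPO loss reduces to log 2, and when every environment sees the same weighted loss the penalty vanishes, so the objective falls back to plain DPO, which matches the intuition that the invariance term only activates when environments disagree.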