[2605.00224] TUR-DPO: Topology- and Uncertainty-Aware Direct Preference Optimization
Computer Science > Artificial Intelligence
arXiv:2605.00224 (cs)
[Submitted on 30 Apr 2026]

Title: TUR-DPO: Topology- and Uncertainty-Aware Direct Preference Optimization
Authors: Abdulhady Abas Abdullah, Fatemeh Daneshfar, Seyedali Mirjalili, Mourad Oussalah

Abstract: Aligning large language models (LLMs) with human preferences is commonly done via reinforcement learning from human feedback (RLHF) with Proximal Policy Optimization (PPO) or, more simply, via Direct Preference Optimization (DPO). While DPO is stable and RL-free, it treats preferences as flat winner-vs.-loser signals and is sensitive to noisy or brittle preferences arising from fragile chains of thought. We propose TUR-DPO, a topology- and uncertainty-aware variant of DPO that rewards how answers are derived, not only what they say, by eliciting lightweight reasoning topologies and combining semantic faithfulness, utility, and topology quality into a calibrated uncertainty signal. A small learnable reward is factorized over these signals and incorporated into an uncertainty-weighted DPO objective that remains RL-free and relies only on a fixed or moving reference policy. Empirically, across open 7-8B models and benchmarks spanning mathematical reasoning, factual question answering, summarization, and helpful/harmless dialogue, TUR-DPO...
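
The abstract does not state the objective in closed form. As a minimal sketch, assuming the calibrated uncertainty signal yields a per-pair weight u(x, y_w, y_l) in [0, 1], an uncertainty-weighted DPO loss could scale the standard DPO log-sigmoid term:

$$
\mathcal{L}_{\text{TUR-DPO}}(\theta) = -\,\mathbb{E}_{(x,\,y_w,\,y_l)\sim\mathcal{D}}\!\left[\, u(x, y_w, y_l)\; \log \sigma\!\left( \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} \;-\; \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)} \right) \right]
$$

Here $\pi_\theta$ is the policy being trained, $\pi_{\mathrm{ref}}$ is the fixed or moving reference policy mentioned in the abstract, $\beta$ is the usual DPO temperature, and $(y_w, y_l)$ are the preferred and dispreferred responses; the paper's actual factorized reward and weighting scheme may differ from this assumed form.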