[2507.01335] LEDOM: Reverse Language Model
Computer Science > Computation and Language
arXiv:2507.01335 (cs)
[Submitted on 2 Jul 2025 (v1), last revised 3 Mar 2026 (this version, v3)]

Title: LEDOM: Reverse Language Model
Authors: Xunjian Yin, Sitao Cheng, Yuxi Xie, Xinyu Hu, Li Lin, Xinyi Wang, Liangming Pan, William Yang Wang, Xiaojun Wan

Abstract: Autoregressive language models are trained exclusively left-to-right. We explore the complementary factorization, training right-to-left at scale, and ask what reasoning patterns emerge when a model conditions on future context to predict the past. We train LEDOM, an open-source purely reverse autoregressive language model (2B/7B parameters, 435B tokens), and find that it develops capabilities distinct from forward models, including abductive inference, question synthesis, and natural resolution of the reversal curse. We then explore one application of the reverse model: combining the forward likelihood $P(y \mid x)$ with the reverse posterior $P(x \mid y)$ through noisy channel duality. We propose Reverse Reward, which reranks forward outputs using reverse posterior estimates, and prove that bidirectional scoring penalizes hallucinated reasoning chains whose backward reconstruction degrades. Reverse Reward yields gains of up to 6.6% on AIME 2024 and 15% on AMC 2023 across multiple strong baselines. We release all models, code, and data.
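The Reverse Reward reranking described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the candidate structure, the interpolation weight `alpha`, and the function name are all assumptions; in practice the log-probabilities would come from the forward and reverse language models.

```python
def reverse_reward_rerank(candidates, alpha=0.5):
    """Rerank candidate outputs by combining forward likelihood
    log P(y|x) with the reverse posterior log P(x|y).

    Hypothetical sketch: `alpha` is an assumed interpolation weight,
    and each candidate carries precomputed log-probabilities that a
    real system would obtain from the two models.
    """
    def score(c):
        # Noisy-channel-style combination: a hallucinated chain whose
        # backward reconstruction degrades gets a low reverse_logp,
        # pushing it down the ranking.
        return c["forward_logp"] + alpha * c["reverse_logp"]

    return sorted(candidates, key=score, reverse=True)


# Toy example: chain A is slightly preferred by the forward model,
# but chain B reconstructs the prompt far better in reverse.
cands = [
    {"text": "chain A", "forward_logp": -3.0, "reverse_logp": -8.0},
    {"text": "chain B", "forward_logp": -3.5, "reverse_logp": -2.0},
]
ranked = reverse_reward_rerank(cands, alpha=0.5)
# Combined scores: A = -3.0 + 0.5*(-8.0) = -7.0, B = -3.5 + 0.5*(-2.0) = -4.5,
# so chain B is ranked first.
```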