[2601.22925] BEAR: Towards Beam-Search-Aware Optimization for Recommendation with Large Language Models
Computer Science > Information Retrieval
arXiv:2601.22925 (cs)
[Submitted on 30 Jan 2026 (v1), last revised 27 Apr 2026 (this version, v2)]

Title: BEAR: Towards Beam-Search-Aware Optimization for Recommendation with Large Language Models
Authors: Weiqin Yang, Bohao Wang, Zhenxiang Xu, Jiawei Chen, Shengjia Zhang, Jingbang Chen, Canghong Jin, Can Wang

Abstract: Recent years have seen a rapid surge in research leveraging Large Language Models (LLMs) for recommendation. These methods typically employ supervised fine-tuning (SFT) to adapt LLMs to recommendation scenarios, and use beam search during inference to efficiently retrieve the $B$ top-ranked recommended items. However, we identify a critical training-inference inconsistency: while SFT optimizes the overall probability of positive items, it does not guarantee that such items will be retrieved by beam search, even if they possess high overall probabilities. Due to its greedy pruning mechanism, beam search can prematurely discard a positive item once its prefix probability is insufficient. To address this inconsistency, we propose BEAR (Beam-SEarch-Aware Regularization), a novel fine-tuning objective that explicitly accounts for beam search behavior during training. Rather than directly simulating beam search for each instance during training...
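The training-inference inconsistency described in the abstract can be illustrated with a minimal sketch (not the paper's method): a toy two-step autoregressive model whose globally most probable sequence has a weak first-step prefix, so beam search with a small width prunes it. All probabilities below are hypothetical and chosen only to exhibit the effect.

```python
# Toy autoregressive model over two generation steps.
# The conditional probabilities are hypothetical, chosen so that the
# sequence with the highest OVERALL probability ("bc") starts from the
# LOWER-probability prefix ("b") -- exactly the case beam search prunes.
STEP1 = {"a": 0.6, "b": 0.4}
STEP2 = {"a": {"c": 0.55, "d": 0.45},
         "b": {"c": 0.90, "d": 0.10}}

def full_distribution():
    """Enumerate every length-2 sequence with its overall probability."""
    return {p + q: STEP1[p] * STEP2[p][q]
            for p in STEP1 for q in STEP2[p]}

def beam_search(beam_width):
    """Standard beam search: keep only the `beam_width` best prefixes per step."""
    # Step 1: greedily keep the top `beam_width` prefixes.
    beams = sorted(STEP1.items(), key=lambda kv: kv[1], reverse=True)[:beam_width]
    # Step 2: expand the surviving prefixes and rank the completions.
    finished = [(prefix + tok, p * q)
                for prefix, p in beams
                for tok, q in STEP2[prefix].items()]
    finished.sort(key=lambda kv: kv[1], reverse=True)
    return finished[:beam_width]

best_seq, best_p = max(full_distribution().items(), key=lambda kv: kv[1])
print(best_seq, round(best_p, 2))   # "bc" 0.36 -- highest overall probability
print(beam_search(beam_width=1))    # [("ac", 0.33)] -- "bc" was pruned at step 1
```

With beam width 1, the prefix "b" (probability 0.4) is discarded in favor of "a" (0.6), so the item "bc" with the highest overall probability 0.36 is never retrieved. SFT alone raises overall probabilities but does not prevent this prefix-level pruning, which is the gap BEAR's beam-search-aware objective targets.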