[2602.20558] From Logs to Language: Learning Optimal Verbalization for LLM-Based Recommendation in Production

arXiv - AI · 3 min read

Summary

This paper explores a data-centric framework for optimizing verbalization in LLM-based recommendation systems, enhancing recommendation accuracy by transforming user interaction logs into effective natural language inputs.

Why It Matters

As large language models become integral to recommendation systems, improving how structured data is verbalized can significantly enhance user experience and recommendation accuracy. This research addresses a critical gap in the current methodologies, offering a novel approach that leverages reinforcement learning to optimize input representation.

Key Takeaways

  • Proposes a framework that learns optimal verbalization for recommendations.
  • Utilizes reinforcement learning to improve the transformation of interaction logs into natural language.
  • Demonstrates up to a 93% relative improvement in discovery item recommendation accuracy over template-based methods.
  • Identifies effective strategies such as noise removal and user interest summarization.
  • Provides insights into context construction for LLM-based systems.
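The contrast between rigid templates and the strategies listed above can be sketched in a few lines. This is an illustrative example only, not the paper's implementation: the field names, filtering threshold, and log format are all hypothetical.

```python
# Hypothetical sketch: rigid template-based verbalization vs. a cleaned-up
# verbalization that applies two strategies the paper reports as effective
# (noise removal and user interest summarization). All names are illustrative.

log = [
    {"title": "Stranger Things", "genre": "Sci-Fi", "watch_pct": 95},
    {"title": "Dark", "genre": "Sci-Fi", "watch_pct": 88},
    {"title": "Random Trailer", "genre": "Promo", "watch_pct": 3},  # noise
]

def template_verbalize(log):
    """Rigid template: concatenate every field of every event verbatim."""
    return " ".join(
        f"[title={e['title']} genre={e['genre']} watch_pct={e['watch_pct']}]"
        for e in log
    )

def cleaned_verbalize(log, min_watch_pct=20):
    """Drop low-engagement noise, then summarize the remaining interests."""
    kept = [e for e in log if e["watch_pct"] >= min_watch_pct]
    titles = ", ".join(e["title"] for e in kept)
    genres = ", ".join(sorted({e["genre"] for e in kept}))
    return f"The user recently watched {titles}. Main interests: {genres}."

print(template_verbalize(log))
print(cleaned_verbalize(log))
```

The cleaned variant omits the low-engagement "Random Trailer" event and compresses the history into an interest summary, which is the kind of transformation the paper's agent learns rather than hand-codes.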

Computer Science > Artificial Intelligence
arXiv:2602.20558 (cs) · Submitted on 24 Feb 2026

Title: From Logs to Language: Learning Optimal Verbalization for LLM-Based Recommendation in Production

Authors: Yucheng Shi, Ying Li, Yu Wang, Yesu Feng, Arjun Rao, Rein Houthooft, Shradha Sehgal, Jin Wang, Hao Zhen, Ninghao Liu, Linas Baltrunas

Abstract: Large language models (LLMs) are promising backbones for generative recommender systems, yet a key challenge remains underexplored: verbalization, i.e., converting structured user interaction logs into effective natural language inputs. Existing methods rely on rigid templates that simply concatenate fields, yielding suboptimal representations for recommendation. We propose a data-centric framework that learns verbalization for LLM-based recommendation. Using reinforcement learning, a verbalization agent transforms raw interaction histories into optimized textual contexts, with recommendation accuracy as the training signal. This agent learns to filter noise, incorporate relevant metadata, and reorganize information to improve downstream predictions. Experiments on a large-scale industrial streaming dataset show that learned verbalization delivers up to 93% relative improvement in discovery item recommendation accuracy over template-based base...
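The abstract's core idea, using downstream recommendation accuracy as the reward for choosing how to verbalize, can be illustrated with a minimal bandit-style loop. This is a sketch under stated assumptions, not the paper's method: the recommender is a stub, the strategy set is invented, and the paper trains a full RL agent rather than a bandit.

```python
# Illustrative only: an epsilon-greedy loop that scores candidate
# verbalization strategies by a (stubbed) downstream recommendation
# accuracy, which serves as the reward signal. Strategy names and
# accuracy values are hypothetical.
import random

random.seed(0)

STRATEGIES = ["raw_concat", "filter_noise", "summarize_interests"]

def recommend_accuracy(strategy):
    """Stub for the downstream recommender's accuracy on a batch.
    In the paper this would be a real LLM evaluated on held-out logs."""
    base = {"raw_concat": 0.30, "filter_noise": 0.45,
            "summarize_interests": 0.55}
    return base[strategy] + random.uniform(-0.05, 0.05)

values = {s: 0.0 for s in STRATEGIES}   # running mean reward per strategy
counts = {s: 0 for s in STRATEGIES}
for step in range(500):
    if random.random() < 0.1:           # explore
        s = random.choice(STRATEGIES)
    else:                               # exploit current best estimate
        s = max(values, key=values.get)
    r = recommend_accuracy(s)
    counts[s] += 1
    values[s] += (r - values[s]) / counts[s]  # incremental mean update

best = max(values, key=values.get)
print("learned best strategy:", best)
```

The loop converges on whichever strategy yields the highest downstream accuracy, mirroring the paper's use of recommendation accuracy as the training signal for the verbalization agent.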

Related Articles

What is AI, how do apps like ChatGPT work and why are there concerns?

AI is transforming modern life, but some critics worry about its potential misuse and environmental impact.

AI News - General · 7 min
[2603.29957] Think Anywhere in Code Generation

Abstract page for arXiv paper 2603.29957: Think Anywhere in Code Generation

arXiv - Machine Learning · 3 min
[2603.16880] NeuroNarrator: A Generalist EEG-to-Text Foundation Model for Clinical Interpretation via Spectro-Spatial Grounding and Temporal State-Space Reasoning

Abstract page for arXiv paper 2603.16880: NeuroNarrator: A Generalist EEG-to-Text Foundation Model for Clinical Interpretation via Spectr...

arXiv - Machine Learning · 4 min
[2512.21106] Semantic Refinement with LLMs for Graph Representations

Abstract page for arXiv paper 2512.21106: Semantic Refinement with LLMs for Graph Representations

arXiv - Machine Learning · 4 min
