[2602.23012] Sequential Regression for Continuous Value Prediction using Residual Quantization
arXiv - Machine Learning 4 min read Article

Summary

This paper introduces a residual quantization framework for continuous value prediction, improving prediction accuracy over existing generative approaches in recommendation systems.

Why It Matters

Continuous value prediction is vital for industries like e-commerce and media streaming, where accurate estimates directly affect user engagement and revenue. This research addresses limitations of existing methods, offering a scalable solution that adapts to complex data distributions.

Key Takeaways

  • Proposes a residual quantization-based framework for continuous value prediction.
  • Addresses challenges of existing generative approaches in modeling complex data distributions.
  • Demonstrates improved prediction accuracy through recursive quantization code predictions.
  • Outperforms state-of-the-art methods in extensive evaluations.
  • Shows strong generalization across various continuous value prediction tasks.

Computer Science > Information Retrieval

arXiv:2602.23012 (cs) [Submitted on 26 Feb 2026]

Title: Sequential Regression for Continuous Value Prediction using Residual Quantization

Authors: Runpeng Cui, Zhipeng Sun, Chi Lu, Peng Jiang

Abstract: Continuous value prediction plays a crucial role in industrial-scale recommendation systems, including tasks such as predicting users' watch-time and estimating the gross merchandise value (GMV) in e-commerce transactions. However, it remains challenging due to the highly complex and long-tailed nature of the data distributions. Existing generative approaches rely on rigid parametric distribution assumptions, which fundamentally limits their performance when such assumptions misalign with real-world data. Overly simplified forms cannot adequately model real-world complexities, while more intricate assumptions often suffer from poor scalability and generalization. To address these challenges, we propose a residual quantization (RQ)-based sequence learning framework that represents target continuous values as a sum of ordered quantization codes, predicted recursively from coarse to fine granularity with diminishing quantization errors. We introduce a representation learning objective that aligns RQ code embedding space with the ordinal structure of target values, a...
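The core idea in the abstract can be illustrated with a minimal sketch: a continuous target is encoded as a sequence of codebook indices, where each level quantizes the residual left by the previous level, so the reconstruction is a sum of ordered code values with shrinking error. The codebooks and step sizes below are fixed, hand-picked assumptions for illustration; the paper's framework learns them jointly with a sequence model that predicts the codes recursively.

```python
import numpy as np

def residual_quantize(value, codebooks):
    """Encode a scalar as ordered codebook indices, coarse to fine.

    Each level quantizes the residual of the previous level, so
    reconstruction error diminishes with every added code.
    """
    codes, residual = [], value
    for cb in codebooks:
        idx = int(np.argmin(np.abs(cb - residual)))  # nearest code at this level
        codes.append(idx)
        residual -= cb[idx]                          # pass the residual down
    return codes, residual

def residual_dequantize(codes, codebooks):
    """Reconstruct the value as the sum of the selected code values."""
    return sum(cb[i] for cb, i in zip(codebooks, codes))

# Toy codebooks (hypothetical): coarse steps of 10, then refinements
# of 1 and 0.1 centered around zero to absorb signed residuals.
codebooks = [
    np.arange(0.0, 100.0, 10.0),
    np.arange(-5.0, 5.0, 1.0),
    np.arange(-0.5, 0.5, 0.1),
]
codes, err = residual_quantize(37.64, codebooks)
approx = residual_dequantize(codes, codebooks)
```

With these codebooks, 37.64 is encoded as 40 at the coarse level, then corrected by -2 and -0.4, reconstructing 37.6; the leftover error is bounded by half the finest step. Predicting such code sequences coarse-to-fine is what lets the approach sidestep parametric assumptions about the target distribution.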

