[2509.23115] RHYTHM: Reasoning with Hierarchical Temporal Tokenization for Human Mobility


arXiv - Machine Learning

Summary

The paper presents RHYTHM, a framework that combines hierarchical temporal tokenization with large language models to make human mobility prediction both more accurate and more efficient.

Why It Matters

Understanding human mobility is crucial for applications such as urban planning and transportation. By explicitly modeling long-range dependencies and periodic behaviors, RHYTHM advances predictive modeling in this domain.

Key Takeaways

  • RHYTHM employs hierarchical temporal tokenization to improve human mobility predictions.
  • The framework uses large language models as spatio-temporal predictors.
  • It achieves a 2.4% improvement in accuracy and reduces training time by 24.6%.
  • The model captures both daily and weekly dependencies effectively.
  • Code for RHYTHM is publicly available, promoting transparency and collaboration.

Computer Science > Machine Learning

arXiv:2509.23115 (cs) [Submitted on 27 Sep 2025 (v1), last revised 23 Feb 2026 (this version, v3)]

Title: RHYTHM: Reasoning with Hierarchical Temporal Tokenization for Human Mobility

Authors: Haoyu He, Haozheng Luo, Yan Chen, Qi R. Wang

Abstract: Predicting human mobility is inherently challenging due to complex long-range dependencies and multi-scale periodic behaviors. To address this, we introduce RHYTHM (Reasoning with Hierarchical Temporal Tokenization for Human Mobility), a unified framework that leverages large language models (LLMs) as general-purpose spatio-temporal predictors and trajectory reasoners. Methodologically, RHYTHM employs temporal tokenization to partition each trajectory into daily segments and encode them as discrete tokens with hierarchical attention that captures both daily and weekly dependencies, thereby shortening the sequence (and so quadratically reducing attention cost) while preserving cyclical information. Additionally, we enrich token representations by adding pre-computed prompt embeddings for trajectory segments and prediction targets via a frozen LLM, and feeding these combined embeddings back into the LLM backbone to capture complex interdependencies. Computationally, RHYTHM keeps the pretrained LLM backbone frozen, yielding faster training and lower memory usage...
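The core idea of the tokenization step can be illustrated with a minimal sketch: partition a step-level trajectory into daily segments and collapse each segment into one token, so downstream attention operates over days rather than individual steps. This is an assumption-laden toy (mean pooling stands in for the paper's learned segment encoder, and the function name `daily_tokenize` is invented for illustration), not RHYTHM's actual implementation.

```python
def daily_tokenize(trajectory, steps_per_day=24):
    """Partition a step-level trajectory (list of feature tuples) into
    daily segments and pool each into a single token.

    Mean pooling is a stand-in for the paper's learned segment encoder."""
    if len(trajectory) % steps_per_day:
        raise ValueError("trajectory must cover whole days")
    tokens = []
    for start in range(0, len(trajectory), steps_per_day):
        segment = trajectory[start:start + steps_per_day]
        dim = len(segment[0])
        tokens.append(tuple(
            sum(point[i] for point in segment) / steps_per_day
            for i in range(dim)
        ))
    return tokens

# A week of hourly (x, y) location features: 168 steps -> 7 daily tokens.
week = [(float(h), float(h % 24)) for h in range(168)]
tokens = daily_tokenize(week)
print(len(tokens))  # 7
```

Since self-attention cost grows with the square of sequence length, shortening a week from 168 hourly steps to 7 daily tokens cuts that cost by a factor of (168/7)^2 = 576 at the daily level, which is the efficiency lever the abstract alludes to.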
