[2602.23663] Disentangled Mode-Specific Representations for Tensor Time Series via Contrastive Learning

arXiv - Machine Learning

About this article

Computer Science > Machine Learning, arXiv:2602.23663 (cs). Submitted on 27 Feb 2026.

Title: Disentangled Mode-Specific Representations for Tensor Time Series via Contrastive Learning

Authors: Kohei Obata, Taichi Murayama, Zheng Chen, Yasuko Matsubara, Yasushi Sakurai

Abstract: Multi-mode tensor time series (TTS) arise in many domains, such as search engines and environmental monitoring systems. Learning representations of a TTS benefits various applications, but it is also challenging, since the complexities inherent in the tensor hinder the learning of rich representations. In this paper, we propose MoST, a novel representation learning method designed specifically for TTS. MoST uses a tensor slicing approach to reduce the complexity of the TTS structure and learns representations that can be disentangled into individual non-temporal modes. Each representation captures mode-specific features, i.e., the relationships between variables within the same mode, and mode-invariant features, which are shared across the representations of different modes. We employ a contrastive learning framework to learn the parameters; the loss function comprises two parts, intended to learn representations in a mode-specific and a mode-invariant way, effectively exploiting the disentangled rep...
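The two ideas the abstract describes, slicing the tensor along each non-temporal mode and training with a two-part (mode-specific plus mode-invariant) contrastive loss, can be sketched roughly as below. This is a minimal illustration only: the placeholder encoder, the noise augmentation, and the simplified InfoNCE term are assumptions for the sketch, not MoST's actual architecture or loss.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 3-mode tensor time series: T time steps x N1 x N2 variables.
T, N1, N2 = 50, 4, 3
X = rng.normal(size=(T, N1, N2))

def mode_slices(X, mode):
    """Slice along one non-temporal mode (axis 1 or 2); each slice is a
    matrix time series of shape (T, size of the other mode)."""
    return [np.take(X, i, axis=mode) for i in range(X.shape[mode])]

def encode(slice_ts, W):
    """Stand-in encoder: average over time, then a fixed projection.
    (MoST's real encoder is learned; this is only a placeholder.)"""
    return np.tanh(slice_ts.mean(axis=0) @ W)

d = 8
W1 = rng.normal(size=(N2, d))  # projection for mode-1 slices
W2 = rng.normal(size=(N1, d))  # projection for mode-2 slices

def embed(X):
    Z1 = np.stack([encode(s, W1) for s in mode_slices(X, 1)])  # (N1, d)
    Z2 = np.stack([encode(s, W2) for s in mode_slices(X, 2)])  # (N2, d)
    return Z1, Z2

def info_nce(A, B, temperature=0.5):
    """Simplified InfoNCE: matching rows of A and B are positive pairs,
    all other rows of B serve as negatives."""
    sim = A @ B.T / temperature
    log_prob = sim.diagonal() - np.log(np.exp(sim).sum(axis=1))
    return -log_prob.mean()

# Two noise-augmented views give a positive pair per slice.
Z1a, Z2a = embed(X + 0.05 * rng.normal(size=X.shape))
Z1b, Z2b = embed(X + 0.05 * rng.normal(size=X.shape))

# Mode-specific term: contrast slice representations within each mode.
specific = info_nce(Z1a, Z1b) + info_nce(Z2a, Z2b)

# Mode-invariant term: pull the two modes' mean representations together
# (1 - cosine similarity, so 0 means perfectly aligned).
m1, m2 = Z1a.mean(axis=0), Z2a.mean(axis=0)
invariant = 1.0 - (m1 @ m2) / (np.linalg.norm(m1) * np.linalg.norm(m2))

loss = specific + invariant
```

The key point the sketch captures is that each non-temporal mode gets its own set of slice-level representations, and the total loss combines a within-mode contrastive term with a cross-mode alignment term.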

Originally published on March 02, 2026. Curated by AI News.

Related Articles

[2603.13294] Real-World AI Evaluation: How FRAME Generates Systematic Evidence to Resolve the Decision-Maker's Dilemma
Machine Learning · arXiv - AI

[2603.12564] AgentDrift: Unsafe Recommendation Drift Under Tool Corruption Hidden by Ranking Metrics in LLM Agents
LLMs · arXiv - AI

[2602.00665] Can Small Language Models Handle Context-Summarized Multi-Turn Customer-Service QA? A Synthetic Data-Driven Comparative Evaluation
LLMs · arXiv - AI

[2512.12812] Does Tone Change the Answer? Evaluating Prompt Politeness Effects on Modern LLMs: GPT, Gemini, and LLaMA
LLMs · arXiv - AI

