[2602.18131] Learning Long-Range Dependencies with Temporal Predictive Coding

arXiv - Machine Learning

Summary

The paper presents a novel method combining Temporal Predictive Coding with Real-Time Recurrent Learning to effectively learn long-range dependencies in neural networks, achieving competitive performance with reduced energy consumption.

Why It Matters

This research addresses the difficulty of training recurrent neural networks on tasks requiring long-range temporal dependencies, offering a more energy-efficient alternative to Backpropagation Through Time. Because the method relies on local, parallelizable operations, it is a candidate for neuromorphic hardware and, more broadly, for more sustainable AI systems.

Key Takeaways

  • Introduces a method that combines Temporal Predictive Coding with Real-Time Recurrent Learning.
  • Achieves performance comparable to Backpropagation Through Time while being more energy-efficient.
  • Demonstrates effectiveness on both synthetic benchmarks and real-world tasks, including machine translation.
  • Paves the way for more sustainable AI systems by utilizing local and parallelizable operations.
  • Highlights the potential for broader applications of Temporal Predictive Coding in machine learning.
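To make the "local and parallelizable operations" point concrete, here is a minimal sketch of the generic temporal predictive coding recipe for a linear latent model: a recurrent prediction of the latent state, an observation prediction, and weight updates that are pure outer products of local prediction errors. This is an illustration of the tPC family, not the paper's model; all sizes, learning rates, and variable names (`Wr`, `Wo`, `tpc_step`) are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
dz, dx = 3, 5                                # toy latent / observation sizes
Wr = rng.normal(scale=0.3, size=(dz, dz))    # temporal (recurrent) prediction weights
Wo = rng.normal(scale=0.3, size=(dx, dz))    # observation prediction weights

def tpc_step(x, z_prev, n_iters=20, lr_z=0.1, lr_w=0.01):
    """One time step: settle the latent state, then do local weight updates."""
    z = Wr @ z_prev                          # initialise at the temporal prediction
    for _ in range(n_iters):                 # inference: gradient descent on the energy
        eps_z = z - Wr @ z_prev              # temporal prediction error
        eps_x = x - Wo @ z                   # observation prediction error
        z += lr_z * (Wo.T @ eps_x - eps_z)   # descend 0.5*(||eps_z||^2 + ||eps_x||^2)
    # learning: purely local, Hebbian-style outer products of errors and activities
    Wr_new = Wr + lr_w * np.outer(z - Wr @ z_prev, z_prev)
    Wo_new = Wo + lr_w * np.outer(x - Wo @ z, z)
    return z, Wr_new, Wo_new

z = np.zeros(dz)
for x in rng.normal(size=(10, dx)):          # random observations, illustration only
    z, Wr, Wo = tpc_step(x, z)
```

Note that each weight update touches only quantities available at that synapse's two endpoints (an error and an activity), which is what makes this style of learning rule amenable to parallel, energy-efficient hardware.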

Computer Science > Machine Learning
arXiv:2602.18131 (cs) [Submitted on 20 Feb 2026]
Title: Learning Long-Range Dependencies with Temporal Predictive Coding
Authors: Tom Potter, Oliver Rhodes

Abstract: Predictive Coding (PC) is a biologically-inspired learning framework characterised by local, parallelisable operations, properties that enable energy-efficient implementation on neuromorphic hardware. Despite this, extending PC effectively to recurrent neural networks (RNNs) has been challenging, particularly for tasks involving long-range temporal dependencies. Backpropagation Through Time (BPTT) remains the dominant method for training RNNs, but its non-local computation, lack of spatial parallelism, and requirement to store extensive activation histories result in significant energy consumption. This work introduces a novel method combining Temporal Predictive Coding (tPC) with approximate Real-Time Recurrent Learning (RTRL), enabling effective spatio-temporal credit assignment. Results indicate that the proposed method can closely match the performance of BPTT on both synthetic benchmarks and real-world tasks. On a challenging machine translation task, with a 15-million parameter model, the proposed method achieves a test perplexity of 7.62 (vs. 7.49 for BPTT), marking one of the first applications of tPC to tasks o...
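For context on the RTRL side of the abstract: unlike BPTT, RTRL assigns temporal credit online by carrying forward a sensitivity tensor (the derivative of the hidden state with respect to the weights), so no activation history needs to be stored. The sketch below shows exact RTRL for a tiny vanilla RNN; the paper uses an approximate variant (exact RTRL's O(n^3) sensitivity tensor is the cost being approximated), and all names and hyperparameters here are illustrative, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 3                                   # toy hidden / input sizes
W = rng.normal(scale=0.5, size=(n, n))        # recurrent weights
U = rng.normal(scale=0.5, size=(n, m))        # input weights

h = np.zeros(n)
S = np.zeros((n, n, n))                       # S[i, j, k] = d h[i] / d W[j, k]

def step(x, h_prev, S_prev):
    """One RNN step plus the RTRL recursion for the sensitivity tensor."""
    a = W @ h_prev + U @ x
    h_new = np.tanh(a)
    d = 1.0 - h_new ** 2                      # tanh'(a)
    # Immediate term: d a[i]/d W[j, k] = delta_ij * h_prev[k]
    imm = np.zeros((n, n, n))
    for j in range(n):
        imm[j, j, :] = h_prev                 # only row j of W feeds unit j
    # Carried term: W propagates the old sensitivities one step forward
    carried = np.einsum('im,mjk->ijk', W, S_prev)
    S_new = d[:, None, None] * (imm + carried)
    return h_new, S_new

# Online gradient for a per-step loss L_t = 0.5 * ||h_t - y_t||^2
xs = rng.normal(size=(5, m))
ys = rng.normal(size=(5, n))
for x, y in zip(xs, ys):
    h, S = step(x, h, S)
    err = h - y                               # dL/dh at this step
    grad_W = np.einsum('i,ijk->jk', err, S)   # online gradient w.r.t. W
    W -= 0.01 * grad_W                        # SGD update, no stored history
```

The n x n x n tensor `S` is why exact RTRL is impractical for large networks, and why the paper pairs tPC with an *approximate* RTRL scheme to keep credit assignment both local and affordable.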
