[2602.22259] Orthogonal Weight Modification Enhances Learning Scalability and Convergence Efficiency without Gradient Backpropagation

arXiv · Machine Learning · 3 min read

Summary

The paper presents LOw-rank Cluster Orthogonal (LOCO) weight modification, a perturbation-based learning method that improves scalability and convergence efficiency without relying on backpropagation, addressing efficiency and scalability challenges in neuromorphic systems.

Why It Matters

As the demand for efficient learning algorithms grows, especially in neuromorphic computing, LOCO offers a promising alternative to traditional backpropagation. Its O(1) parallel-time weight updates and improved task performance could benefit real-time and lifelong-learning applications.

Key Takeaways

  • LOCO improves convergence efficiency and scalability, locally training spiking neural networks of more than 10 layers.
  • Weight updates run in O(1) parallel time, avoiding backpropagation's layer-by-layer backward pass.
  • The method shows strong continual learning ability and better task performance than other brain-inspired non-BP algorithms.
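
The O(1) parallel-time claim rests on the fact that a perturbation-based weight update is a rank-1 outer product: each weight entry depends only on one local error signal and one local input, so all entries can be updated simultaneously, with no backward sweep through the layers. A minimal sketch of this locality (the variable names and learning rate are invented, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

n_in, n_out = 6, 3
W = np.zeros((n_out, n_in))
x = rng.normal(size=n_in)       # layer input
delta = rng.normal(size=n_out)  # per-node error signal (e.g. an NP estimate)
lr = 0.1

# Each entry W[i, j] changes by -lr * delta[i] * x[j]: a rank-1 outer
# product. Every entry is independent of the others, so the whole update
# has O(1) parallel depth, unlike BP's sequential backward pass.
W_new = W - lr * np.outer(delta, x)

# Elementwise check of the locality property.
assert np.allclose(W_new, W - lr * delta[:, None] * x[None, :])
```

By contrast, backpropagation must propagate errors through each layer in sequence, giving a parallel depth that grows with network depth.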

Computer Science > Machine Learning

arXiv:2602.22259 (cs) · Submitted on 25 Feb 2026

Title: Orthogonal Weight Modification Enhances Learning Scalability and Convergence Efficiency without Gradient Backpropagation

Authors: Guoqing Ma, Shan Yu

Abstract: Recognizing the substantial computational cost of backpropagation (BP), non-BP methods have emerged as attractive alternatives for efficient learning on emerging neuromorphic systems. However, existing non-BP approaches still face critical challenges in efficiency and scalability. Inspired by neural representations and dynamic mechanisms in the brain, we propose a perturbation-based approach called LOw-rank Cluster Orthogonal (LOCO) weight modification. We find that low-rank structure is an inherent property of perturbation-based algorithms. Under this condition, the orthogonality constraint limits the variance of the node-perturbation (NP) gradient estimates and enhances convergence efficiency. Through extensive evaluations on multiple datasets, LOCO demonstrates the capability to locally train the deepest spiking neural networks to date (more than 10 layers), while exhibiting strong continual learning ability, improved convergence efficiency, and better task performance compared to other brain-inspired non-BP algorithms. …
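
The abstract's central mechanism, using mutually orthogonal perturbation directions to reduce the variance of node-perturbation gradient estimates, can be illustrated on a toy linear layer. The following is a hypothetical sketch of generic node perturbation with an orthogonalized perturbation cluster, not the authors' LOCO implementation; the network, loss, and hyperparameters are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear layer y = W @ x with squared-error loss, trained by node
# perturbation (NP): perturb pre-activations, not weights.
n_in, n_out = 8, 4
W = rng.normal(scale=0.1, size=(n_out, n_in))
x = rng.normal(size=n_in)
t = rng.normal(size=n_out)
sigma, lr = 1e-3, 0.02

def loss_at(y):
    return float(np.sum((y - t) ** 2))

losses = []
for step in range(150):
    y = W @ x
    base = loss_at(y)
    losses.append(base)
    # A cluster of mutually orthogonal perturbation directions (QR of a
    # Gaussian matrix). Orthogonality makes the directional probes
    # non-redundant, lowering the variance of the NP gradient estimate.
    Q, _ = np.linalg.qr(rng.normal(size=(n_out, n_out)))
    g_nodes = np.zeros(n_out)
    for k in range(n_out):
        xi = Q[:, k]
        dl = (loss_at(y + sigma * xi) - base) / sigma
        g_nodes += dl * xi  # directional derivative times its direction
    # Rank-1 (low-rank) weight update from the node-level estimate.
    W -= lr * np.outer(g_nodes, x)
```

With a full orthonormal cluster the probes span the pre-activation space, so the estimate approaches the true gradient as sigma shrinks; independent Gaussian perturbations would instead overlap and inflate the estimator's variance.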

