[2602.22259] Orthogonal Weight Modification Enhances Learning Scalability and Convergence Efficiency without Gradient Backpropagation
Summary
The paper presents LOCO (LOw-rank Cluster Orthogonal weight modification), a perturbation-based learning method that enhances learning scalability and convergence efficiency without relying on backpropagation, addressing critical challenges in neuromorphic systems.
Why It Matters
As the demand for efficient learning algorithms grows, especially in neuromorphic computing, LOCO offers a promising alternative to traditional backpropagation. Its O(1) parallel-time weight updates and improved task performance could significantly benefit real-time and lifelong learning applications.
Key Takeaways
- LOCO method improves convergence efficiency and scalability in neural networks.
- It operates with O(1) parallel time complexity for weight updates, outperforming traditional backpropagation.
- The method shows strong continual learning capabilities and enhanced task performance.
Computer Science > Machine Learning
arXiv:2602.22259 (cs) [Submitted on 25 Feb 2026]
Authors: Guoqing Ma, Shan Yu
Abstract: Recognizing the substantial computational cost of backpropagation (BP), non-BP methods have emerged as attractive alternatives for efficient learning on emerging neuromorphic systems. However, existing non-BP approaches still face critical challenges in efficiency and scalability. Inspired by neural representations and dynamic mechanisms in the brain, we propose a perturbation-based approach called LOw-rank Cluster Orthogonal (LOCO) weight modification. We find that low-rank is an inherent property of perturbation-based algorithms. Under this condition, the orthogonality constraint limits the variance of the node perturbation (NP) gradient estimates and enhances the convergence efficiency. Through extensive evaluations on multiple datasets, LOCO demonstrates the capability to locally train the deepest spiking neural networks to date (more than 10 layers), while exhibiting strong continual learning ability, improved convergence efficiency, and better task performance compared to other brain-inspired non-BP algorithms....
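The core mechanism referenced in the abstract, node perturbation (NP) with orthogonal perturbation directions, can be illustrated with a minimal NumPy sketch. This is not the authors' LOCO algorithm (the low-rank clustered structure and spiking dynamics are omitted); the single linear layer, quadratic loss, and all variable names are illustrative assumptions. The sketch estimates the gradient of a layer's loss from forward passes alone, perturbing the layer's output nodes along orthonormal directions, then applies a local outer-product weight update with no backward pass.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 8, 4
W = rng.normal(scale=0.1, size=(n_out, n_in))   # layer weights (hypothetical toy setup)
x = rng.normal(size=n_in)                       # input to the layer
target = rng.normal(size=n_out)                 # desired output

def layer_loss(a):
    """Quadratic loss on the layer's output activations."""
    return 0.5 * np.sum((a - target) ** 2)

# Orthonormal perturbation directions via QR factorization, mirroring the
# paper's observation that orthogonality limits the variance of NP gradient
# estimates (random i.i.d. directions would be noisier).
Q, _ = np.linalg.qr(rng.normal(size=(n_out, n_out)))

sigma = 1e-3                  # perturbation magnitude
a = W @ x                     # forward pass
base = layer_loss(a)

# Estimate the gradient w.r.t. the node activations from loss differences:
# each perturbed forward pass contributes (dL / sigma) along its direction.
node_grad = np.zeros(n_out)
for k in range(n_out):
    dL = layer_loss(a + sigma * Q[:, k]) - base
    node_grad += (dL / sigma) * Q[:, k]

# Local, backprop-free weight update: outer product of the estimated node
# gradient and the layer input. The step size is normalized by ||x||^2 so
# the toy example provably reduces the loss.
lr = 0.5 / np.sum(x ** 2)
W_new = W - lr * np.outer(node_grad, x)
```

Because each perturbed forward pass is independent, the `n_out` loss evaluations (and the resulting update) can run fully in parallel, which is the intuition behind the O(1) parallel-time claim; the perturbation directions here are orthonormal by construction (`Q.T @ Q ≈ I`).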