[2602.21965] Compact Circulant Layers with Spectral Priors


Summary

This paper studies compact circulant and block-circulant (BCCB) layers whose weights are parameterized directly in the frequency domain, with spectral priors, targeting memory-efficient, uncertainty-aware neural networks for resource-constrained environments.

Why It Matters

As machine learning expands into areas like medicine, robotics, and autonomous systems, neural networks that are both compact and uncertainty-aware become critical for edge deployments. By moving circulant weights into the frequency domain, this work reduces parameter counts, enables variational inference in a low-dimensional weight space, and yields inexpensive robustness diagnostics, making it relevant for developers and researchers in the field.

Key Takeaways

  • Compact spectral circulant and BCCB layers significantly reduce the number of parameters by storing weights as RFFT coefficients rather than dense matrices.
  • The frequency-domain parameterization enables structured variational inference and exact layer spectral norms, which in turn give inexpensive Lipschitz bounds and margin-based robustness diagnostics.
  • Empirical results show that these layers perform comparably to strong baselines while being more resource-efficient.
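The parameter savings in the first takeaway can be sketched with a minimal NumPy example. This is an illustrative reconstruction, not the authors' implementation: a circulant layer is diagonal in the Fourier basis, so a full n-by-n matrix-vector product collapses to an elementwise product over n//2 + 1 RFFT coefficients.

```python
import numpy as np

def circulant_apply(w_rfft, x):
    """Apply y = C(w) @ x for a circulant matrix C via the RFFT.

    w_rfft: complex RFFT coefficients of the filter (length n//2 + 1).
    x: real input vector of length n.
    Circulant matrices are diagonalized by the DFT, so the layer needs
    only n real parameters instead of n**2, at O(n log n) cost.
    """
    n = x.shape[-1]
    return np.fft.irfft(w_rfft * np.fft.rfft(x), n=n)

rng = np.random.default_rng(0)
n = 8
w = rng.standard_normal(n)   # filter (first column of the circulant matrix)
w_rfft = np.fft.rfft(w)      # frequency-domain parameters
x = rng.standard_normal(n)

# Reference: the explicit circulant matrix, column j = w rolled by j,
# so C @ x is the circular convolution of w with x.
C = np.stack([np.roll(w, j) for j in range(n)], axis=1)
assert np.allclose(circulant_apply(w_rfft, x), C @ x)
```

Note that the RFFT half-spectrum is exactly the "Hermitian support" the paper places its priors on: the remaining coefficients are determined by conjugate symmetry because the filter is real.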

Computer Science > Machine Learning · arXiv:2602.21965 (cs) · Submitted on 25 Feb 2026

Title: Compact Circulant Layers with Spectral Priors
Authors: Joseph Margaryan, Thomas Hamelryck

Abstract: Critical applications in areas such as medicine, robotics and autonomous systems require compact (i.e., memory-efficient), uncertainty-aware neural networks suitable for edge and other resource-constrained deployments. We study compact spectral circulant and block-circulant-with-circulant-blocks (BCCB) layers: FFT-diagonalizable circular convolutions whose weights live directly in the real FFT (RFFT) half (1D) or half-plane (2D). Parameterizing filters in the frequency domain lets us impose simple spectral structure, perform structured variational inference in a low-dimensional weight space, and calculate exact layer spectral norms, enabling inexpensive global Lipschitz bounds and margin-based robustness diagnostics. By placing independent complex Gaussians on the Hermitian support we obtain a discrete instance of the spectral representation of stationary kernels, inducing an exact stationary Gaussian-process prior over filters on the discrete circle/torus. We exploit this to define a practical spectral prior and a Hermitian-aware low-rank-plus-diagonal variational posterior in real coordinates. Empirically, spectral circulant/BCCB layers are...
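Two claims in the abstract admit short numerical sketches: the exact spectral norm (the eigenvalues of a circulant matrix are the DFT of its filter, so the operator norm is the largest coefficient magnitude) and the stationary Gaussian-process prior induced by independent complex Gaussians on the RFFT support. The spectral density `s` below is a hypothetical choice for illustration, not the paper's prior:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64

# --- Exact spectral norm of a circulant layer -----------------------------
# Eigenvalues of a circulant matrix are the DFT of its first column, so
# ||C||_2 = max_k |FFT(w)[k]| is available in O(n log n). Multiplying these
# norms across layers gives the cheap global Lipschitz bound mentioned above.
w = rng.standard_normal(n)
spec_norm = np.abs(np.fft.fft(w)).max()
C = np.stack([np.roll(w, j) for j in range(n)], axis=1)
assert np.isclose(spec_norm, np.linalg.norm(C, ord=2))

# --- Stationary GP prior via the spectral representation ------------------
# Independent complex Gaussians on the RFFT (Hermitian) support, scaled by a
# per-frequency variance s[k], induce a stationary Gaussian process over
# filters on the discrete circle: the covariance depends only on the lag.
s = 1.0 / (1.0 + np.arange(n // 2 + 1)) ** 2   # hypothetical spectral density
coeffs = (rng.standard_normal((4096, n // 2 + 1))
          + 1j * rng.standard_normal((4096, n // 2 + 1))) * np.sqrt(s)
samples = np.fft.irfft(coeffs, n=n)            # 4096 filter draws from the prior
cov = samples.T @ samples / samples.shape[0]   # empirical covariance

# Stationarity check: the marginal variance is (up to Monte Carlo noise)
# constant across positions on the circle.
assert np.ptp(np.diag(cov)) < 0.5 * np.diag(cov).mean()
```

Choosing how `s` decays with frequency is what the phrase "spectral prior" refers to here: a faster decay concentrates prior mass on smoother filters, mirroring the spectral representation of stationary kernels in the continuous setting.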
