[2411.05183] Why CNN Features Are not Gaussian: A Statistical Anatomy of Deep Representations

arXiv - Machine Learning

Computer Science > Computer Vision and Pattern Recognition · arXiv:2411.05183 (cs)
[Submitted on 7 Nov 2024 (v1), last revised 6 Apr 2026 (this version, v4)]

Title: Why CNN Features Are not Gaussian: A Statistical Anatomy of Deep Representations
Authors: David Chapman, Parniyan Farvardin

Abstract: Deep convolutional neural networks (CNNs) are commonly analyzed through geometric and linear-algebraic perspectives, yet the statistical distribution of their internal feature activations remains poorly understood. In many applications, deep features are implicitly treated as Gaussian when modeling densities. In this work, we empirically examine this assumption and show that it does not accurately describe the distribution of CNN feature activations. Through a systematic study across multiple architectures and datasets, we find that feature activations deviate substantially from Gaussian and are better characterized by Weibull and related long-tailed distributions. We further introduce a novel Discretized Characteristic Function Copula (DCF-Copula) method to model multivariate feature dependencies. We find that tail length increases with network depth and that upper-tail dependence emerges between feature pairs. These statistical findings are not consistent with the Central Limit Theorem, and are instead ...
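
The abstract does not spell out the experimental protocol, so the following is a minimal illustrative sketch rather than the authors' method: it assumes a pretrained torchvision ResNet-18 and SciPy, extracts post-ReLU activations from one intermediate layer, and compares Gaussian versus Weibull fits using Kolmogorov-Smirnov statistics. The layer choice, input batch, and goodness-of-fit measure are all assumptions; the paper's DCF-Copula model of multivariate dependence is not reproduced here.

```python
# Illustrative sketch (assumptions, not the paper's protocol): compare Gaussian vs.
# Weibull fits to CNN feature activations using a pretrained torchvision ResNet-18.
import numpy as np
import torch
import torchvision.models as models
from scipy import stats

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

# Capture activations from one intermediate stage via a forward hook.
acts = []
model.layer3.register_forward_hook(lambda m, i, out: acts.append(out.detach().flatten()))

with torch.no_grad():
    # Placeholder batch; in practice, use real dataset images with proper normalization.
    model(torch.randn(16, 3, 224, 224))

x = torch.cat(acts).numpy()
x = x[x > 0]  # post-ReLU activations; keep strictly positive values for the Weibull fit

# Subsample for a quick goodness-of-fit comparison.
rng = np.random.default_rng(0)
x = rng.choice(x, size=min(50_000, x.size), replace=False)

# Fit both candidate distributions and compare Kolmogorov-Smirnov statistics
# (a lower KS statistic means a closer fit to the empirical distribution).
mu, sigma = stats.norm.fit(x)
c, loc, scale = stats.weibull_min.fit(x, floc=0)
ks_gauss = stats.kstest(x, "norm", args=(mu, sigma)).statistic
ks_weib = stats.kstest(x, "weibull_min", args=(c, loc, scale)).statistic
print(f"KS Gaussian: {ks_gauss:.4f}   KS Weibull: {ks_weib:.4f}")
```

Under the paper's claim, the Weibull fit would be expected to yield the lower KS statistic on real images, with the gap widening at deeper layers; this sketch only illustrates the univariate comparison, not the copula-based dependence analysis.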

Originally published on April 08, 2026. Curated by AI News.

Related Articles

[2602.06869] Uncovering Cross-Objective Interference in Multi-Objective Alignment · LLMs · arXiv - Machine Learning
[2604.07401] Geometric Entropy and Retrieval Phase Transitions in Continuous Thermal Dense Associative Memory · Machine Learning · arXiv - Machine Learning
[2512.14954] Cross-Tokenizer Likelihood Scoring Algorithms for Language Model Distillation · LLMs · arXiv - Machine Learning
[2507.12768] AnyPos: Automated Task-Agnostic Actions for Bimanual Manipulation · Machine Learning · arXiv - Machine Learning