[2509.21628] Comparing and Integrating Different Notions of Representational Correspondence in Neural Systems


arXiv - AI · 4 min read

Summary

This article covers a paper that evaluates a suite of representational similarity metrics and their integration, assessing how well each recovers known structure in both artificial vision models and biological neural recordings.

Why It Matters

Understanding representational correspondence is central to neuroscience and machine learning because it clarifies whether different neural systems process information in equivalent ways. This study advances the field by benchmarking individual similarity metrics against known structure and by providing a method to integrate multiple metrics, sharpening our picture of neural representations.

Key Takeaways

  • Different representational similarity metrics highlight distinct aspects of neural correspondence.
  • Metrics preserving representational geometry yield better separation of model families and neural data.
  • Integrating multiple metrics through Similarity Network Fusion enhances clarity in hierarchical organization of neural data.
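
The second takeaway refers to metrics that preserve representational geometry. Linear centered kernel alignment (CKA) is one widely used metric of this kind; note that this is an illustrative example, since the summary does not list the paper's exact metric suite. A minimal sketch:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two response matrices (stimuli x units).

    Invariant to orthogonal rotations and isotropic scaling of either
    representation, so it compares representational geometry rather
    than individual unit tunings.
    """
    # Center each unit's responses across stimuli
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # HSIC-style cross-covariance term, normalized by self-similarities
    hsic = np.linalg.norm(Y.T @ X, "fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, "fro")
    norm_y = np.linalg.norm(Y.T @ Y, "fro")
    return hsic / (norm_x * norm_y)
```

Because of the rotation invariance, a representation and any orthogonally rotated copy of it score a CKA of exactly 1, while unrelated representations score near 0.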

Computer Science > Computer Vision and Pattern Recognition
arXiv:2509.21628 (cs)
[Submitted on 25 Sep 2025 (v1), last revised 21 Feb 2026 (this version, v4)]

Title: Comparing and Integrating Different Notions of Representational Correspondence in Neural Systems
Authors: Jialin Wu, Shreya Saha, Yiqing Bo, Meenakshi Khosla

Abstract: The extent to which different biological and artificial neural systems rely on equivalent internal representations to support similar tasks remains a central question in neuroscience and machine learning. Prior work typically compares systems using a single representational similarity metric, even though different metrics emphasize distinct facets of representational correspondence. Here we evaluate a suite of representational similarity measures by asking how well each metric recovers known structure across two domains: for artificial models, whether procedurally dissimilar models (differing in architecture or training paradigm) are assigned lower similarity than procedurally matched models; and for neural data, whether responses from distinct cortical regions are separated while responses from the same region align across subjects. Across both vision models and neural recordings, metrics that preserve representational geometry or tuning structure more reliably...
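
The third takeaway mentions Similarity Network Fusion (SNF), which combines several per-metric similarity matrices into one fused network via cross-diffusion. The following is a minimal sketch of that idea, not the paper's implementation; the kernel construction and parameter choices here are simplifying assumptions.

```python
import numpy as np

def snf(affinities, k=2, iters=10):
    """Minimal Similarity Network Fusion sketch for two or more views.

    Each view is an n x n symmetric affinity matrix (e.g. one per
    similarity metric). Views are iteratively diffused through each
    other's k-nearest-neighbor graphs, then averaged.
    """
    views = [np.asarray(W, dtype=float).copy() for W in affinities]
    n = views[0].shape[0]
    for W in views:
        np.fill_diagonal(W, 0.0)
    # Full transition kernels P: off-diagonal mass 1/2, self-weight 1/2
    P = []
    for W in views:
        p = W / (2.0 * W.sum(axis=1, keepdims=True))
        np.fill_diagonal(p, 0.5)
        P.append(p)
    # Sparse local kernels S: row-normalized k-nearest-neighbor graphs
    S = []
    for W in views:
        s = np.zeros_like(W)
        for i in range(n):
            nn = np.argsort(W[i])[-k:]
            s[i, nn] = W[i, nn]
        S.append(s / s.sum(axis=1, keepdims=True))
    # Cross-diffusion: each view diffuses the average of the other views
    for _ in range(iters):
        P = [
            S[v] @ (sum(P[u] for u in range(len(P)) if u != v) / (len(P) - 1)) @ S[v].T
            for v in range(len(P))
        ]
    fused = sum(P) / len(P)
    return (fused + fused.T) / 2.0
```

The appeal for the hierarchical-organization takeaway is that structure supported by several metrics is reinforced during diffusion, while structure present in only one noisy metric is damped.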

Related Articles

UMKC Announces New Master of Science in Artificial Intelligence
UMKC announces a new Master of Science in Artificial Intelligence program aimed at addressing workforce demand for AI expertise, set to l...
AI News - General · 4 min

Improving AI models' ability to explain their predictions
AI News - General · 9 min

[2603.23899] SM-Net: Learning a Continuous Spectral Manifold from Multiple Stellar Libraries
Abstract page for arXiv paper 2603.23899: SM-Net: Learning a Continuous Spectral Manifold from Multiple Stellar Libraries
arXiv - AI · 4 min

[2603.16629] MLLM-based Textual Explanations for Face Comparison
Abstract page for arXiv paper 2603.16629: MLLM-based Textual Explanations for Face Comparison
arXiv - AI · 4 min

