[2602.19331] Partial Soft-Matching Distance for Neural Representational Comparison with Partial Unit Correspondence

arXiv - Machine Learning · 4 min read

Summary

The paper introduces a partial soft-matching distance for comparing neural representations, which allows some units to remain unmatched and thereby improves robustness to noise and outliers.

Why It Matters

This research addresses limitations in existing representational similarity metrics by allowing for unmatched neurons, which improves alignment precision and reduces the impact of noise in neural data analysis. It has significant implications for neuroscience and machine learning applications, particularly in improving model interpretability and reliability.

Key Takeaways

  • Introduces a Partial Soft-Matching Distance for neural comparisons.
  • Enhances robustness against noise and outliers in neural data.
  • Allows for unmatched neurons, improving alignment precision.
  • Demonstrates practical benefits in fMRI data analysis.
  • Enables targeted analyses of match quality in deep networks.

Computer Science > Machine Learning
arXiv:2602.19331 (cs) [Submitted on 22 Feb 2026]

Title: Partial Soft-Matching Distance for Neural Representational Comparison with Partial Unit Correspondence
Authors: Chaitanya Kapoor, Alex H. Williams, Meenakshi Khosla

Abstract: Representational similarity metrics typically force all units to be matched, making them susceptible to noise and outliers common in neural representations. We extend the soft-matching distance to a partial optimal transport setting that allows some neurons to remain unmatched, yielding rotation-sensitive but robust correspondences. This partial soft-matching distance provides theoretical advantages -- relaxing strict mass conservation while maintaining interpretable transport costs -- and practical benefits through efficient neuron ranking in terms of cross-network alignment without costly iterative recomputation. In simulations, it preserves correct matches under outliers and reliably selects the correct model in noise-corrupted identification tasks. On fMRI data, it automatically excludes low-reliability voxels and produces voxel rankings by alignment quality that closely match computationally expensive brute-force approaches. It achieves higher alignment precision across homologous brain areas than standard so...
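The abstract describes the construction only at a high level, so below is a minimal, self-contained sketch of the partial-optimal-transport idea it refers to: a transport plan between units whose row and column budgets are upper bounds, and whose total transported mass is fixed below the full mass, so poorly aligned units can simply stay unmatched. This is not the authors' implementation; the cost matrix (one minus absolute unit-wise correlation), the uniform marginals, the chosen mass fraction, and the helper name `partial_ot_plan` are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog


def partial_ot_plan(C, a, b, mass):
    """Solve a small partial optimal-transport LP:
        min <C, P>  s.t.  P @ 1 <= a,  P.T @ 1 <= b,  P.sum() == mass,  P >= 0.
    Unlike balanced OT, only `mass` units of total mass must be transported,
    so rows/columns with no good counterpart can remain unmatched."""
    n, m = C.shape
    # Row-sum budgets: row i of P transports at most a[i].
    A_rows = np.kron(np.eye(n), np.ones((1, m)))
    # Column-sum budgets: column j of P receives at most b[j].
    A_cols = np.kron(np.ones((1, n)), np.eye(m))
    A_ub = np.vstack([A_rows, A_cols])
    b_ub = np.concatenate([a, b])
    # The "partial" relaxation: only the total transported mass is fixed.
    A_eq = np.ones((1, n * m))
    b_eq = np.array([mass])
    res = linprog(C.ravel(), A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=(0, None), method="highs")
    return res.x.reshape(n, m), res.fun


# Toy demo: two "networks" sharing 4 units; each also has extra noise units.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))                               # stimuli x units, network A
Y = np.hstack([X[:, :4] + 0.1 * rng.normal(size=(500, 4)),  # 4 corresponding units
               rng.normal(size=(500, 2))])                  # 2 pure-noise units
corr = np.corrcoef(X.T, Y.T)[:6, 6:]                        # cross-network unit correlations
C = 1.0 - np.abs(corr)                                      # matching cost per unit pair
a = b = np.full(6, 1 / 6)                                   # uniform mass per unit
P, cost = partial_ot_plan(C, a, b, mass=4 / 6)              # match only 4/6 of the mass
print(np.round(P * 6, 2))                                   # noise units get ~zero mass
```

On this toy example the transport plan concentrates on the four genuinely corresponding unit pairs and leaves the injected noise units with near-zero transported mass, which is the qualitative behaviour the abstract attributes to the partial soft-matching distance: outlier units are excluded rather than forced into a match.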
