[2510.15520] Discovering Intersectional Bias via Directional Alignment in Face Recognition Embeddings
Computer Science > Computer Vision and Pattern Recognition

arXiv:2510.15520 (cs)

[Submitted on 17 Oct 2025 (v1), last revised 20 Mar 2026 (this version, v2)]

Title: Discovering Intersectional Bias via Directional Alignment in Face Recognition Embeddings

Authors: Ignacio Serna

Abstract: Modern face recognition models embed identities on a unit hypersphere, where identity variation forms tight clusters. In contrast, shared semantic attributes can often be well approximated as linear directions in the latent space. Existing bias evaluation methods rely on predefined attribute labels, synthetic counterfactuals, or proximity-based clustering, all of which fail to capture intersectional subpopulations that emerge along latent directions. We introduce LatentAlign, an attribute-free algorithm that discovers semantically coherent and interpretable subpopulations by iteratively aligning embeddings along dominant latent directions. Unlike distance-based clustering, LatentAlign exploits the geometry of hyperspherical embeddings to isolate directional structures shared across identities, allowing for the interpretable discovery of attributes. Across four state-of-the-art recognition backbones (ArcFace, CosFace, ElasticFace, PartialFC) and two benchmarks (RFW, CelebA), LatentAlign consistently yields more semantically c...
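The abstract describes subpopulation discovery by aligning unit-sphere embeddings with dominant latent directions. A minimal sketch of that idea, not the paper's actual LatentAlign procedure: use the top principal direction of normalized embeddings as a hypothetical "dominant latent direction," then select the embeddings whose cosine alignment with it exceeds a threshold. The function names, the PCA proxy, and the threshold value are all illustrative assumptions.

```python
import numpy as np

def dominant_direction(embeddings: np.ndarray) -> np.ndarray:
    """Top principal direction of unit-normalized embeddings.

    A hypothetical stand-in for a 'dominant latent direction';
    the paper's actual procedure may differ.
    """
    X = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    X = X - X.mean(axis=0)                      # center before PCA
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return vt[0]                                # unit vector along top variance axis

def align_subpopulation(embeddings: np.ndarray, direction: np.ndarray,
                        threshold: float = 0.3) -> np.ndarray:
    """Indices of embeddings whose cosine alignment with `direction`
    exceeds `threshold` (illustrative selection rule)."""
    X = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    scores = X @ direction                      # cosine similarity on the sphere
    return np.where(scores > threshold)[0]

# Toy example: two synthetic groups separated along one axis.
rng = np.random.default_rng(0)
group_a = rng.normal(loc=[+2.0, 0.0, 0.0], scale=0.1, size=(50, 3))
group_b = rng.normal(loc=[-2.0, 0.0, 0.0], scale=0.1, size=(50, 3))
emb = np.vstack([group_a, group_b])

d = dominant_direction(emb)
subpop = align_subpopulation(emb, d)
print(len(subpop))  # exactly one of the two 50-point groups aligns with d
```

An iterative variant, as the abstract suggests, would remove or deflate the discovered direction and repeat, surfacing successive directional structures rather than proximity-based clusters.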