[2603.03328] StructLens: A Structural Lens for Language Models via Maximum Spanning Trees
Computer Science > Computation and Language arXiv:2603.03328 (cs) [Submitted on 10 Feb 2026]
Title: StructLens: A Structural Lens for Language Models via Maximum Spanning Trees
Authors: Haruki Sakajo, Frederikus Hudi, Yusuke Sakai, Hidetaka Kamigaito, Taro Watanabe
Abstract: Language exhibits inherent structures, a property that explains both language acquisition and language change. Given this characteristic, we expect language models to manifest internal structures as well. While interpretability research has investigated the components of language models, existing approaches focus on local inter-token relationships within layers or modules (e.g., Multi-Head Attention), leaving global inter-layer relationships largely overlooked. To address this gap, we introduce StructLens, an analytical framework designed to reveal how internal structures relate holistically through their inter-token connections within each layer. StructLens constructs maximum spanning trees based on the semantic representations in residual streams, analogous to dependency parsing, and leverages the tree properties to quantify inter-layer distance (or similarity) from a structural perspective. Our findings demonstrate that StructLens yields an inter-layer similarity pattern that is distinctly different from conventional cosine similarity. Moreover, this ...
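The core idea described in the abstract can be sketched in a few lines: for each layer, build a maximum spanning tree over the cosine similarities between token representations in the residual stream, then compare the resulting trees across layers. The sketch below is an illustrative reconstruction, not the authors' implementation; the similarity measure (fraction of shared edges) and the random toy representations are assumptions for the example.

```python
import numpy as np

def max_spanning_tree(sim):
    """Prim's algorithm for a maximum spanning tree over a dense
    token-by-token similarity matrix. Returns a list of (i, j) edges."""
    n = sim.shape[0]
    in_tree = {0}
    edges = []
    while len(in_tree) < n:
        # Pick the highest-similarity edge crossing the tree boundary.
        best = (-np.inf, None, None)
        for i in in_tree:
            for j in range(n):
                if j not in in_tree and sim[i, j] > best[0]:
                    best = (sim[i, j], i, j)
        _, i, j = best
        edges.append((i, j))
        in_tree.add(j)
    return edges

def tree_similarity(edges_a, edges_b):
    """One possible structural similarity between two layers' trees:
    the fraction of (undirected) edges they share. This metric is an
    assumption for illustration, not necessarily the paper's choice."""
    ea = {frozenset(e) for e in edges_a}
    eb = {frozenset(e) for e in edges_b}
    return len(ea & eb) / len(ea)

# Toy residual-stream states for 4 tokens at one layer (hypothetical data).
rng = np.random.default_rng(0)
h = rng.normal(size=(4, 8))
h /= np.linalg.norm(h, axis=1, keepdims=True)

sim = h @ h.T                     # cosine similarity between tokens
edges = max_spanning_tree(sim)    # a spanning tree over n tokens has n-1 edges
print(len(edges), tree_similarity(edges, edges))
```

In the framework described, a tree like this would be extracted at every layer, and pairwise tree similarities would form an inter-layer similarity matrix analogous to, but structurally distinct from, a cosine-similarity heatmap over mean residual states.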