[2603.22665] Improving LLM Predictions via Inter-Layer Structural Encoders
Computer Science > Computation and Language

arXiv:2603.22665 (cs) [Submitted on 24 Mar 2026]

Title: Improving LLM Predictions via Inter-Layer Structural Encoders
Authors: Tom Ulanovski (1), Eyal Blyachman (1), Maya Bechler-Speicher (2) ((1) Tel Aviv University, (2) Meta)

Abstract: The standard practice in Large Language Models (LLMs) is to base predictions on the final-layer token representations. Recent studies, however, show that intermediate layers encode substantial information and may contain more task-relevant features than the final-layer representations alone. Importantly, different layers have been shown to be optimal for different tasks. In this work, we introduce Inter-Layer Structural Encoders (ILSE), a structural approach that learns a single effective representation jointly from all of the LLM's internal layer representations. Central to ILSE is the Cayley-Encoder, a mathematically grounded geometric encoder that leverages expander Cayley graphs for efficient inter-layer information propagation. We evaluate ILSE across 13 classification and semantic similarity tasks with 9 pre-trained LLMs ranging from 14 million to 8 billion parameters. ILSE consistently outperforms baselines and existing approaches, achieving improvements of up to 44% in accuracy and 25% in similarity metrics. We further show that ILSE...
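The core mechanism the abstract describes (propagating information between per-layer token representations along a Cayley graph whose nodes are the layers) can be sketched in a few lines. This is a hedged illustration, not the paper's implementation: the choice of group (cyclic Z_n, which gives a circulant graph rather than a true expander family), the generator set, the row-normalized mean aggregation, and the final mean pooling are all assumptions made for the example.

```python
import numpy as np

def cayley_graph_cyclic(n, generators):
    """Adjacency matrix of the Cayley graph of Z_n with a symmetrized
    generator set. A toy stand-in for the expander Cayley graphs the
    Cayley-Encoder uses (assumption for illustration)."""
    A = np.zeros((n, n))
    for i in range(n):
        for g in generators:
            A[i, (i + g) % n] = 1.0  # edge i -> i+g
            A[i, (i - g) % n] = 1.0  # symmetrize: edge i -> i-g
    return A

def propagate(layer_reprs, A, steps=2):
    """Mix one token's per-layer representations (n_layers x d) by
    repeated message passing over the layer graph, then pool."""
    deg = A.sum(axis=1, keepdims=True)
    P = A / deg                   # row-stochastic propagation matrix
    H = layer_reprs
    for _ in range(steps):
        H = P @ H                 # each layer averages its graph neighbors
    return H.mean(axis=0)         # pool to a single representation

rng = np.random.default_rng(0)
n_layers, d = 12, 16
H = rng.normal(size=(n_layers, d))  # stand-in hidden states, one row per layer
A = cayley_graph_cyclic(n_layers, generators=[1, 5])
z = propagate(H, A)
print(z.shape)  # (16,)
```

In this sketch the propagation matrix is fixed; in a learned encoder one would interleave the graph propagation with trainable per-layer transformations before pooling.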