[2603.21014] CLT-Forge: A Scalable Library for Cross-Layer Transcoders and Attribution Graphs
Computer Science > Machine Learning

arXiv:2603.21014 (cs) [Submitted on 22 Mar 2026]

Title: CLT-Forge: A Scalable Library for Cross-Layer Transcoders and Attribution Graphs

Authors: Florent Draye, Abir Harrasse, Vedant Palit, Tung-Yu Wu, Jiarui Liu, Punya Syon Pandey, Roderick Wu, Terry Jingchen Zhang, Zhijing Jin, Bernhard Schölkopf

Abstract: Mechanistic interpretability seeks to understand how Large Language Models (LLMs) represent and process information. Recent approaches based on dictionary learning and transcoders enable representing model computation in terms of sparse, interpretable features and their interactions, giving rise to feature attribution graphs. However, these graphs are often large and redundant, limiting their interpretability in practice. Cross-Layer Transcoders (CLTs) address this issue by sharing features across layers while preserving layer-specific decoding, yielding more compact representations, but they remain difficult to train and analyze at scale. We introduce an open-source library for end-to-end training and interpretability of CLTs. Our framework integrates scalable distributed training with model sharding and compressed activation caching, a unified automated interpretability pipeline for feature analysis and explanation, attribution graph computation using Circuit-Trac...
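As a rough illustration of the CLT structure the abstract describes (per-layer sparse encoders whose features are decoded into every downstream layer by layer-specific decoder matrices), the following is a minimal NumPy sketch. It is not the CLT-Forge API; all class and parameter names here are hypothetical, and the sketch omits training, sharding, and activation caching entirely.

```python
import numpy as np


class CrossLayerTranscoder:
    """Minimal sketch of a cross-layer transcoder (hypothetical, not CLT-Forge's API).

    Each layer l has an encoder producing sparse (ReLU) features; features
    encoded at layer src are decoded into the MLP output of every layer
    dst >= src through a layer-specific decoder, so a single feature can
    account for computation across several downstream layers.
    """

    def __init__(self, n_layers, d_model, d_feat, seed=0):
        rng = np.random.default_rng(seed)
        self.n_layers = n_layers
        # One encoder per layer: maps the residual stream (d_model) to features (d_feat).
        self.W_enc = [rng.normal(0, 0.02, (d_feat, d_model)) for _ in range(n_layers)]
        # Layer-specific decoders: W_dec[src][dst] maps features encoded at
        # layer src into the reconstructed MLP output of layer dst.
        self.W_dec = [
            [rng.normal(0, 0.02, (d_model, d_feat)) for _ in range(n_layers)]
            for _ in range(n_layers)
        ]

    def forward(self, residual_streams):
        """residual_streams: one (d_model,) vector per layer.

        Returns one reconstructed (d_model,) MLP output per layer.
        """
        # Sparse features via ReLU, one feature vector per layer.
        feats = [np.maximum(0.0, W @ x) for W, x in zip(self.W_enc, residual_streams)]
        outputs = []
        for dst in range(self.n_layers):
            y = np.zeros(residual_streams[dst].shape)
            # Features only flow forward: layers 0..dst contribute to layer dst.
            for src in range(dst + 1):
                y += self.W_dec[src][dst] @ feats[src]
            outputs.append(y)
        return outputs


clt = CrossLayerTranscoder(n_layers=3, d_model=8, d_feat=16)
outs = clt.forward([np.ones(8) for _ in range(3)])
```

Because features are shared across layers while each (source, destination) pair keeps its own decoder, the same feature dictionary can explain multiple layers' outputs, which is what makes the resulting attribution graphs more compact than per-layer transcoder graphs.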