[2505.12509] Revitalizing Black-Box Interpretability: Actionable Interpretability for LLMs via Proxy Models
Computer Science > Machine Learning
arXiv:2505.12509 (cs)
[Submitted on 18 May 2025 (v1), last revised 10 Apr 2026 (this version, v3)]

Title: Revitalizing Black-Box Interpretability: Actionable Interpretability for LLMs via Proxy Models
Authors: Junhao Liu, Haonan Yu, Zhenyu Yan, Xin Zhang

Abstract: Post-hoc explanations provide transparency and are essential for guiding model optimization, such as prompt engineering and data sanitation. However, applying model-agnostic techniques to Large Language Models (LLMs) is hindered by prohibitive computational costs, rendering these tools dormant for real-world applications. To revitalize model-agnostic interpretability, we propose a budget-friendly proxy framework that leverages efficient models to approximate the decision boundaries of expensive LLMs. We introduce a screen-and-apply mechanism to statistically verify local alignment before deployment. Our empirical evaluation confirms that proxy explanations achieve over 90% fidelity with only 11% of the oracle's cost. Building on this foundation, we demonstrate the actionable utility of our framework in prompt compression and poisoned example removal. Results show that reliable proxy explanations effectively guide optimization, transforming interpretability from a passive observation tool into a...
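To make the proxy idea concrete, the sketch below illustrates one way a "screen-and-apply" workflow could look: label a small sample with the expensive LLM, fit a cheap proxy on part of it, screen the proxy's agreement with the oracle on the held-out part, and only use proxy-based local explanations if agreement clears a threshold. This is a minimal illustration, not the authors' implementation; the oracle stub `llm_label`, the TF-IDF/logistic-regression proxy, and the 0.9 agreement threshold are all assumptions made for the example.

```python
# Minimal sketch of a screened proxy for black-box interpretability.
# All concrete choices here (features, proxy class, threshold) are assumptions.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split


def llm_label(text: str) -> int:
    """Hypothetical stand-in for the expensive LLM oracle
    (e.g., an API call returning a binary task decision)."""
    raise NotImplementedError("replace with a real LLM call")


def build_screened_proxy(texts, agreement_threshold=0.9, seed=0):
    """Fit a cheap proxy on oracle labels, then 'screen' it: return the proxy
    only if its agreement with the LLM on held-out inputs is high enough."""
    labels = np.array([llm_label(t) for t in texts])        # paid oracle calls
    fit_x, screen_x, fit_y, screen_y = train_test_split(
        texts, labels, test_size=0.3, random_state=seed)

    vectorizer = TfidfVectorizer()
    proxy = LogisticRegression(max_iter=1000)
    proxy.fit(vectorizer.fit_transform(fit_x), fit_y)

    # Screening step: estimate alignment between proxy and oracle decisions.
    agreement = (proxy.predict(vectorizer.transform(screen_x)) == screen_y).mean()
    if agreement < agreement_threshold:
        return None, agreement                               # do not deploy proxy
    return (vectorizer, proxy), agreement                    # apply step


def explain_with_proxy(vectorizer, proxy, text, top_k=5):
    """Cheap local explanation: rank the input's tokens by their contribution
    to the proxy's decision (coefficient * tf-idf weight)."""
    row = vectorizer.transform([text]).toarray()[0]
    vocab = np.array(vectorizer.get_feature_names_out())
    contrib = row * proxy.coef_[0]
    top = np.argsort(-np.abs(contrib))[:top_k]
    return list(zip(vocab[top], contrib[top]))
```

In this toy setup, the per-input explanation costs only a proxy forward pass rather than repeated LLM queries; the screening step is what gives the explanation its claim to fidelity before it is used for downstream actions such as prompt compression or filtering suspect training examples.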