[2505.00624] FineScope: SAE-guided Data Selection Enables Domain Specific LLM Pruning and Finetuning
Computer Science > Computation and Language
arXiv:2505.00624 (cs)
[Submitted on 1 May 2025 (v1), last revised 27 Feb 2026 (this version, v3)]

Title: FineScope: SAE-guided Data Selection Enables Domain Specific LLM Pruning and Finetuning
Authors: Chaitali Bhattacharyya, Hyunsei Lee, Junyoung Lee, Shinhyoung Jang, Il hong Suh, Yeseong Kim

Abstract: Training large language models (LLMs) from scratch requires significant computational resources, driving interest in developing smaller, domain-specific LLMs that maintain both efficiency and strong task performance. Medium-sized models such as LLaMA have served as starting points for domain-specific adaptation, but they often suffer from accuracy degradation when tested on specialized datasets. We introduce FineScope, a framework for deriving compact, domain-optimized LLMs from larger pretrained models. FineScope leverages the Sparse Autoencoder (SAE) framework, inspired by its ability to produce interpretable feature representations, to extract domain-specific subsets from large datasets. We apply structured pruning with domain-specific constraints, ensuring that the resulting pruned models retain essential knowledge for the target domain. To further enhance performa...
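The abstract describes using SAE feature activations to pick domain-relevant training samples. A minimal, purely illustrative sketch of that idea (not the paper's implementation; the encoder weights, feature indices, and scoring rule here are all hypothetical stand-ins) might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy SAE-style encoder: maps model hidden states to a wider,
# ReLU-sparse feature space (weights are random placeholders).
d_model, d_feat = 8, 16
W_enc = rng.normal(size=(d_model, d_feat))
b_enc = np.zeros(d_feat)

def sae_features(h):
    """ReLU-encoded sparse features for a batch of hidden states."""
    return np.maximum(h @ W_enc + b_enc, 0.0)

# Suppose features 0-3 were flagged as domain-relevant (e.g. from a
# small seed dataset); score each sample by its activation mass there.
domain_idx = [0, 1, 2, 3]

def domain_score(h):
    f = sae_features(h)
    total = f.sum(axis=1) + 1e-9          # avoid division by zero
    return f[:, domain_idx].sum(axis=1) / total

hidden = rng.normal(size=(100, d_model))  # stand-in for LLM activations
scores = domain_score(hidden)
selected = np.argsort(scores)[-10:]       # keep the top-10 samples
```

The selected subset would then feed the domain-constrained pruning and finetuning stages the abstract outlines.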