[2602.15021] Generalization from Low- to Moderate-Resolution Spectra with Neural Networks for Stellar Parameter Estimation: A Case Study with DESI
Summary
This article explores the use of neural networks for stellar parameter estimation, focusing on transferring models trained on low-resolution spectra to moderate-resolution spectra, with DESI as a case study.
Why It Matters
Understanding how to generalize stellar spectral analysis across different resolutions is crucial for astrophysics. This research demonstrates the effectiveness of neural networks in improving parameter estimation, which can enhance our understanding of stellar populations and their chemical compositions.
Key Takeaways
- Neural networks can effectively generalize from low- to moderate-resolution spectra.
- Pre-trained multilayer perceptrons (MLPs) show strong performance without extensive fine-tuning.
- Fine-tuning strategies vary in effectiveness depending on the specific stellar parameter being estimated.
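One of the strategies the takeaways refer to, a residual-head adapter, can be sketched in a few lines: the pre-trained backbone is frozen and only a small additive head is trained on the target survey. The dimensions and weight shapes below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a 1000-pixel spectrum -> 64-d hidden layer -> 3 labels
# (e.g. Teff, logg, [Fe/H]); purely illustrative, not the paper's setup.
n_pix, n_hidden, n_labels = 1000, 64, 3

# "Pre-trained" MLP weights, frozen during transfer to the new survey.
W1 = rng.normal(0, 0.05, (n_pix, n_hidden))
W2 = rng.normal(0, 0.05, (n_hidden, n_labels))

def backbone(x):
    """Frozen pre-trained MLP: one hidden layer with ReLU."""
    return np.maximum(x @ W1, 0.0) @ W2

# Residual-head adapter: a small trainable head whose output is ADDED to
# the frozen prediction. Zero-init means the adapter starts as a no-op.
W_res = np.zeros((n_hidden, n_labels))

def adapted(x):
    h = np.maximum(x @ W1, 0.0)
    return h @ W2 + h @ W_res  # frozen head + trainable residual head

x = rng.normal(size=(5, n_pix))  # five mock spectra
print(np.allclose(adapted(x), backbone(x)))  # True before any fine-tuning
```

During fine-tuning only `W_res` would receive gradients, so the adapter can specialize to the target resolution without disturbing what the backbone learned from the source survey.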
Astrophysics > Solar and Stellar Astrophysics
arXiv:2602.15021 (astro-ph)
[Submitted on 16 Feb 2026]
Title: Generalization from Low- to Moderate-Resolution Spectra with Neural Networks for Stellar Parameter Estimation: A Case Study with DESI
Authors: Xiaosheng Zhao, Yuan-Sen Ting, Rosemary F.G. Wyse, Alexander S. Szalay, Yang Huang, László Dobos, Tamás Budavári, Viska Wei
Abstract: Cross-survey generalization is a critical challenge in stellar spectral analysis, particularly in cases such as transferring from low- to moderate-resolution surveys. We investigate this problem using pre-trained models, focusing on simple neural networks such as multilayer perceptrons (MLPs), with a case study transferring from LAMOST low-resolution spectra (LRS) to DESI medium-resolution spectra (MRS). Specifically, we pre-train MLPs on either LRS or their embeddings and fine-tune them for application to DESI stellar spectra. We compare MLPs trained directly on spectra with those trained on embeddings derived from transformer-based models (self-supervised foundation models pre-trained for multiple downstream tasks). We also evaluate different fine-tuning strategies, including residual-head adapters, LoRA, and full fine-tuning. We find that MLPs pre-trained on LAMOST L...
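The abstract also lists LoRA among the fine-tuning strategies compared. The core idea, which can be sketched independently of the paper's actual architecture, is to freeze a pre-trained weight matrix W and learn only a low-rank update (alpha/r) * B A, drastically cutting the number of trainable parameters. All shapes and the scaling factor below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
d_in, d_out, r = 1000, 64, 4          # low rank r << min(d_in, d_out)

W = rng.normal(0, 0.05, (d_in, d_out))  # frozen pre-trained weight
A = rng.normal(0, 0.01, (r, d_in))      # trainable down-projection
B = np.zeros((d_out, r))                # trainable up-projection, zero-init
alpha = 8.0                             # LoRA scaling hyperparameter

def lora_forward(x):
    # Equivalent to x @ (W + (alpha/r) * (B @ A).T), computed without
    # ever materializing the full updated weight matrix.
    return x @ W + (alpha / r) * (x @ A.T) @ B.T

x = rng.normal(size=(2, d_in))
print(np.allclose(lora_forward(x), x @ W))  # True: B = 0, so no change at init

# Trainable-parameter savings relative to full fine-tuning of W:
print(A.size + B.size, "vs", W.size)  # 4256 vs 64000
```

Zero-initializing `B` guarantees the adapted model starts exactly at the pre-trained solution, so fine-tuning can only move away from it as the target-survey data demand.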