[2602.22971] SPM-Bench: Benchmarking Large Language Models for Scanning Probe Microscopy
Summary
The paper presents SPM-Bench, a benchmark for evaluating large language models (LLMs) on scanning probe microscopy tasks. It addresses gaps in existing benchmarks and proposes a novel evaluation metric, the Strict Imperfection Penalty F1 (SIP-F1) score.
Why It Matters
SPM-Bench fills a critical gap in benchmarking LLMs for specialized scientific applications, enabling more rigorous evaluation of AI capabilities in complex physical scenarios. This advancement could improve research efficiency and accuracy in scientific fields that rely on scanning probe microscopy.
Key Takeaways
- SPM-Bench targets benchmarking LLMs specifically for scanning probe microscopy.
- Introduces a fully automated data synthesis pipeline to reduce costs and improve dataset quality.
- Presents the Strict Imperfection Penalty F1 (SIP-F1) score for evaluating LLM performance.
- Correlates model performance with reported confidence and perceived difficulty.
- Establishes a framework for automated scientific data synthesis applicable across various domains.
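The exact definition of SIP-F1 is not spelled out in this summary. Below is a minimal, illustrative sketch under the assumption that the "strict imperfection penalty" means any answer that is not exactly correct counts against both precision and recall (i.e., partial credit is disallowed); the function name and matching rule are hypothetical, not the authors' implementation:

```python
def sip_f1(predictions, gold):
    """Hypothetical strict-penalty F1 sketch.

    A prediction counts as a true positive only if it matches the gold
    answer exactly; any imperfect (partial or wrong) answer is penalized
    as both a false positive and a missed gold item.
    """
    tp = sum(1 for p, g in zip(predictions, gold) if p == g)
    fp = len(predictions) - tp  # every imperfect answer is a false positive
    fn = len(gold) - tp         # and simultaneously a missed gold item
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

Under this reading, a model that answers two of three questions exactly and one only partially scores 2/3 rather than receiving partial credit for the third.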
Computer Science > Artificial Intelligence
arXiv:2602.22971 (cs) [Submitted on 26 Feb 2026]
Title: SPM-Bench: Benchmarking Large Language Models for Scanning Probe Microscopy
Authors: Peiyao Xiao, Xiaogang Li, Chengliang Xu, Jiayi Wang, Ben Wang, Zichao Chen, Zeyu Wang, Kejun Yu, Yueqian Chen, Xulin Liu, Wende Xiao, Bing Zhao, Hu Wei
Abstract: As LLMs achieve breakthroughs in general reasoning, existing benchmarks reveal pronounced gaps in assessing their proficiency in specialized scientific domains, owing to data contamination, insufficient complexity, and prohibitive human labor costs. Here we present SPM-Bench, an original, PhD-level multimodal benchmark specifically designed for scanning probe microscopy (SPM). We propose a fully automated data synthesis pipeline that ensures both high authority and low cost. Employing Anchor-Gated Sieve (AGS) technology, we efficiently extract high-value image-text pairs from arXiv and journal papers published between 2023 and 2025. Through a hybrid cloud-local architecture in which VLMs return only spatial coordinates ("llbox") for local high-fidelity cropping, our pipeline achieves extreme token savings while maintaining high dataset purity. To evaluate the performance of LLMs accurately and objectively, we introduce the Strict Imperfection Penalty F1 (SIP-F1) score. This m...
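The hybrid cloud-local idea in the abstract, where the remote VLM returns only spatial coordinates and the high-fidelity crop happens locally, can be sketched as follows. This is an assumption-laden illustration (the coordinate convention, normalization to [0, 1], and function names are hypothetical); only the small coordinate tuple crosses the network, while the full-resolution pixels stay on the local machine:

```python
def to_pixel_box(bbox, size):
    """Convert a normalized bounding box (x0, y0, x1, y1 in [0, 1]),
    as might be returned by a cloud VLM, into integer pixel coordinates
    for the local full-resolution image of the given (width, height).

    The resulting 4-tuple can be passed directly to a local cropping
    routine, e.g. Pillow's Image.crop(box), so the expensive pixels
    never leave the machine and the VLM call costs only a few tokens.
    """
    x0, y0, x1, y1 = bbox
    w, h = size
    return (round(x0 * w), round(y0 * h), round(x1 * w), round(y1 * h))
```

For example, a VLM response of `(0.25, 0.1, 0.75, 0.9)` against a 400x200 figure maps to the pixel box `(100, 20, 300, 180)`, which is then cropped locally at full fidelity.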