[2508.03716] FeynTune: Large Language Models for High-Energy Theory
Computer Science > Computation and Language
arXiv:2508.03716 (cs)
[Submitted on 24 Jul 2025 (v1), last revised 27 Feb 2026 (this version, v2)]

Title: FeynTune: Large Language Models for High-Energy Theory
Authors: Paul Richmond, Prarit Agarwal, Borun Chowdhury, Vasilis Niarchos, Constantinos Papageorgakis

Abstract: We present specialized Large Language Models for theoretical High-Energy Physics, obtained as 20 fine-tuned variants of the 8-billion-parameter Llama-3.1 model. Each variant was trained on arXiv abstracts (through August 2024) drawn from different combinations of the hep-th, hep-ph and gr-qc categories. For a comparative study, we also trained models on datasets containing abstracts from disparate fields such as the q-bio and cs categories. All models were fine-tuned using two distinct Low-Rank Adaptation (LoRA) approaches and varying dataset sizes, and all outperformed the base model on hep-th abstract-completion tasks. We compare performance against leading commercial LLMs (ChatGPT, Claude, Gemini, DeepSeek) and derive insights for further developing specialized language models for High-Energy Theoretical Physics.

Subjects: Computation and Language (cs.CL); Machine Learning (cs.LG); High Energy Physics - Theory (hep-th)
Cite as: arXiv:2508.03716 [cs.CL] (or arXiv:2508.03716v2 [cs.CL] for this version) https://doi.o...
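The abstract describes LoRA fine-tuning of Llama-3.1-8B on arXiv abstracts for a completion task. Below is a minimal sketch of that kind of setup using the Hugging Face transformers and peft libraries; the LoRA hyperparameters (rank, alpha, target modules), the dataset file name, and all training arguments are illustrative assumptions, not the authors' actual configuration.

```python
# Sketch: LoRA fine-tuning of an 8B causal LM on a file of abstracts.
# Assumes transformers, peft, and datasets are installed; all
# hyperparameters below are placeholders, not the paper's settings.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

base = "meta-llama/Llama-3.1-8B"  # 8B base model named in the abstract
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# Low-Rank Adaptation: freeze the base weights and train small rank-r
# adapter matrices on selected projection layers (targets are assumed).
lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                  task_type="CAUSAL_LM",
                  target_modules=["q_proj", "v_proj"])
model = get_peft_model(model, lora)

# Hypothetical dataset: one hep-th abstract per line in a text file.
ds = load_dataset("text", data_files={"train": "hep_th_abstracts.txt"})
ds = ds.map(lambda ex: tokenizer(ex["text"], truncation=True,
                                 max_length=512),
            batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="feyntune-lora",
                           num_train_epochs=1,
                           per_device_train_batch_size=1,
                           gradient_accumulation_steps=8),
    train_dataset=ds["train"],
    # mlm=False gives standard next-token (causal) LM labels,
    # matching the abstract-completion objective.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Because only the low-rank adapters receive gradients, the trainable parameter count is a small fraction of the full 8B weights, which is what makes training 20 separate variants on different category mixtures practical.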