[2603.22017] AdditiveLLM2: A Multi-modal Large Language Model for Additive Manufacturing
Computer Science > Machine Learning
arXiv:2603.22017 (cs) [Submitted on 23 Mar 2026]

Title: AdditiveLLM2: A Multi-modal Large Language Model for Additive Manufacturing
Authors: Peter Pak, Amir Barati Farimani

Abstract: This work presents AdditiveLLM2, a multi-modal, domain-adapted large language model built upon the instruction-tuned variant of the Gemma 3 model using a relatively small dataset of around 50 million tokens. The dataset (AdditiveLLM2-OA) consists of open-access additive manufacturing journal articles, with data extracted for the domain-adaptive pretraining and visual instruction tuning stages. The various stages of the developed model are evaluated with the Additive-Manufacturing-Benchmark, which consists of additive manufacturing domain-specific tasks compiled from published resources. AdditiveLLM2 exhibits proficiency in both language- and vision-based tasks, achieving accuracies upwards of 90% on general additive manufacturing knowledge. This domain-adaptive pretraining and instruction tuning strategy outlines an accessible method for specializing large language models to a domain such as additive manufacturing.

Subjects: Machine Learning (cs.LG)
Cite as: arXiv:2603.22017 [cs.LG] (or arXiv:2603.22017v1 [cs.LG] for this version)
DOI: https://doi.org/10.48550/arXiv.2603.22017
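To make the described strategy concrete, the sketch below shows what the domain-adaptive pretraining stage could look like in practice: continuing causal-language-model training of an instruction-tuned Gemma 3 checkpoint on a small additive-manufacturing corpus using Hugging Face Transformers. This is not the authors' code; the model ID, dataset file name, and hyperparameters are illustrative assumptions.

```python
# Minimal sketch of domain-adaptive pretraining (continued causal-LM training)
# on an assumed instruction-tuned Gemma 3 checkpoint. File names, model ID,
# and hyperparameters are hypothetical, not taken from the paper.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_id = "google/gemma-3-4b-it"  # assumed instruction-tuned Gemma 3 variant
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# ~50M-token corpus of open-access additive manufacturing text (hypothetical file).
corpus = load_dataset("text", data_files={"train": "additive_llm2_oa.txt"})["train"]

def tokenize(batch):
    # Tokenize raw article text into fixed-length causal-LM training examples.
    return tokenizer(batch["text"], truncation=True, max_length=2048)

tokenized = corpus.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="additivellm2-dapt",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=16,
        learning_rate=2e-5,
        num_train_epochs=1,
        bf16=True,
    ),
    train_dataset=tokenized,
    # mlm=False gives standard next-token (causal) language modeling labels.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

A subsequent visual instruction tuning stage would follow the same pattern but pair images extracted from the journal articles with instruction-response text, which is beyond the scope of this short sketch.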