[2605.07731] Benchmarking EngGPT2-16B-A3B against Comparable Italian and International Open-source LLMs
Computer Science > Computation and Language

arXiv:2605.07731 (cs)

[Submitted on 8 May 2026]

Authors: Andrea Sassella, Andrea Chizzola, Tommaso Bianchi, Luca Alessandrelli, Mark James Carman

Abstract: This report benchmarks the performance of ENGINEERING Ingegneria Informatica S.p.A.'s EngGPT2MoE-16B-A3B LLM, a 16B-parameter Mixture of Experts (MoE) model with 3B active parameters. Performance is investigated across a wide variety of representative benchmarks and compared against comparably sized open-source MoE and dense models. Against popular Italian models, namely FastwebMIIA-7B, Minerva-7B, Velvet-14B, and LLaMAntino-3-ANITA-8B, EngGPT2MoE-16B-A3B performs as well as or better on the international benchmarks ARC-Challenge, GSM8K, AIME24, AIME25, MMLU, and HumanEval (HE). It achieves the best performance in the longest-context setting (32k) of the RULER benchmark. On the Italian benchmark dataset ITALIC, the model performs as well as or better than all other models except Velvet-14B, which outperforms it. Compared with popular MoE models of comparable size, the new model reports higher values than DeepSeek-MoE-16B-Chat on all considered benchmarks. It has higher values than Moonlight-16B-A3B on HE,...