[2511.17561] LexInstructEval: Lexical Instruction Following Evaluation for Large Language Models
Computer Science > Computation and Language

arXiv:2511.17561 (cs)

[Submitted on 13 Nov 2025 (v1), last revised 23 Mar 2026 (this version, v2)]

Title: LexInstructEval: Lexical Instruction Following Evaluation for Large Language Models

Authors: Huimin Ren, Yan Liang, Baiqiao Su, Chaobo Sun, Hengtong Lu, Kaike Zhang, Chen Wei

Abstract: The ability of Large Language Models (LLMs) to precisely follow complex and fine-grained lexical instructions is a cornerstone of their utility and controllability. However, evaluating this capability remains a significant challenge. Current methods either rely on subjective and costly human evaluation or on automated LLM-as-a-judge systems, which suffer from inherent biases and unreliability. Existing programmatic benchmarks, while objective, often lack the expressiveness to test intricate, compositional constraints at a granular level. To address these limitations, we introduce LexInstructEval, a new benchmark and evaluation framework for fine-grained lexical instruction following. Our framework is built upon a formal, rule-based grammar that deconstructs complex instructions into a canonical <Procedure, Relation, Value> triplet. This grammar enables the systematic generation of a diverse dataset through a multi-stage, human-in-the-loop pipeline and facilitates objective …
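To make the canonical <Procedure, Relation, Value> triplet concrete, here is a minimal Python sketch of how such a grammar can support objective, programmatic verification. The procedure names (word_count, sentence_count, keyword_count), the Rule dataclass, and the relation table below are illustrative assumptions for exposition, not the paper's actual grammar or implementation.

```python
# Sketch of rule-based lexical verification via <Procedure, Relation, Value>
# triplets. All names here are hypothetical; the paper's grammar may differ.
import operator
import re
from dataclasses import dataclass

# Hypothetical procedures: each measures one lexical property of a response.
PROCEDURES = {
    "word_count": lambda text: len(text.split()),
    "sentence_count": lambda text: len(
        [s for s in re.split(r"[.!?]+", text) if s.strip()]
    ),
    "keyword_count": lambda text, kw: text.lower().count(kw.lower()),
}

# Relations compare the measured value against the target value.
RELATIONS = {
    "<": operator.lt, "<=": operator.le, "==": operator.eq,
    ">=": operator.ge, ">": operator.gt,
}

@dataclass(frozen=True)
class Rule:
    """One canonical <Procedure, Relation, Value> triplet."""
    procedure: str
    relation: str
    value: object
    args: tuple = ()  # extra procedure parameters, e.g. the keyword

def check(rule: Rule, response: str) -> bool:
    """Deterministically verify a single lexical constraint."""
    measured = PROCEDURES[rule.procedure](response, *rule.args)
    return RELATIONS[rule.relation](measured, rule.value)

# Compositional instruction: every triplet must hold simultaneously.
rules = [
    Rule("word_count", "<=", 50),
    Rule("sentence_count", ">=", 2),
    Rule("keyword_count", ">=", 1, args=("benchmark",)),
]
response = ("LexInstructEval is a benchmark. "
            "It verifies lexical constraints programmatically.")
print(all(check(r, response) for r in rules))  # True
```

Because each triplet is checked by deterministic code rather than a judge model, a verdict of this kind is reproducible and free of the biases the abstract attributes to LLM-as-a-judge evaluation; compositional instructions simply conjoin multiple triplets.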