[2603.11583] UtilityMax Prompting: A Formal Framework for Multi-Objective Large Language Model Optimization
Computer Science > Computation and Language

arXiv:2603.11583 (cs)

[Submitted on 12 Mar 2026 (v1), last revised 26 Mar 2026 (this version, v2)]

Title: UtilityMax Prompting: A Formal Framework for Multi-Objective Large Language Model Optimization

Authors: Ofir Marom

Abstract: The success of a Large Language Model (LLM) task depends heavily on its prompt. Most use-cases specify prompts using natural language, which is inherently ambiguous when multiple objectives must be simultaneously satisfied. In this paper we introduce UtilityMax Prompting, a framework that specifies tasks using formal mathematical language. We reconstruct the task as an influence diagram in which the LLM's answer is the sole decision variable. A utility function is defined over the conditional probability distributions within the diagram, and the LLM is instructed to find the answer that maximises expected utility. This constrains the LLM to reason explicitly about each component of the objective, directing its output toward a precise optimization target rather than a subjective natural language interpretation. We validate our approach on the MovieLens 1M dataset across three frontier models (Claude Sonnet 4.6, GPT-5.4, and Gemini 2.5 Pro), demonstrating consistent improvements in precision and Normalized Discounted Cumulative Gain (NDCG) o...
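The decision rule the abstract describes, choosing the single answer that maximises expected utility over an influence diagram, can be sketched in miniature. The following is an illustrative assumption, not the paper's actual construction: the candidate answers, the two chance nodes (`P_RELEVANT`, `P_NOVEL`), and the utility weights are all hypothetical placeholders for whatever objectives a real task would define.

```python
# Hypothetical sketch of expected-utility maximisation over a single
# decision variable, in the spirit of the framework described above.
# All names, probabilities, and weights are illustrative assumptions.

ANSWERS = ["recommend_A", "recommend_B"]  # candidate answers (the decision variable)

# Conditional distributions of two outcome variables given each answer,
# standing in for the influence diagram's chance nodes.
P_RELEVANT = {"recommend_A": 0.8, "recommend_B": 0.6}  # P(relevant | answer)
P_NOVEL    = {"recommend_A": 0.3, "recommend_B": 0.7}  # P(novel | answer)

def utility(p_relevant: float, p_novel: float) -> float:
    """Scalarised multi-objective utility over the conditional distributions."""
    return 0.7 * p_relevant + 0.3 * p_novel  # weights are assumptions

def expected_utility(answer: str) -> float:
    return utility(P_RELEVANT[answer], P_NOVEL[answer])

# The decision rule: pick the answer maximising expected utility.
best = max(ANSWERS, key=expected_utility)
print(best, round(expected_utility(best), 3))
```

The point of the formalism, as the abstract argues, is that the optimization target is explicit: each objective contributes a specific term to the utility, rather than being left to a subjective reading of a natural-language prompt.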